Who says we’re not the AI

[note: I think this would be better explained through a short story (a la Ted Chiang) but until then, this post will have to suffice]

[edit: the story now exists: The Big Leap - for max bang:buck, I recommend reading that instead of this]

Okay, let’s go down this rabbit hole together. For the record, I am a theist and this pretty much goes against everything I believe in. But it is an interesting thought experiment that I cannot disprove, and I will gladly explore its limits and consequences.

A Human-built AI

I keep hearing about how we should be afraid of AI, because some intelligence smarter than us could easily overpower us (here, here, and here for TED talks galore). If some entity could out-maneuver us, and concluded that humans are the greatest pestilence (not a far stretch), there would be nothing we could do to stop our extinction. This is a common apocalyptic plot, but one that is not entirely impossible over the next few centuries.

But, if we did make some super-intelligence (defined as any intelligence greater than ours - all intelligence is relative), and we could figure out how to harness it, we could solve computational problems that would otherwise have been impossible for our minds. And, as with many mathematical proofs, we could make use of the final solution without ever grasping the initial bit of brilliance on which it was built.

Notice the massive assumptions required for such an outcome. The second one is especially challenging - ‘harnessing’ the AI. We can break this down into two parts - reaping the benefits of some AI, and keeping it from destroying us. If it were really more intelligent than us, only clever maneuvering outside the realm of the AI could protect us. Presumably, one safe way forward would be to forever conceal our presence from the super-intelligence, and to prevent the actions of the AI from concretely influencing our reality. We could then use the fruits of this super-intelligence when most convenient, without threatening our own existence.

Humanity as an AI

What if all the thoughts, fears, and plans we are putting into an AI, have been applied to us? What if, in the same way that we want to build an AI that would supersede our species without threatening it, we were built ourselves? It is possible (though unlikely) that humanity is the AI built by some preceding, lesser intelligence (who themselves could be the AI built by their predecessors, etc.) - a hypothesis with intriguing implications.

Our own source code

I’ve been learning about Turing machines, and self-referential software. Basically, you can have source code that looks at its own source code. It’s a bit tricky, but as a simple example, you can write a program that prints out its own source code (called a quine).
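Here’s a minimal quine in Python (my choice of language - the idea works in any Turing-complete language). Run it, and it prints its own source, character for character:

```python
# A minimal Python quine: a program whose output is exactly its own
# source code. The trick is self-reference: s holds a template of the
# program, and s % s substitutes the string into itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The same self-referential trick is what lets a program, in principle, inspect or reproduce its own description - which is all the ‘access to our own source code’ analogy below needs.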

We as humans keep discovering more and more about ourselves, and how we work. In that sense, we have access to some of our source code. However, it’s very possible that some parts of our source code remain forever hidden from us (e.g. consciousness). What if the very strategy we use to harness our own AI is the strategy used to harness us? The beauty of this is that, even harnessed, we can transcend our own intelligence using the source code we are given, by building an AI that is more intelligent than us.

And if biology is our source code, then natural selection, sexual selection, and random mutation are the elements of that code that make each successive generation, in aggregate, a bit better than the previous one.
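To make that ‘source code’ metaphor concrete, here’s a toy sketch of selection plus random mutation nudging a population’s average fitness upward. Every detail (the fitness function, population size, mutation rate) is an illustrative assumption of mine, not a model of real biology:

```python
import random

def fitness(genome):
    # Hypothetical fitness: just the sum of the genome's values.
    return sum(genome)

def next_generation(population, mutation_rate=0.1):
    # Selection: keep only the fitter half of the population.
    half = len(population) // 2
    survivors = sorted(population, key=fitness, reverse=True)[:half]
    children = []
    for parent in survivors:
        for _ in range(2):
            # Random mutation: perturb each gene with small probability.
            child = [g + random.uniform(-1, 1) if random.random() < mutation_rate else g
                     for g in parent]
            children.append(child)
    return children

population = [[random.uniform(0, 1) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    population = next_generation(population)
# Average fitness tends to drift upward, generation over generation.
print(sum(fitness(g) for g in population) / len(population))
```

Nothing in the loop ‘knows’ where it is headed; unbiased mutation filtered through biased selection is enough to make each generation, in aggregate, a bit better.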

This would also make our universe a simulated environment (requiring its own source code), defined by the laws of nature that we continually and gradually discover. This means that we could hack not only our own source code, but also the source code of our environment (e.g. bending space-time, going faster than the speed of light). It would seem that both would be not only permitted, but expected of us.

Humanity as computational tech

This is a sample narrative that oversimplifies much of human history. It’s presumptuous, basic, and just one possible hypothesis. Read it to humor me, but don’t take it too seriously.

Human history can be described in many ways. A recurring theme, however, is the idea of progress and ‘civilization.’ For centuries, bright minds have built upon the work of their predecessors to better understand our universe, and the laws that govern it. Many minds have considered the light of civilization one of the few unshakeable sources of meaning in our world.

For the sake of this example, let us treat computation as the end goal here. We have built layers and layers of computational theory, from the Pythagoreans to modern computer science. And, to keep the example simple in our little story, let us treat the rest of the natural sciences as motivators for computation (it’s harder to justify math for math’s sake, but easier when it applies to physics, or market advantages).

If we are actually an AI ourselves, what if our actual purpose was this ‘progress’ through computational discovery?

There are two versions of this hypothesis. Option A: our predecessors are watching us, so that they can benefit from our enhanced computation. In that case, there would exist some mechanism of observation that would have to remain hidden from us. Otherwise, we would continue to pose an existential threat to our predecessors (amplified exponentially if we were one of multiple simulations, and we made multiple simulations ourselves, etc.). This seems unlikely, as it entails massive risk. Option B is that the problem is entirely recursive - it returns no answer until the base case is reached - and the base case would require an intelligence capable of transcending its shell without needing to discover the ‘observational’ tools described in option A.
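If the recursion metaphor in option B feels abstract, here’s a loose sketch of it in Python. The function name and the ‘transcendence’ return value are my illustrative assumptions; the point is only that no level of the chain gets an answer until the base case resolves:

```python
# Option B as literal recursion: each simulated civilization builds the
# next one and then waits. Nothing returns anywhere in the chain until
# the base case - an intelligence that transcends its shell without
# external observation tools - is reached, after which the result
# propagates back up through every caller.
def run_simulation(depth):
    if depth == 0:
        # Base case: transcends its shell directly.
        return "transcendence"
    # Intermediate levels can only build the next simulation and wait.
    return run_simulation(depth - 1)

print(run_simulation(5))  # no caller resolves until depth 0 does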

Reframing meaning

Peak progress always happens in times of stability, during the heights of civilizations, or in times of great need (warfare). This means that times of anarchy and aimlessness are the ultimate enemies of our purpose as an AI. Very few discoveries were made in Europe during the Middle Ages.

‘Meaning’ or ‘purpose’ has long been an elusive concept. We don’t really know what it is, but we know that it deserves our energy and attention. But without meaning, there is no reason to do anything. This means that meaning is what keeps us going… And without meaning, we cannot fulfill our role as an AI.

Then why is our progress so slow?

Surely, you say, looking at human history, our rate of progress has been too slow for computation to be our main purpose. But, I reply, it could have been even slower. We do not have to be the optimal solution, just a possible solution, in order to survive.

If we made an AI, why wouldn’t we run multiple parallel simulations? After all, super-intelligence could go in infinite directions, and any one of them could benefit us. It would actually make sense for us to have as many different running copies as possible. Think parallel universes. Maybe some of them progress faster at certain points, but it costs so little to keep ours running that it isn’t worth shutting us down, unless our program crashes.

This also explains why we haven’t driven ourselves to self-extinction. Maybe earlier versions of our AI have, and maybe our version will, too. But our continued existence is only evidence of our continued existence, not of our future.

Implications (aka, how to build a better AI)

Of course, this would have many interesting consequences. For one, even if constructed, we do still have purpose, one we will never be able to ignore, even if we understand that we are just one stage in an endless recursive solution.

If anything, it puts forward one path towards human-built super-intelligence that does not threaten humanity, by building a simulated environment without humans. This seems safer than Asimov’s three laws, but it only works if the simulation is fully inescapable.

But I would assume that any previous stages may have also come upon these conclusions (it would be statistically unlikely for humanity to be the first generation to generate this thought). This means that we could have been designed by an intelligence that knew it was designed itself, and decided to make us the way we are anyway. And maybe we can use that knowledge to make an AI that doesn’t wipe us out. Or, we could go in the opposite direction and try to build one that breaks us out of our own realm to show us all the previous AIs, and, eventually, the very first intelligence at the heart and root of this chain of existences.

But if we chose to go down this path (back up through the recursion), we would need a super-intelligence anyway (unless we were the unlikely base case). And if we built a super-intelligence to achieve that, there is no guarantee that our recursive AIs wouldn’t try to discover us. Maybe that curiosity - the wonder of the realms beyond our own - is what really drives us to continue the recursion. And the great irony is that the base case could eventually destroy every one of its predecessors in order to finally find the origin. We would never get to peek beyond our realm, go beyond our shell, or meet our makers - though we would have helped create something that eventually could.

Appendix: Why this hypothesis is unlikely (though not impossible)

Historical context makes this story a statistically unlikely one.

First, I happen to be alive at a time when computers are prevalent, computer science is buzzing, and artificial intelligence is a distant possibility within the span of a few lifetimes. I didn’t come up with the idea of parallel universes, the thought that we are living in a simulation, or the fear of an AI that could destroy humanity. And all those thoughts are, until proven otherwise, a fluke in human history and thought.

Second, I also happen to exist within a culture that values ‘innovation’, ‘progress’, and ‘civilization,’ so positing computation as our inbred purpose is only moderately preposterous. Therefore, the entire hypothesis depends much more on cultural background than on solid and unshakeable reasoning (I expect and look forward to much of the logic actually being shaken away).

Finally, unless we are the base case, we would probably never be able to verify this hypothesis. And just because something can’t be disproven doesn’t mean it’s true.