What if it’s all hallucination?

I’ve been thinking a lot recently about something my wife said about AI. She’s finishing up her PhD in computer science, and knows more about generative AI and computational linguistics than just about anyone I know IRL (and most people I follow on the internet, too). So when she speaks on the subject, I do my best to listen.

Ever since OpenAI and ChatGPT took the world by storm, she’s been telling me that she doesn’t think the hallucination problem (where LLMs make stuff up) will ever be solved. Indeed, she doesn’t think it’s a “problem” in a technical sense at all, because every response from a generative AI is a hallucination—and that’s kind of the point. These aren’t really thinking machines, they’re hallucinating machines, replicating patterns in human language and thought. To the machine, what difference does it make whether the answer is true or false?

We call it “artificial intelligence,” but that’s really a misnomer, because these machines have no “intelligence” at all—at least, not in the human sense. Instead, they are like mirrors of our own intelligence, parroting back things that sound like they involve real thought, when really it’s all just pattern replication. They aren’t trained to recognize truth, they’re trained to recognize patterns. So, in reality, everything an AI generates is a “hallucination.”
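
Her point clicked for me when I considered what these models are actually optimized for. Here is a minimal sketch in PyTorch (a toy model with made-up dimensions, nothing like a production LLM) of the objective they are trained on:

```python
import torch
import torch.nn.functional as F

# A toy next-token predictor: given each token, produce a score (logit)
# for every possible next token. Real LLMs are vastly larger, but the
# training signal has the same shape. (Dimensions here are made up.)
vocab_size, embed_dim = 100, 32
model = torch.nn.Sequential(
    torch.nn.Embedding(vocab_size, embed_dim),
    torch.nn.Linear(embed_dim, vocab_size),
)

# One "training example": a sequence of token ids standing in for a
# sentence. Nothing anywhere records whether that sentence is true.
tokens = torch.randint(0, vocab_size, (1, 16))
inputs, targets = tokens[:, :-1], tokens[:, 1:]

logits = model(inputs)  # shape: (1, 15, vocab_size)

# Cross-entropy against the next token in the corpus is the entire
# objective. The loss falls as the model matches the patterns of its
# training text; "factually correct" appears nowhere in the equation.
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()  # the model is nudged toward the pattern, true or not
```

Scale that toy up by many orders of magnitude and you get an LLM, but the target never changes: predict the next token, whether or not the text it came from happens to be true.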

This is why she thinks that we will never fully solve the hallucination “problem.” Indeed, the whole effort is a bit like trying to turn a lion into a vegan. And until we can train an AI on absolute truth—a thing that humanity has never been able to agree upon, much less reduce to zeroes and ones—then all we will really be able to do is create better and better plumage for our stochastic parrots.

What are the implications of this? First of all, we can safely ignore the worst of the AI doom porn, because a machine that fundamentally cannot tell truth from falsehood is probably not capable of taking over the world and exterminating or enslaving humanity, even if it does qualify as a “general” intelligence.

We can also lay aside the fear (or the pipe-dream) that AI will 100% replace humans in all or most or really any fields. Even if they can do 90% of the work, recognizing truth is still an essential part of just about everything we as humans do. We can give them jobs and tasks—perhaps even some genuinely complex ones—but so long as these machines cannot fundamentally distinguish truth from falsehood, we will still need a human to oversee them.

That doesn’t mean that most humans are safe from being replaced by AI, though. If one AI-augmented person can do the work of 10 or 100 unaugmented workers, we’re still going to face a massive disruption in the labor market and society as a whole. The question, then, is one of ownership and distribution. Who owns the AI? How do we distribute the productivity gains from AI? These are some of the difficult problems we need to solve in the next few years.

But the real problem—and the scariest implication of all of this—is the question of truth itself. After all, if AI is fundamentally incapable of recognizing truth, and all AI output is hallucination on some level, then who determines what is true and what is not? Sam Altman? OpenAI? Congress? Some three-letter government agency?

I think this is going to be the defining question of the rising generation, which is growing up in an AI-native world. What is truth? How can we recognize it? How do we distinguish between what is true and what is false? Increasingly, we are going to find that these are questions that AI cannot answer. And in a world saturated by deep fakes, bots, and sock puppets, where the internet is dead and all the most powerful players are constantly fighting a fifth-generation war with each other, truth will be the thing we are all starving for.

The tragedy of the millennial generation is that everything in our world conspired to starve us of the three things we needed most. More than anything else, we hungered for meaning, authenticity, and redemption—and for the most part, we never got them. You can blame social media, the boomers, capitalism, student loan debt, the Republicans, the Democrats—it really makes no difference. All of those things and more came together to hobble our generation and make it almost impossible for us to launch.

Will the same thing happen with the zoomers and gen-alpha over the question of truth? It appears that things are moving in that direction. In a world saturated with AI, truth becomes a scarce and valuable commodity.

So what do we do? First, I think it’s important to recognize that AI cannot and never will be an authority on truth. At best, it only mirrors our own thoughts and ideas back to us—and at worst, it feeds us the thoughts and ideas of those who seek to control us. But AI itself is neutral, just like a gun or a knife lying on a table is neutral. What matters is how it is used.

Beyond that, I don’t really know what to say. Only that this is something I need to think about a lot more. What are your thoughts?

Will super-intelligent AI take over the world?

I’ve been reading a lot of nonfiction books about AI recently. Basically, whenever a nonfiction audiobook that has anything to do with AI shows up in my library’s audiobook app, I jump on the waiting list and listen to it right away. I’ve also been following AI news podcasts and watching lots of YouTube channels that discuss the recent developments… and boy, is there a lot of doom porn out there.

People who are closely watching this stuff believe that AGI (Artificial General Intelligence) is imminent, i.e., within the next 6 to 72 months, and that when AGI gets mainstreamed, it will either usher in a golden age of post-scarcity, or the ultimate extinction of all mankind (or both, weirdly). The crux of their thesis is that once we achieve an AGI that can rewrite its own code, it will quickly turn into a superintelligence, and then it will either work to serve humanity or else work to eliminate humanity as a threat, either by exterminating us outright or by putting us in some kind of zoo.

This is all very science fictional stuff—but now more than ever, we are living in a science fictional world. So what is actually going to happen? Do I believe we are going to enter the singularity, and give birth to a new species of superintelligent AI that will ultimately replace us? Or, in the lingo of Silicon Valley, what is my P(doom)?

TL;DR: I have two P(doom) values, one of 0% and one of 90%. My P(doom) for basically all of the scenarios that involve a runaway superintelligence is 0%, but my P(doom) for massive catastrophic social upheaval due to the disruptive nature of AI technology is 90%.

For the better part of a century (basically ever since Turing’s work during WWII), the field of artificial intelligence has followed a cyclical pattern. First, researchers make some sort of breakthrough, which leads to rapid technological advancements and a brief AI boom. During this boom, futurists and technologists rave about how this technology will keep scaling up forever until it ushers in a sci-fi utopia/dystopia and utterly changes what it means to be human. Then, the technological development stalls as researchers run up against a hard barrier that makes further scaling impossible, at which point most investors sour on the technology and we fall into an “AI winter” for a decade or two.

The problem with the futurists and technologists who promote AI technology is that the vast majority of them are transhumanists who believe that intelligence is purely an emergent phenomenon, 100% materialistic in nature. In other words, they believe that the human mind is little more than an organic machine created through the process of evolution, and that all of our intelligence, emotions, spirituality, and experience can be explained and understood through purely material processes. Therefore, they reason, if they can build a machine that replicates the same biological processes as the human brain, and subject it to conditions similar to those that evolution subjected us to, intelligence will naturally emerge.

But what if they’re wrong? What if there are more things in heaven and in earth than are dreamed up in our modern philosophies? I’m not saying that evolution didn’t play a role in the creation/emergence of intelligence—only that it’s insufficient. And why wouldn’t it be? Science, by definition, can only explain what it can measure. And what about the questions that we can’t ask? The things about this universe that are as foreign to our own understanding as quantum physics is to a German Shepherd?

For these reasons, I do not think that these generative AI models are going to keep scaling upward until we achieve a general superintelligence. At some point in the next 0 to 18 months, I think that researchers and developers are going to start hitting hard limits we can’t explain, precisely because we understand so little about the human brain and how our own intelligence emerged or was created.

I am extremely skeptical of all of the doom porn floating around out there claiming that we are months away from achieving AGI, and that a superintelligence will shortly thereafter replace us as the dominant species on this planet. For one thing, the goalposts for AGI are constantly moving—by the standards of two or three decades ago, we have already achieved it—and for another, the transhumanists have turned this concept of AGI into a sort of Messianic savior / world-ending destroyer. And I just don’t buy into that religion.

So if I’m right, all of this doom porn about a world-ending superintelligence is utterly misguided. Which, on a certain level, is somewhat comforting. But on the other hand, that also means that we shouldn’t expect AI to save us—and that anyone who tries to tell us otherwise is ultimately trying to sell us something.

The big AI developers like OpenAI, Anthropic, etc. have every incentive to hype up the doom porn. It makes them look powerful, which in turn attracts investment capital. At the same time, they also have every incentive to promote the idea that a superintelligent AI can be our savior, since if AGI is inevitable, shouldn’t we put everything we have into making sure that our AI overlords are benevolent and have humanity’s interests at heart? But again, if we take that view, we also end up pumping lots of investment capital into these AI companies, turning them into massive cultural behemoths without really questioning their ultimate aims.

What if instead of building a superintelligent AI savior, we ultimately end up with a new form of techno-feudalism, powered by AI? What if a true superintelligence never emerges, and all of the energy and resources we’re pumping into AI are really just going to create a new class of elites, with the rest of us dependent on some sort of universal basic income and totally at the mercy of the owners, controllers, and operators of AI?

To me, this seems like a much more likely scenario—and from what I can tell, we are already in the opening phases of it. Generative AI has already become powerful enough that it will likely replace a large number of jobs or render them obsolete—which may or may not be a problem in the medium- to long-term, but will certainly be a problem in the short-term. As increasing numbers of people find themselves unemployed, it will put a tremendous strain on our welfare safety nets, and drive calls for increased government spending on social programs. But our governments are already so deep in debt that these pressures can only lead to some combination of (hyper)inflation, sovereign debt crisis, and austerity-driven political instability.

Some people think that the solution to all of this is a universal basic income (UBI). But every time a UBI has been tried, it has led to negative outcomes, including worse wealth outcomes. Unfortunately, if AI is truly going to be a huge driver of unemployment (which doesn’t require AGI or a superintelligence—our current models are already powerful enough to drive massive disruption in the labor market), then I don’t see how we can avoid a massive push toward UBI. Certainly not when our current investments in AI are so centralized—but again, all of the AGI doom porn is driving us to centralize things even more. So while all of the benefits of this new technology accrue to Sam Altman, Elon Musk, Dario Amodei, etc., and they keep holding out the promise of a messianic superintelligent AI that never truly emerges, the rest of us end up in a world where we have very little agency or control over our lives, with or without a UBI.

It doesn’t have to be this way. But if we all keep buying into the doom porn without looking critically at these AI companies and their transhumanist messianic promises, I think that this is the future we’re most likely going to get.