I’ve been thinking a lot recently about something my wife said about AI. She’s finishing up her PhD in computer science, and knows more about generative AI and computational linguistics than just about anyone I know IRL (and most people I follow on the internet, too). So when she speaks on the subject, I do my best to listen.
Ever since OpenAI and ChatGPT took the world by storm, she’s been telling me that she doesn’t think the hallucination problem (where LLMs make stuff up) will ever be solved. Indeed, she doesn’t think it’s a “problem” in a technical sense at all, because every response from a generative AI is a hallucination—and that’s kind of the point. These aren’t really thinking machines; they’re hallucinating machines, replicating patterns in human language and thought. To the model, what difference does it make whether the answer is true or false?
We call it “artificial intelligence,” but that’s really a misnomer, because these machines have no “intelligence” at all—at least, not in the human sense. Instead, they are like mirrors of our own intelligence, parroting back things that sound like they involve real thought, when really it’s all just pattern replication. They aren’t trained to recognize truth, they’re trained to recognize patterns. So, in reality, everything an AI generates is a “hallucination.”
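To make the “pattern replication” point concrete, here’s a toy sketch (my own illustration, not anything resembling a real LLM): a tiny bigram model that learns only which word tends to follow which. Real LLMs are vastly more sophisticated, but the basic nature of the objective is the same—predict plausible continuations. Notice that truth never enters the picture anywhere.

```python
import random
from collections import defaultdict

# "Train" on a toy corpus by recording which word follows which.
# The model learns patterns of word succession -- and nothing else.
corpus = "the moon is made of rock the moon is made of cheese".split()
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def generate(start, length=6):
    """Sample a continuation purely from observed word-to-word patterns."""
    words = [start]
    for _ in range(length - 1):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

This model is equally happy to emit “the moon is made of rock” or “the moon is made of cheese”—both are valid patterns in its training data, and it has no mechanism whatsoever for preferring the true one. That, in miniature, is why “hallucination” isn’t a bug bolted onto the side of generation; it *is* generation.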
This is why she thinks that we will never fully solve the hallucination “problem.” Indeed, the whole effort is a bit like trying to turn a lion into a vegan. And until we can train an AI on absolute truth—a thing that humanity has never been able to agree upon, much less reduce to zeroes and ones—then all we will really be able to do is create better and better plumage for our stochastic parrots.
What are the implications of this? First of all, we can safely ignore the worst of the AI doom porn, because a machine that fundamentally cannot distinguish truth from falsehood is probably not capable of taking over the world and exterminating or enslaving humanity, even if it does qualify as a “general” intelligence.
We can also lay aside the fear (or the pipe dream) that AI will fully replace humans in all, most, or really any fields. Even if it can do 90% of the work, recognizing truth is still an essential part of just about everything we humans do. We can give it jobs and tasks—perhaps even some genuinely complex ones—but so long as these machines cannot fundamentally distinguish between truth and falsehood, we will still need a human to oversee them.
That doesn’t mean that most humans are safe from being replaced by AI, though. If an AI-augmented person can accomplish the work of 10 or 100 unaugmented workers, we’re still going to face a massive disruption in the labor market and in society as a whole. The question, then, is one of ownership and distribution. Who owns the AI? How do we distribute the productivity gains from AI? These are some of the difficult problems we need to solve in the next few years.
But the real problem—and the scariest implication of all of this—is the question of truth itself. After all, if AI is fundamentally incapable of recognizing truth, and all AI output is hallucination on some level, then who determines what is true and what is not? Sam Altman? OpenAI? Congress? Some three-letter government agency?
I think this is going to be the defining question of the rising generation, which is growing up in an AI-native world. What is truth? How can we recognize it? How do we distinguish between what is true and what is false? Increasingly, we are going to find that these are questions AI cannot answer. And in a world saturated with deepfakes, bots, and sock puppets, where the internet is dead and all the most powerful players are constantly fighting fifth-generation wars with each other, truth will be the thing we are all starving for.
The tragedy of the millennial generation is that everything in our world conspired to starve us of the three things we needed most. More than anything else, we hungered for meaning, authenticity, and redemption—and for the most part, we never got it. You can blame social media, the boomers, capitalism, student loan debt, the Republicans, the Democrats—it really makes no difference. All of those things and more came together to hobble our generation and make it almost impossible for us to launch.
Will the same thing happen with the zoomers and gen-alpha over the question of truth? It appears that things are moving in that direction. In a world saturated with AI, truth becomes a scarce and valuable commodity.
So what do we do? First, I think it’s important to recognize that AI cannot and never will be an authority on truth. At best, it only mirrors our own thoughts and ideas back to us—and at worst, it feeds us the thoughts and ideas of those who seek to control us. But AI itself is neutral, just like a gun or a knife lying on a table is neutral. What matters is how it is used.
Beyond that, I don’t really know what to say. Only that this is something I need to think about a lot more. What are your thoughts?