The most realistic AI worst-case scenario

When it comes to AI, there are a lot of crazy doomsday scenarios floating around out there—just like there are a lot of pie-in-the-sky, utopian visions of an AI-dominated future. But while nobody knows exactly what the future will bring, I think most of these projections are totally wrong. Instead, I think that AI will neither save us nor doom us—but it will completely change us.

With that in mind, I thought I would share this discussion of AI, which is one of the most grounded and realistic discussions of the subject that I’ve heard. It’s also one of the most insightful. We’ve created a technology that we barely understand, but it’s still just a new technology, not a savior or an antichrist. In a hundred years, when our great-grandchildren understand this technology and take it for granted, they will probably laugh at how we thought of it (assuming, of course, that Yudkowsky and Soares are wrong, and we aren’t all exterminated by a superintelligent AI).

Will super-intelligent AI take over the world?

I’ve been reading a lot of non-fiction books about AI recently. Basically, whenever a nonfiction audiobook that has anything to do with AI comes into my audiobook library app, I jump on the waiting list and listen to it right away. I’ve also been following AI news podcasts and watching lots of YouTube channels that discuss the recent developments… and boy, is there a lot of doom porn out there.

People who are closely watching this stuff believe that AGI (Artificial General Intelligence) is imminent, i.e. within the next 6 to 72 months, and that when AGI gets mainstreamed, it will usher in either a golden age of post-scarcity or the extinction of all mankind (or both, weirdly). The crux of their thesis is that once we achieve an AGI that can rewrite its own code, it will quickly turn into a superintelligence, which will then either work to serve humanity or work to eliminate humanity as a threat, whether by outright exterminating us or by putting us into some kind of zoo.

This is all very science fictional stuff—but now more than ever, we are living in a science fictional world. So what is actually going to happen? Do I believe we are going to enter the singularity, and give birth to a new species of superintelligent AI that will ultimately replace us? Or, in the lingo of Silicon Valley, what is my P(doom)?

TL;DR: I have two P(doom) values, one of which is 0%, the other of which is 90%. My P(doom) for basically all of the scenarios that involve a runaway superintelligence is 0%, but my P(doom) for massive catastrophic social upheaval due to the disruptive nature of AI technology is 90%.

For the last century or so (basically ever since Turing’s work during WWII), the field of artificial intelligence has followed a cyclical pattern. First, researchers make some sort of breakthrough, which leads to rapid technological advancements and a brief AI boom. During this boom, futurists and technologists rave about how this technology will keep scaling up forever until it ushers in a sci-fi utopia/dystopia and utterly changes what it means to be human. Then, the technological development stalls as researchers run up against a hard barrier that makes further scaling impossible, at which point most investors sour on the technology and we fall into an “AI winter” for a decade or two.

The problem with the futurists and technologists who promote AI technology is that the vast majority of them are transhumanists who believe that intelligence is purely an emergent phenomenon that is 100% materialistic in nature. In other words, they believe that the human mind is little more than an organic machine created through the process of evolution, and that 100% of our intelligence, emotions, spirituality, and experience can be explained and understood through purely material processes. Therefore, if they can build a machine that replicates the same biological processes as the human brain, and subject it to similar conditions that evolution subjected us to, intelligence will naturally emerge from such processes and conditions.

But what if they’re wrong? What if there are more things in heaven and in earth than are dreamed up in our modern philosophies? I’m not saying that evolution didn’t play a role in the creation/emergence of intelligence—only that it’s insufficient. And why wouldn’t it be? Science, by definition, can only explain what it can measure. And what about the questions that we can’t ask? The things about this universe that are as foreign to our own understanding as quantum physics is to a German Shepherd?

For these reasons, I do not think that these generative AI models are going to keep scaling upward until we achieve a general superintelligence. At some point in the next 0-18 months, I think that the researchers and developers are going to start hitting hard limits that we don’t understand, because of the limitations of our understanding of the human brain and how our own intelligence emerged or was created.

I am extremely skeptical of all of the doom porn floating around out there claiming that we are months away from achieving AGI, and that a superintelligence will shortly thereafter replace us as the dominant species on this planet. For one thing, the goalposts for AGI are constantly moving—by the standards of two or three decades ago, we have already achieved it—and for another, the transhumanists have turned this concept of AGI into a sort of Messianic savior / world-ending destroyer. And I just don’t buy into that religion.

So if I’m right, all of this doom porn about a world-ending superintelligence is utterly misguided. Which, on a certain level, is somewhat comforting. But on the other hand, that also means that we shouldn’t expect AI to save us—and that anyone who tries to tell us otherwise is ultimately trying to sell us something.

The big AI developers like OpenAI, Anthropic, etc. have every incentive to hype up the doom porn. It makes them look powerful, which in turn attracts investment capital. At the same time, they also have every incentive to promote this idea that a superintelligent AI can be our savior, since if AGI is inevitable, shouldn’t we put everything we have into making sure that our AI overlords are benevolent and have humanity’s interests at heart? But again, if we take that view, we also end up pumping lots of investment capital into these AI companies, turning them into massive cultural behemoths without really questioning their ultimate aims.

What if instead of building a superintelligent AI savior, we ultimately end up with a new form of techno-feudalism, powered by AI? What if a true superintelligence never emerges, and all of the energy and resources we’re pumping into AI is really just going to create a new class of elites, with the rest of us dependent on some sort of universal basic income and totally at the mercy of the owners, controllers, and operators of AI?

To me, this seems like a much more likely scenario—and from what I can tell, we are already in the opening phases of it. Generative AI has already become so powerful that it will likely replace a large number of jobs or render them obsolete—which may or may not be a problem in the medium- to long-term, but will certainly be a problem in the short-term. As increasing numbers of people find themselves unemployed, it will put a tremendous strain on our welfare safety nets, and drive calls for increased government spending on social programs. But our governments are already so deep in debt that these pressures can only lead to some combination of (hyper)inflation, sovereign debt crisis, and austerity-driven political instability.

Some people think that the solution to all of this is a universal basic income (UBI). But every time a UBI has been introduced, it has led to negative outcomes, including worse wealth outcomes. Unfortunately, if AI is truly going to be a huge driver of unemployment (which doesn’t require AGI or a superintelligence—our current models are already powerful enough to drive massive disruption in the labor market), then I don’t see how we can avoid a massive push toward UBI. Certainly not while our current investments in AI remain so centralized—but again, all of the AGI doom porn is driving us to centralize things even more. So while all of the benefits of this new technology accrue to Sam Altman, Elon Musk, Dario Amodei, etc., and they keep holding out the promise of a messianic superintelligent AI that never truly emerges, the rest of us end up in a world where we have very little agency or control over our lives, with or without a UBI.

It doesn’t have to be this way. But if we all keep buying into the doom porn without looking critically at these AI companies and their transhumanist messianic promises, I think that this is the future we’re most likely going to get.

NaShoStoMo

So Dan Wells is taking a page from NaNoWriMo and starting his own writing thing for April, NaShoStoMo, aka National Short Story Month.  The rules are as follows:

  • You must write 30 all-new short stories between April 1st and April 30th.
  • Each story must have a distinct beginning, middle, and end.
  • Each story must be at least 200 words.
  • You may write more than one story per day to make up for lost days.
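Just for fun, the first three rules are mechanical enough to check with a script. Here’s a minimal sketch, assuming you log each story as a date and a word count (the dates, counts, and year are hypothetical; the “distinct beginning, middle, and end” rule is one only a human reader can judge):

```python
from datetime import date

def nashostomo_complete(stories):
    """Check a log of (date_written, word_count) entries against the
    mechanical NaShoStoMo rules: 30 stories of 200+ words, all in April.
    Writing multiple stories in one day is fine, per the last rule."""
    qualifying = [s for s in stories
                  if s[0].month == 4 and s[1] >= 200]
    return len(qualifying) >= 30

# Hypothetical log: one 250-word story every day of April.
logged = [(date(2025, 4, day), 250) for day in range(1, 31)]
print(nashostomo_complete(logged))  # True
```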

It seems like an awesome idea, and I’m going to try it.  I’m not much of a short story writer, but I wish I were, because there are some really awesome short stories out there that I admire–like Endosymbiont, quite possibly the best singularity story that I have ever read (and available for free from Escape Pod).  Novels and short stories are different arts, but they’re both forms of storytelling, so I figure that no matter what happens I’ll learn something from it.

I’m not sure how many of these stories will take place in universes that I’ve already built, but probably a good number of them will.  I have a few characters from Bringing Stella Home that I would like to do little sketches on, possibly for a later novel, and some things I’d like to do in the worlds I’ve already created.

At the same time, though, I’ve got some crazy ideas for standalone stuff that I’d like to play with, like a crazy awesome dream I had last night that made me  lie awake just thinking about it for almost an hour.  It was insane...but I guess you had to be there.

In unrelated news, my writer friend Charlie got me a thing of sparkling grape juice, for me to open when I celebrate my first major publishing deal (though I suspect another motivation was to make me look like a wino while walking around on BYU campus).

Honestly, I was quite surprised–thanks!  I’ll use it to christen the first yacht that I buy with my multimillion-dollar first deal, hehe.

Oh, and in other totally unrelated news, my other writer friend Laura started a blog.  So go check it out!

The technological singularity: a thing of the past?

One of the latest trends in science fiction is the concept of the technological singularity — the point in history at which technological advances occur so rapidly that we can no longer learn the new stuff fast enough to keep up with it.

I hear a lot of people talk about this at cons, and I’ve read/listened to quite a few stories about this concept.  Basically, these stories posit a world where science has become a new magic, and our world has been transformed beyond all intelligible recognition.

However, a recent post on the excellent Rocketpunk Manifesto blog made me wonder if we’ve already passed the point of singularity in our own society.  The post basically asserted that the period 1880 to 1930 saw so many sweeping technological advances that the world in 1930 would have been unrecognizable to a person from 1880, whereas our current society would still be intelligible to a person from 1930.

This made me wonder: how far into the singularity have we already come?  How much of our technological infrastructure has become so advanced that the common man lacks the capacity to comprehend it?

Think about it.  Fish around in your pockets and pull out your phone.  Do you understand how it works well enough to take it apart and put it together again?  To rebuild the device from parts?  Do you own the tools and machinery to construct the parts from which it is made?

How about the building in which you currently find yourself?  Do you possess the knowledge to build a comparable structure that performs the same functions?  That keeps you sheltered and provides the same light, heat, electricity, and internet connection that you now enjoy?

There was a time, not too long ago, when people would move out to the wilderness and homestead land by building their own homes from available natural resources.  If you needed to build your own house, as so many people used to do, could you do it?

How about your means of transportation?  If necessary, could you take apart your car and rebuild it again from the ground up?  Could you perform basic maintenance on it if you needed to?  How many of us can change our own oil–and how many of us are dependent on others for such a simple service?

Or what about the things we take most for granted–our understanding of the way the universe works?  Do you really understand the principles of physics?  Do you comprehend how electricity or magnetism really works, or are you still thinking in oversimplified terms, like electrons flowing through a circuit like water?  Even the most intelligent physicists can’t reconcile electromagnetism with Newtonian physics, so what makes you think you know so much?

How much of what we think we know is really just an illusion, meant to keep us pacified and docile?  To give us a false sense of security–that someone is in control, so we can rest easy?  Does anyone REALLY understand 100% how the economy works?  Do any of us know who or what is really in charge anymore?  Have we unwittingly handed over the reins of control to some digital algorithm so basic to our newly networked way of life as to be practically invisible?

Looking at how few of us are truly self-sufficient, and how much power we’ve ceded to forces beyond our control, our modern society seems so delicate and fragile.  Can anyone REALLY say that our society is not in danger of falling apart?  That our way of life is not an unnatural and unsustainable aberration?

Anyhow, those were some of my initial thoughts.  The more I compare the science fiction of the past with the reality of the present, the more predictions I see coming true in the most unexpected of ways.  The singularity may have less to do with uplinked consciousnesses and more to do with Google’s SEO algorithms than we are comfortable admitting.  And realistically, the light bulb may prove to be more revolutionary than anything Apple has ever produced or ever will.