Will super-intelligent AI take over the world?

I’ve been reading a lot of non-fiction books about AI recently. Basically, whenever a nonfiction audiobook that has anything to do with AI comes into my audiobook library app, I jump on the waiting list and listen to it right away. I’ve also been following AI news podcasts and watching lots of YouTube channels that discuss the recent developments… and boy, is there a lot of doom porn out there.

People who are closely watching this stuff believe that AGI (Artificial General Intelligence) is imminent, i.e. within the next 6 to 72 months, and that when AGI gets mainstreamed, it will either usher in a golden age of post-scarcity or the ultimate extinction of all mankind (or both, weirdly). The crux of their thesis is that once we achieve an AGI that can rewrite its own code, it will quickly turn into a superintelligence, and then it will either work to serve humanity or else work to eliminate humanity as a threat, either by outright exterminating us or by putting us into some kind of zoo.

This is all very science fictional stuff—but now more than ever, we are living in a science fictional world. So what is actually going to happen? Do I believe we are going to enter the singularity, and give birth to a new species of superintelligent AI that will ultimately replace us? Or, in the lingo of Silicon Valley, what is my P(doom)?

TL;DR: I have two P(doom) values, one of which is 0%, the other of which is 90%. My P(doom) for basically all of the scenarios that involve a runaway superintelligence is 0%, but my P(doom) for massive catastrophic social upheaval due to the disruptive nature of AI technology is 90%.

For the better part of a century (basically ever since Turing’s work during WWII), the field of artificial intelligence has followed a cyclical pattern. First, researchers make some sort of breakthrough, which leads to rapid technological advancements and a brief AI boom. During this boom, futurists and technologists rave about how this technology will keep scaling up forever until it ushers in a sci-fi utopia/dystopia and utterly changes what it means to be human. Then, the technological development stalls as researchers run up against a hard barrier that makes further scaling impossible, at which point most investors sour on the technology and we fall into an “AI winter” for a decade or two.

The problem with the futurists and technologists who promote AI technology is that the vast majority of them are transhumanists who believe that intelligence is purely an emergent phenomenon that is 100% materialistic in nature. In other words, they believe that the human mind is little more than an organic machine created through the process of evolution, and that 100% of our intelligence, emotions, spirituality, and experience can be explained and understood through purely material processes. Therefore, if they can build a machine that replicates the same biological processes as the human brain, and subject it to similar conditions that evolution subjected us to, intelligence will naturally emerge from such processes and conditions.

But what if they’re wrong? What if there are more things in heaven and in earth than are dreamed up in our modern philosophies? I’m not saying that evolution didn’t play a role in the creation/emergence of intelligence—only that it’s insufficient. And why wouldn’t it be? Science, by definition, can only explain what it can measure. And what about the questions that we can’t ask? The things about this universe that are as foreign to our own understanding as quantum physics is to a German Shepherd?

For these reasons, I do not think that these generative AI models are going to keep scaling upward until we achieve a general superintelligence. At some point in the next 0-18 months, I think that the researchers and developers are going to start hitting hard limits that we don’t understand, because of the limitations of our understanding of the human brain and how our own intelligence emerged or was created.

I am extremely skeptical of all of the doom porn floating around out there, claiming that we are months away from achieving AGI and that a superintelligence will shortly thereafter replace us as the dominant species on this planet. For one thing, the goalposts for AGI are constantly moving—by the standards of two or three decades ago, we have already achieved it—and for another, the transhumanists have turned this concept of AGI into a sort of Messianic savior / world-ending destroyer. And I just don’t buy into that religion.

So if I’m right, all of this doom porn about a world-ending superintelligence is utterly misguided. Which, on a certain level, is somewhat comforting. But on the other hand, that also means that we shouldn’t expect AI to save us—and that anyone who tries to tell us otherwise is ultimately trying to sell us something.

The big AI developers like OpenAI, Anthropic, etc. have every incentive to hype up the doom porn. It makes them look powerful, which in turn attracts investment capital. At the same time, they also have every incentive to promote the idea that a superintelligent AI can be our savior, since if AGI is inevitable, shouldn’t we put everything we have into making sure that our AI overlords are benevolent and have humanity’s interests at heart? But if we take that view, we also end up pumping lots of investment capital into these AI companies, turning them into massive cultural behemoths without really questioning their ultimate aims.

What if instead of building a superintelligent AI savior, we ultimately end up with a new form of techno-feudalism, powered by AI? What if a true superintelligence never emerges, and all of the energy and resources we’re pumping into AI is really just going to create a new class of elites, with the rest of us dependent on some sort of universal basic income and totally at the mercy of the owners, controllers, and operators of AI?

To me, this seems like a much more likely scenario—and from what I can tell, we are already in the opening phases of it. Generative AI has already become so powerful that it will likely replace a large number of jobs or render them obsolete—which may or may not be a problem in the medium- to long-term, but will certainly be a problem in the short-term. As increasing numbers of people find themselves unemployed, it will put a tremendous strain on our welfare safety nets and drive calls for increased government spending on social programs. But our governments are already so deep in debt that these pressures can only lead to some combination of (hyper)inflation, sovereign debt crisis, and austerity-driven political instability.

Some people think that the solution to all of this is a universal basic income (UBI). But every time a UBI has been tried, it has led to negative outcomes, including worse wealth outcomes. Unfortunately, if AI is truly going to be a huge driver of unemployment (which doesn’t require AGI or a superintelligence—our current models are already powerful enough to drive massive disruption in the labor market), then I don’t see how we can avoid a massive push toward UBI. Certainly not while our current investments in AI remain so centralized—and all of the AGI doom-porn is driving us to centralize things even more. So while all of the benefits of this new technology accrue to Sam Altman, Elon Musk, Dario Amodei, etc., and they keep holding out the promise of a messianic superintelligent AI that never truly emerges, the rest of us end up in a world where we have very little agency or control over our lives, with or without a UBI.

It doesn’t have to be this way. But if we all keep buying into the doom porn without looking critically at these AI companies and their transhumanist messianic promises, I think that this is the future we’re most likely going to get.

Five things I did at work last week

So apparently DOGE’s “what are five things you did at work last week” is now an ongoing weekly task, which I am heartily in favor of, at least until the Trump Administration’s reforms to the executive branch are complete. The best counter-argument against this policy that I’ve heard so far comes from Cal Newport, who points out that this sort of request is typical of an insecure and overbearing manager, but I don’t find that argument very convincing. Given the sheer amount of corruption and outright fraud that Elon Musk’s DOGE has already uncovered, I think there are very good reasons for the Trump Administration to be overbearing. Besides, it really shouldn’t be that hard to come up with five bullet points, as I will demonstrate now.

Last week, I:

  • finished releasing all of my books in audio on Audible, using KDP’s AI narration tools,
  • made a rough outline for a seven book series, of which my current WIP (The Soulbond and the Sling) will be the first,
  • re-released “Lord of the Slaves” as a free short story,
  • wrote up character sheets for all of the viewpoint characters in The Soulbond and the Sling, and
  • outlined twelve separate throughlines in the story bible for The Soulbond and the Sling.

Oh, the trauma. How can I possibly be expected to do this every week? And people say that writing isn’t a “real” job… in any case, I plan to make this a regular thing for as long as DOGE and Elon Musk continue to keep it going. Feel free to add your own five bullets in the comments!

Thinking about getting back on Twitter

So now that the world’s richest African-American—who has done more to save the world from the evil sun monster than everyone at COP 25 put together—has bought Twitter and promised to bring back free speech to the platform, I am seriously considering whether I ought to make a new Twitter account and become active on social media again.

I deleted my Twitter account back in 2016, before the elections, and blogged about it (in less than 140 characters, of course) by saying “life is better without it.” And that’s true. Life is so much better without a Twitter addiction, and that’s the one thing that makes me reluctant to get back on the platform.

There is no doubt that Twitter, as it stood before the Elon Musk takeover, was a toxic dumpster fire of outrage and stupidity. But it is also the public square. Life without social media is a lot healthier in a lot of ways, but it does turn you into something of a hermit as far as the internet goes.

The thing is, I’m not very optimistic about Musk’s makeover of Twitter doing much to change the toxicity of the platform, because I think that toxicity has less to do with politics (though that certainly is a factor) and more to do with the dangers of social media addiction itself. In other words, I think our toxic politics is a symptom of social media toxicity, not a cause. The first half of The Social Dilemma really got this right, though the second half was mostly just bad propaganda about the threat of “misinformation” to “our democracy.”

So before I get back on Twitter again, I need to come up with some personal rules in order to keep it from becoming addictive, unhealthy, or toxic to my author brand. Back in 2010, Douglas Rushkoff came up with a sort of ten commandments for digital media, and that seems like a good place to start. His ten commandments are:

  1. Do not be always on
  2. Live in person
  3. You may always choose none of the above
  4. You are never completely right
  5. One size does not fit all
  6. Be yourself
  7. Do not sell your friends
  8. Tell the truth
  9. Share, don’t steal
  10. Program or be programmed

I probably ought to reread the book where he explains all of these commandments. It’s a quick read, with some good theory and a lot of practical wisdom. It is over a decade old, though, so I’m sure there’s a lot of stuff we’ve learned since then. Some of these rules probably don’t go far enough, while others may go too far.

In any case, I’m not going to get back onto Twitter until I have a plan, because the last thing I want is to get addicted to all of the toxic outrage and watch as my career (and possibly life) implodes because of it.

What personal rules do you follow when using social media?

Weekly Roundup for 2018-02-17

I thought it would be interesting to do a weekly blog post of all the remarkable things I saw or read on the internet in the past seven days, with my thoughts and/or reactions. If nothing else, it should be entertaining. Let’s try it out for a few weeks.

1) Proof that the internet has all the maturity of a horny teenager

Or at least Twitter:

2) Extra Sci Fi concludes the Martian Chronicles

Extra Sci Fi is turning out to be a really great YouTube series. They started with Frankenstein, then spent some time on William Gibson, and recently went through the Martian Chronicles by Ray Bradbury. They really do a good job of getting to the heart of classic science fiction.

It reminds me of a Trope Tuesday post I did a while ago about settling the (final) frontier. The whole idea of restarting humanity by leaving Earth behind is one of those things that draws me to science fiction the most. The stories in Bradbury’s Martian Chronicles are more artistic and thematic, but still, that idea is very much a part of them.

3) Roadster, Starman, Planet Earth

If there was any remaining doubt that Elon Musk is secretly trying to help an extraterrestrial get home, APOD posted this awesome photo last Saturday:

I have got to find a way to fit Elon’s roadster into Gunslinger to the Galaxy.

4) Barnes & Noble Layoffs

In publishing news, Barnes & Noble is laying off a bunch of full-time employees in an effort to save on benefits and health insurance. Passive Guy covered it twice, once for the Publishers Weekly article, and again with comments by the employees on The Layoff. There’s also a lively discussion on Mad Genius Club on the subject.

Felix J. Torres, who often has great nuggets of wisdom, shared his insights in a comment on The Passive Voice:

– Those experienced “leads” is where a company’s corporate memory really resides. The people who’ve been through the wars and seen it all, who know where the scripts and handbooks end and common sense crisis management and experience takes over. They are lobotomizing operations.

– If the difference between “lead” pay and entry level is the only thing between them and bankruptcy… Well, they might as well file right now. $40M in “savings”? That’s less than $80,000 per store. For that they disrupt people’s lives and cripple their operations? Smacks of desperation. Chapter 11 must be closer than even the harshest critics expects.

Looks like choppy waters and a major shakeup for the book industry in the coming months and years.

That does it for this week, but I’m sure I’ll have more in the weeks to come!