Will super-intelligent AI take over the world?

I’ve been reading a lot of non-fiction books about AI recently. Basically, whenever a nonfiction audiobook that has anything to do with AI comes into my audiobook library app, I jump on the waiting list and listen to it right away. I’ve also been following AI news podcasts and watching lots of YouTube channels that discuss the recent developments… and boy, is there a lot of doom porn out there.

People who are closely watching this stuff believe that AGI (Artificial General Intelligence) is imminent, i.e., within the next 6 to 72 months, and that when AGI gets mainstreamed, it will either usher in a golden age of post-scarcity, or the ultimate extinction of all mankind (or both, weirdly). The crux of their thesis is that once we achieve an AGI that can rewrite its own code, it will quickly turn into a superintelligence, and then it will either work to serve humanity or else work to eliminate humanity as a threat, either by outright exterminating us, or putting us into some kind of zoo.

This is all very science fictional stuff—but now more than ever, we are living in a science fictional world. So what is actually going to happen? Do I believe we are going to enter the singularity, and give birth to a new species of superintelligent AI that will ultimately replace us? Or, in the lingo of Silicon Valley, what is my P(doom)?

TL;DR: I have two P(doom) values, one of which is 0%, the other of which is 90%. My P(doom) for basically all of the scenarios that involve a runaway superintelligence is 0%, but my P(doom) for massive catastrophic social upheaval due to the disruptive nature of AI technology is 90%.

For the last century or so (basically ever since Turing’s work during WWII), the field of artificial intelligence has followed a cyclical pattern. First, researchers make some sort of breakthrough, which leads to rapid technological advancements and a brief AI boom. During this boom, futurists and technologists rave about how this technology will keep scaling up forever until it ushers in a sci-fi utopia/dystopia and utterly changes what it means to be human. Then, the technological development stalls as researchers run up against a hard barrier that makes further scaling impossible, at which point most investors sour on the technology and we fall into an “AI winter” for a decade or two.

The problem with the futurists and technologists who promote AI technology is that the vast majority of them are transhumanists who believe that intelligence is purely an emergent phenomenon that is 100% materialistic in nature. In other words, they believe that the human mind is little more than an organic machine created through the process of evolution, and that 100% of our intelligence, emotions, spirituality, and experience can be explained and understood through purely material processes. Therefore, if they can build a machine that replicates the same biological processes as the human brain, and subject it to similar conditions that evolution subjected us to, intelligence will naturally emerge from such processes and conditions.

But what if they’re wrong? What if there are more things in heaven and in earth than are dreamed up in our modern philosophies? I’m not saying that evolution didn’t play a role in the creation/emergence of intelligence—only that it’s insufficient. And why wouldn’t it be? Science, by definition, can only explain what it can measure. And what about the questions that we can’t ask? The things about this universe that are as foreign to our own understanding as quantum physics is to a German Shepherd?

For these reasons, I do not think that these generative AI models are going to keep scaling upward until we achieve a general superintelligence. At some point in the next 0-18 months, I think that the researchers and developers are going to start hitting hard limits that we don’t understand, because of the limitations of our understanding of the human brain and how our own intelligence emerged or was created.

I am extremely skeptical of all of the doom porn floating around out there, that we are months away from achieving AGI, and that a superintelligence will shortly thereafter replace us as the dominant species on this planet. For one thing, the goalposts for AGI are constantly moving—by the standards of two or three decades ago, we have already achieved it—and for another, the transhumanists have turned this concept of AGI into a sort of Messianic savior / world-ending destroyer. And I just don’t buy into that religion.

So if I’m right, all of this doom porn about a world-ending superintelligence is utterly misguided. Which, on a certain level, is somewhat comforting. But on the other hand, that also means that we shouldn’t expect AI to save us—and that anyone who tries to tell us otherwise is ultimately trying to sell us something.

The big AI developers like OpenAI, Anthropic, etc. have every incentive to hype up the doom porn. It makes them look powerful, which in turn attracts investment capital. At the same time, they also have every incentive to promote this idea that a superintelligent AI can be our savior, since if AGI is inevitable, shouldn’t we put everything we have into making sure that our AI overlords are benevolent and have humanity’s interests at heart? But again, if we take that view, we also end up pumping lots of investment capital into these AI companies, turning them into massive cultural behemoths without really questioning their ultimate aims.

What if instead of building a superintelligent AI savior, we ultimately end up with a new form of techno-feudalism, powered by AI? What if a true superintelligence never emerges, and all of the energy and resources we’re pumping into AI is really just going to create a new class of elites, with the rest of us dependent on some sort of universal basic income and totally at the mercy of the owners, controllers, and operators of AI?

To me, this seems like a much more likely scenario—and from what I can tell, we are already in the opening phases of it. Generative AI has already become so powerful that it will likely replace a large number of jobs or render them obsolete—which may or may not be a problem in the medium- to long-term, but will certainly be a problem in the short-term. As increasing numbers of people find themselves unemployed, it will put a tremendous strain on our welfare safety nets, and drive calls for increased government spending on social problems. But our governments are already so deep in debt that these pressures can only lead to some combination of (hyper)inflation, sovereign debt crisis, and austerity-driven political instability.

Some people think that the solution to all of this is a universal basic income (UBI). But every time a UBI has been tried, it has led to negative outcomes, including worse wealth outcomes. Unfortunately, if AI is truly going to be a huge driver of unemployment (which doesn’t require AGI or a superintelligence—our current models are already powerful enough to drive massive disruption in the labor market), then I don’t see how we can avoid a massive push toward UBI. Certainly not with our current investments in AI so heavily centralized—but again, all of the AGI doom-porn is driving us to centralize things even more. So while all of the benefits of this new technology accrue to Sam Altman, Elon Musk, Dario Amodei, etc., and they keep holding out the promise of a messianic superintelligent AI that never truly emerges, the rest of us end up in a world where we have very little agency or control over our lives, with or without a UBI.

It doesn’t have to be this way. But if we all keep buying into the doom porn without looking critically at these AI companies and their transhumanist messianic promises, I think that this is the future we’re most likely going to get.

2019-10-03 Newsletter Author’s Note

This author’s note originally appeared in the October 3rd edition of my author newsletter. To subscribe to my newsletter, click here.

When Mrs. Vasicek and I got married, we decided that there would be no smart devices or screens in our house beyond the master bedroom. Our reasoning had mostly to do with personal health and avoiding bad habits, though there was also some concern about data collection and privacy.

One of the things I really like about this rule is that it keeps me from becoming too attached to my smart phone. Most of us are never more than an arm’s length away from our phones, and over time we come to feel almost like they’re a physical part of us. But every night, Mrs. Vasicek and I leave our phones to get ready for bed, and we don’t pick them up again until after we’re fully awake.

I have to admit that I had withdrawals at first, but now I feel much better. My phone is just another tool now; it no longer feels like an extension of myself.

Another thing I really like about this rule is how it sets apart a large section of the house that is free from digital distractions. The bedroom is now a really great place to read. Our one exception to the no screens rule is my Kindle Paperwhite, which uses e-ink anyway so it’s not as bad as an LED screen. It’s also seven-and-a-half years old, so web browsing isn’t really practical.

The other thing I really like is how it sets my mind at ease to know that there’s at least one part of the house where there aren’t any digital recording devices surveilling and collecting data on us. (Please don’t tell me that the Paperwhite is recording me too!)

In the last few years, it seems that Big Tech has been increasingly intrusive in our lives. Over the summer, it seemed like every week there’d be a new story about a Silicon Valley whistleblower, or an undercover investigation, or even a senior Google executive coming out on the record about censorship, bias, and control.

A couple of weeks ago, Glenn Beck did a fascinating interview with Robert Epstein, a researcher who found compelling evidence that Google has both the capability and the motivation to sway major national elections. (Epstein voted for Clinton in 2016, so the interview wasn’t partisan.) It reminded me of a presentation that Chamath Palihapitiya (senior executive at Facebook from 2007 to 2011) gave at Stanford in 2017, where he talked about social media addiction and explained why he doesn’t use social media nor allow his children to do so.

It’s becoming increasingly difficult to navigate our modern, complex world in a way that doesn’t surrender most of our agency to Big Tech and Silicon Valley. It’s also becoming increasingly unclear how much of that agency is an illusion, with companies like Facebook and Google influencing us in ways we aren’t consciously aware of.

As an indie author who depends on Amazon for a large part of my income, I’m very much aware of these issues. It’s part of the reason why I’m working so hard to build and maintain this newsletter, so that I don’t have to depend on Big Tech for my book marketing. It’s impossible to be a career author these days without a plan for navigating this world.

Where are we headed? Science fiction gives us a chilling answer. Right now, it appears that China and the East are going the way of 1984, while the United States and the West are going the way of Brave New World.

But those books were written almost a hundred years ago, and technologies have been developed that Orwell and Huxley couldn’t have even dreamed of. It’s time for a new generation of writers to pick up the torch that they handed off to us.

That’s a big reason why I’m writing “Sex, Life, and Love under the Algorithms.” As for where to go next, I honestly don’t know. So much happening in the world today screams out for new science fiction just to make sense of it all, so when I’m not writing fantasy I’ll probably delve more into that.

Whatever else happens, we’re all in this rabbit hole together.

Algorithms, social media addictions, and the endless churn of content

In the last 5-6 years, I’ve noticed a shift in most of the media content that I consume. Content has proliferated at an unprecedented rate, and the churn—or the rate at which new content pushes out old content—has become one of the driving factors for those of us trying to make careers out of creating it.

We see it on YouTube, where three or four adpocalypses have massacred various channels, and where copystrikes have become part of the game. Even YouTubers like Tim Pool or Pewdiepie quickly lose views and subscribers if they don’t put up content every day.

We see it in video games, where companies like Paradox are now making the bulk of their money on DLCs, some of which make the vanilla version almost unplayable. Back in the 90s, a game was a game was a game. You could get expansion packs for some of them, but that was just bonus content, not a core part of the gaming experience, or the business model.

It’s a huge issue in journalism, where the news cycle has accelerated so much that weeks feel like months, and months feel like years now. Remember the Kavanaugh hearings? That was less than a year ago. The Covington kids controversy happened this year. Everyone is in such a race to break the story that the quality of journalism has fallen considerably, but by the time the corrections come out, the news cycle has already moved on. Fake news indeed.

The churn has also become a major thing in the indie publishing scene. For the last few years, the established wisdom (if there is any) is that you need to publish a new book about every other month—preferably every other week—to keep your entire catalog from falling into obscurity. There’s a 30-day cliff and a 90-day cliff, at which points the Amazon algorithm stops favoring your books over new ones. And now, to complicate things, AMS ads are taking over from more organic book recommendation methods, like also-boughts. The treadmill is real, and it’s accelerating.

I’ve been thinking a lot about this, and I can think of a few things that may be driving it. I don’t have any statistics or firm arguments to back it up yet, just a couple of hunches, but it’s still worth bringing them up to spark a discussion.

First, social media has taken over our society, not only in public life, but in personal life as well. Now more than ever before, we use Facebook, Twitter, Instagram, Snapchat, and other social media to interact with each other. The problem is that these social media sites are incentivized to get us addicted to them, since we are the product they sell—our data, our time, and our eyeballs. Every like is another dopamine hit. Every outrageous headline is another injection of cortisol.

We have literally become a society of drug addicts. The drugs may be naturally produced by our bodies, but big tech has figured out how to manipulate them like never before. And as addicts, we are always looking for our next hit.

That’s not all, though. There’s a feedback loop between the end-users who consume content, and the algorithms that deliver content recommendations to the end-users. When something new gets hot on social media, the algorithms act as a force multiplier to drive it even further. But because of our addiction, and the fact that we’re constantly looking for the next hit, things can fall off just as quickly as they rise. Hence the churn.

It’s also a function of the massive rate at which content is proliferating across all forms of media. I’m not sure how many millions of English-language books are published each year now, but it’s much, much more than it was back when tradpub was the only real game in town. Same with videos, music, news blogs, etc. With so much new content coming out all the time, and so many people on social media ready to share it, the conditions for churn have never been stronger.

But there’s another, more sinister aspect to all of this, and it has to do with the biases of big tech and Silicon Valley. Yes, there is a feedback loop that governs the algorithm, but it goes both ways: the people who write the algorithm can, within constraints, use it to reprogram all of us, or even society itself.

I don’t think it’s a mistake that the churn is worse on sites that are run by big tech, or worse for content creators who depend on the platforms that big tech provides. The authors experiencing the worst burnout all seem to be exclusive with Amazon and Kindle Unlimited, and the news sites that are getting hit the worst now (Vice, Buzzfeed, etc.) all depended on clickbait tactics to ride the Facebook algorithm.

There are a few content creators who seem to have escaped the churn. As a general rule, they seem to be scaling back their social media usage and developing more traditional income streams, like subscriptions, sponsorships, and email lists. Steven Crowder, Tim Pool, and Pewdiepie are all examples. A few of them, like Alex Jones, Carl Benjamin, and Paul Joseph Watson, are learning how to swim by getting tossed in the deep end. Big tech has deplatformed them, but they’re learning—and showing to the rest of us—that it’s possible to make your own path, even when all the algorithms conspire against you.

I recently listened to a fascinating interview on the Jordan Peterson podcast, where he talked with Milo Yiannopoulos. Milo fell out of the public sphere when allegations of pedophilia emerged, getting him banned from CPAC in 2018. His career isn’t over, though, and his future prospects look quite bright, especially with the plan he’s been putting together. If he succeeds, big tech and the algorithms will never be able to touch him.

In my post a couple of days ago, I argued that one of the unique advantages of books over other forms of media is that they are timeless. As Kris Rusch puts it, books aren’t like produce—no matter how long they sit on the shelf, they don’t spoil. We are still reading books that were written centuries ago.

If that’s true, then there must be something about books that makes them resilient to churn. In fact, books may be the antidote to churn. That’s basically Jeff VanderMeer’s thesis in Booklife. It’s also worth rereading Program or Be Programmed by Douglas Rushkoff, where he offers some helpful rules to keep social media and the algorithms from completely taking over our lives.

So as indie writers, what’s the best way to deal with all of this? I’m not entirely sure. Back in 2011 when I first started indie publishing, slow-build and long-tail strategies seemed a lot more viable than they do now. But if there is something inherent in books that makes them the antidote to churn, then there has to be a way to take advantage of that.

I’ll let you know when I find it.