“…History will call us wives.”

“Think on it, Chani: that princess will have the name, yet she’ll live as less than a concubine—never to know a moment of tenderness from the man to whom she’s bound. While we, Chani, we who carry the name of concubine—history will call us wives.”

Frank Herbert, Dune (last line)

The best take on the Epstein files that I’ve heard

Worth listening through to the end. I think Malcolm misses some of the deeper nuances of Epstein’s (alleged) operation, but there are plenty of people schooling him on it in the comments to this video.

Epstein did not kill himself… and if it ever became public who did, it would probably start WWIII (or massively escalate it, if indeed it has already started). We certainly live in interesting times.

Remember how I said that AGI is a pipe dream?

A couple of weeks ago, I posted my thoughts on AGI (artificial general intelligence) and all of the doom porn floating around that we are years, or possibly even months, away from the emergence of an artificial superintelligence that will either usher in an Edenic post-scarcity utopia, or exterminate all of mankind. Believe it or not, this is a big fear in Silicon Valley, among the people who are building these systems (though I suspect that the top-level executives don’t really believe it and are instead exploiting that fear to serve their own ends).

My view, in a nutshell, is that we will not see the emergence of AGI or superintelligence under the current research paradigm, because the current paradigm is based on pure materialism, assuming that intelligence itself is merely an emergent phenomenon, and that if the conditions for that emergence can be replicated, a human-level (or superhuman-level) intelligence will be created. My prediction is that in the next 1-3 years, AI development will run up against a wall, and all of the scaling in the world will fail to produce the sort of drastic gains that the doomsayers are predicting.

Well, it seems that we may be much closer to that wall than I supposed. I only recently discovered this YouTuber, but I’ve been following a lot of his content, and he seems to be very intelligent and very keyed into what’s currently happening in AI development. And in this video, he may have just pointed out the wall that we’re about to run up against—if indeed we haven’t already.

In any case, it’s worth watching, especially if you are looking to incorporate AI into your work life. Lots of practical advice, too.

“Great, green, saurian things…”

The Hegemony Consul sat on the balcony of his ebony spaceship and played Rachmaninoff’s Prelude in C-sharp Minor on an ancient but well-maintained Steinway while great, green, saurian things surged and bellowed in the swamps below.

Hyperion by Dan Simmons (first line)

Fantasy from A to Z: K is for Kings

Why are kings and kingdoms so common in fantasy?

Part of it has to do with the genre’s nostalgic yearning for a distant past. One way of understanding the modern era is to see it as an unending series of political revolutions that have spread like a slow-moving contagion from one part of the world to another. 

It started with the English Civil War, then died down for a while until it manifested in the American Revolutionary War, which resulted in the creation of the United States. After that, it spread to France, leading to the French Revolution and a very messy tug-of-war between the Republicans and the Monarchists, ultimately leading to the permanent end of the French monarchy. 

Then we had the aborted revolutions of 1848, which ultimately gave us Karl Marx and socialism; the Bolivarian revolutions in Latin America; the American Civil War, which culturally was something of an echo of the old English Civil War (with the Cavaliers in the South and the Puritan Roundheads in the North); and ultimately the Bolshevik Revolution, which gave us global communism, and so on.

I won’t belabor the point (though if you want to hear a good podcast that covers all this stuff, check out Revolutions by Mike Duncan). The point is that the modern era has basically been one long series of very messy wars to depose the old medieval kings and emperors. Today, the only monarchies that survive are either constitutional monarchies that no longer exercise direct political power (for example, King Charles of the United Kingdom), or else they are strange aberrations that only exist because of unique regional history and economic circumstances (for example, the House of Saud in Saudi Arabia, whose rule depends almost entirely on the country’s oil reserves).

Fantasy is all about hearkening back to a romantic view of the premodern past, even if that past never existed. So it shouldn’t come as a surprise that most fantasy—especially classic fantasy—tends to feature kings and kingdoms. Never mind that historically, many medieval kings were almost totally beholden to their dukes, especially in the time before gunpowder, when the dukes could just hole up in their castles and openly defy their kings. That’s why Europe has so many medieval castles.

Of course, some fantasy, like George R.R. Martin’s A Song of Ice and Fire, does a really good job of capturing the complex dynamics of feudal politics. A lot of the old sword & sorcery also plays around with those kinds of medieval political tensions, balancing the nostalgic aspect of fantasy with the savagery of backstabbing courtiers and brutal hand-to-hand combat. Robert E. Howard’s classic Conan the Barbarian stories are a great example of this, with Conan ultimately rising to become King of Aquilonia.

Both grimdark and sword & sorcery embrace the medieval savagery—indeed, it’s a large part of the nostalgic yearning. Other subgenres play down the savagery, either by making the king a distant power, or by making the world out to be a lightly-populated wilderness. The Lord of the Rings is a good example of both, though it still defaults to feudal monarchy as the majority political system.

Is there a subconscious yearning for old-fashioned monarchy that fantasy quietly fulfills? Perhaps, but I don’t think so. If kings and kingdoms are the default system of government in most fantasy novels, I think that’s because it was the default for much of the medieval era. In books like Game of Thrones where the political intrigue is a key aspect of the story, you get into the more complicated aspects of feudal politics, but that’s not necessarily a requirement.

Personally, I enjoy fantasy with a little bit of medieval-style political intrigue, though most grimdark tends to overdo it. I did really enjoy Larry Correia’s Saga of the Forgotten Warrior, though (no spoilers, please—I haven’t yet read the last book!). Robert E. Howard hits the sweet spot, I think, with a world so wild and savage that no king has managed to subdue it, and even a barbarian can rise to become a king.

June Reading Recap

Books that I finished

  • Chokepoints by Edward Fishman
  • Who Is Government? by Michael Lewis
  • The Lonely Men by Louis L’Amour
  • Beekeeping by Nancy Ross
  • Flash Boys by Michael Lewis
  • Finish by Jon Acuff
  • Where the Long Grass Blows by Louis L’Amour
  • Writing the Breakout Novel by Donald Maass
  • Empire of AI by Karen Hao
  • The Untold Story of Books by Michael Castleman
  • Real Artists Don’t Starve by Jeff Goins

Books that I DNFed

  • The Pornography Wars by Kelsy Burke
  • If You Could Live Anywhere by Melody Warnick
  • Writing on Empty by Natalie Goldberg
  • Crashed by Adam Tooze
  • The 4-Hour Workweek by Tim Ferriss
  • The Long Game by Dorie Clark
  • The Motivation Myth by Jeff Haden
  • When It All Burns by Jordan Thomas
  • Inside the Real Area 51 by Thomas J. Carey and Donald R. Schmitt
  • The AI Con by Emily M. Bender and Alex Hanna
  • The Shadow Rising by Robert Jordan

Will super-intelligent AI take over the world?

I’ve been reading a lot of non-fiction books about AI recently. Basically, whenever a nonfiction audiobook that has anything to do with AI comes into my audiobook library app, I jump on the waiting list and listen to it right away. I’ve also been following AI news podcasts and watching lots of YouTube channels that discuss the recent developments… and boy, is there a lot of doom porn out there.

People who are closely watching this stuff believe that AGI (Artificial General Intelligence) is imminent, i.e. within the next 6 to 72 months, and that when AGI gets mainstreamed, it will either usher in a golden age of post-scarcity, or the ultimate extinction of all mankind (or both, weirdly). The crux of their thesis is that once we achieve an AGI that can rewrite its own code, it will quickly turn into a superintelligence, which will then either work to serve humanity or to eliminate humanity as a threat, either by outright exterminating us or by putting us into some kind of zoo.

This is all very science fictional stuff—but now more than ever, we are living in a science fictional world. So what is actually going to happen? Do I believe we are going to enter the singularity and give birth to a new species of superintelligent AI that will ultimately replace us? Or, in the lingo of Silicon Valley, what is my P(doom)?

TL;DR: I have two P(doom) values, one of which is 0%, the other of which is 90%. My P(doom) for basically all of the scenarios that involve a runaway superintelligence is 0%, but my P(doom) for massive catastrophic social upheaval due to the disruptive nature of AI technology is 90%.

For the better part of a century (basically ever since Turing’s work during WWII), the field of artificial intelligence has followed a cyclical pattern. First, researchers make some sort of breakthrough, which leads to rapid technological advancement and a brief AI boom. During this boom, futurists and technologists rave about how the technology will keep scaling up forever until it ushers in a sci-fi utopia/dystopia and utterly changes what it means to be human. Then the technological development stalls as researchers run up against a hard barrier that makes further scaling impossible, at which point most investors sour on the technology and we fall into an “AI winter” for a decade or two.

The problem with the futurists and technologists who promote AI technology is that the vast majority of them are transhumanists who believe that intelligence is purely an emergent phenomenon that is 100% materialistic in nature. In other words, they believe that the human mind is little more than an organic machine created through the process of evolution, and that 100% of our intelligence, emotions, spirituality, and experience can be explained and understood through purely material processes. Therefore, if they can build a machine that replicates the same biological processes as the human brain, and subject it to conditions similar to those that evolution subjected us to, intelligence will naturally emerge.

But what if they’re wrong? What if there are more things in heaven and in earth than are dreamed up in our modern philosophies? I’m not saying that evolution didn’t play a role in the creation/emergence of intelligence—only that it’s insufficient. And why wouldn’t it be? Science, by definition, can only explain what it can measure. And what about the questions that we can’t ask? The things about this universe that are as foreign to our own understanding as quantum physics is to a German Shepherd?

For these reasons, I do not think that these generative AI models are going to keep scaling upward until we achieve a general superintelligence. At some point in the next 0-18 months, I think that the researchers and developers are going to start hitting hard limits that we don’t understand, because of the limitations of our understanding of the human brain and how our own intelligence emerged or was created.

I am extremely skeptical of all of the doom porn floating around out there, that we are months away from achieving AGI, and that a superintelligence will shortly thereafter replace us as the dominant species on this planet. For one thing, the goalposts for AGI are constantly moving—by the standards of two or three decades ago, we have already achieved it—and for another, the transhumanists have turned this concept of AGI into a sort of Messianic savior / world-ending destroyer. And I just don’t buy into that religion.

So if I’m right, all of this doom porn about a world-ending superintelligence is utterly misguided. Which, on a certain level, is somewhat comforting. But on the other hand, that also means that we shouldn’t expect AI to save us—and that anyone who tries to tell us otherwise is ultimately trying to sell us something.

The big AI developers like OpenAI, Anthropic, etc. have every incentive to hype up the doom porn. It makes them look powerful, which in turn attracts investment capital. At the same time, they also have every incentive to promote this idea that a superintelligent AI can be our savior, since if AGI is inevitable, shouldn’t we put everything we have into making sure that our AI overlords are benevolent and have humanity’s interest at heart? But again, if we take that view, we also end up pumping lots of investment capital into these AI companies, turning them into massive cultural behemoths without really questioning their ultimate aims.

What if instead of building a superintelligent AI savior, we ultimately end up with a new form of techno-feudalism, powered by AI? What if a true superintelligence never emerges, and all of the energy and resources we’re pumping into AI is really just going to create a new class of elites, with the rest of us dependent on some sort of universal basic income and totally at the mercy of the owners, controllers, and operators of AI?

To me, this seems like a much more likely scenario—and from what I can tell, we are already in the opening phases of it. Generative AI has already become so powerful that it will likely replace a large number of jobs or render them obsolete—which may or may not be a problem in the medium- to long-term, but will certainly be a problem in the short-term. As increasing numbers of people find themselves unemployed, it will put a tremendous strain on our welfare safety nets, and drive calls for increased government spending on social programs. But our governments are already so deep in debt that these pressures can only lead to some combination of (hyper)inflation, sovereign debt crisis, and austerity-driven political instability.

Some people think that the solution to all of this is a universal basic income (UBI). But every time a UBI has been piloted, it has led to negative outcomes, including for recipients’ long-term wealth. Unfortunately, if AI is truly going to be a huge driver of unemployment (which doesn’t require AGI or a superintelligence—our current models are already powerful enough to drive massive disruption in the labor market), then I don’t see how we can avoid a massive push toward UBI. Certainly not while our investments in AI remain so centralized—but again, all of the AGI doom porn is driving us to centralize things even more. So while all of the benefits of this new technology accrue to Sam Altman, Elon Musk, Dario Amodei, etc., and they keep holding out the promise of a messianic superintelligent AI that never truly emerges, the rest of us end up in a world where we have very little agency or control over our lives, with or without a UBI.

It doesn’t have to be this way. But if we all keep buying into the doom porn without looking critically at these AI companies and their transhumanist messianic promises, I think that this is the future we’re most likely going to get.

Restored footage from just after WWI

I don’t know how the YouTube algorithm decides what to show me, but every once in a while something really fascinating shows up in my recommendations. This was one of those times. Really excellent job restoring this old footage.