Will super-intelligent AI take over the world?

I’ve been reading a lot of non-fiction books about AI recently. Basically, whenever a nonfiction audiobook that has anything to do with AI comes into my audiobook library app, I jump on the waiting list and listen to it right away. I’ve also been following AI news podcasts and watching lots of YouTube channels that discuss the recent developments… and boy, is there a lot of doom porn out there.

People who are closely watching this stuff believe that AGI (Artificial General Intelligence) is imminent, i.e., within the next 6 to 72 months, and that when AGI gets mainstreamed, it will either usher in a golden age of post-scarcity, or the ultimate extinction of all mankind (or both, weirdly). The crux of their thesis is that once we achieve an AGI that can rewrite its own code, it will quickly turn into a superintelligence, and then it will either work to serve humanity or else work to eliminate humanity as a threat, either by outright exterminating us or by putting us into some kind of zoo.

This is all very science fictional stuff—but now more than ever, we are living in a science fictional world. So what is actually going to happen? Do I believe we are going to enter the singularity, and give birth to a new species of superintelligent AI that will ultimately replace us? Or, in the lingo of Silicon Valley, what is my P(doom)?

TL;DR: I have two P(doom) values, one of which is 0%, the other of which is 90%. My P(doom) for basically all of the scenarios that involve a runaway superintelligence is 0%, but my P(doom) for massive catastrophic social upheaval due to the disruptive nature of AI technology is 90%.

For the better part of a century (basically ever since Turing’s work during WWII), the field of artificial intelligence has followed a cyclical pattern. First, researchers make some sort of breakthrough, which leads to rapid technological advancements and a brief AI boom. During this boom, futurists and technologists rave about how this technology will keep scaling up forever until it ushers in a sci-fi utopia/dystopia and utterly changes what it means to be human. Then, the technological development stalls as researchers run up against a hard barrier that makes further scaling impossible, at which point most investors sour on the technology and we fall into an “AI winter” for a decade or two.

The problem with the futurists and technologists who promote AI technology is that the vast majority of them are transhumanists who believe that intelligence is purely an emergent phenomenon that is 100% materialistic in nature. In other words, they believe that the human mind is little more than an organic machine created through the process of evolution, and that 100% of our intelligence, emotions, spirituality, and experience can be explained and understood through purely material processes. Therefore, if they can build a machine that replicates the same biological processes as the human brain, and subject it to conditions similar to those that evolution subjected us to, intelligence will naturally emerge from those processes and conditions.

But what if they’re wrong? What if there are more things in heaven and in earth than are dreamed up in our modern philosophies? I’m not saying that evolution didn’t play a role in the creation/emergence of intelligence—only that it’s insufficient. And why wouldn’t it be? Science, by definition, can only explain what it can measure. And what about the questions that we can’t ask? The things about this universe that are as foreign to our own understanding as quantum physics is to a German Shepherd?

For these reasons, I do not think that these generative AI models are going to keep scaling upward until we achieve a general superintelligence. At some point in the next 0-18 months, I think that the researchers and developers are going to start hitting hard limits that we don’t understand, because of the limitations of our understanding of the human brain and how our own intelligence emerged or was created.

I am extremely skeptical of all of the doom porn floating around out there, that we are months away from achieving AGI, and that a superintelligence will shortly thereafter replace us as the dominant species on this planet. For one thing, the goalposts for AGI are constantly moving—by the standards of two or three decades ago, we have already achieved it—and for another, the transhumanists have turned this concept of AGI into a sort of Messianic savior / world-ending destroyer. And I just don’t buy into that religion.

So if I’m right, all of this doom porn about a world-ending superintelligence is utterly misguided. Which, on a certain level, is somewhat comforting. But on the other hand, that also means that we shouldn’t expect AI to save us—and that anyone who tries to tell us otherwise is ultimately trying to sell us something.

The big AI developers like OpenAI, Anthropic, etc. have every incentive to hype up the doom porn. It makes them look powerful, which in turn attracts investment capital. At the same time, they also have every incentive to promote this idea that a superintelligent AI can be our savior, since if AGI is inevitable, shouldn’t we put everything we have into making sure that our AI overlords are benevolent and have humanity’s interests at heart? But again, if we take that view, we also end up pumping lots of investment capital into these AI companies, turning them into massive cultural behemoths without really questioning their ultimate aims.

What if instead of building a superintelligent AI savior, we ultimately end up with a new form of techno-feudalism, powered by AI? What if a true superintelligence never emerges, and all of the energy and resources we’re pumping into AI is really just going to create a new class of elites, with the rest of us dependent on some sort of universal basic income and totally at the mercy of the owners, controllers, and operators of AI?

To me, this seems like a much more likely scenario—and from what I can tell, we are already in the opening phases of it. Generative AI has already become so powerful that it will likely replace a large number of jobs or render them obsolete—which may or may not be a problem in the medium- to long-term, but will certainly be a problem in the short-term. As increasing numbers of people find themselves unemployed, it will put a tremendous strain on our welfare safety nets and drive calls for increased government spending on social programs. But our governments are already so deep in debt that these pressures can only lead to some combination of (hyper)inflation, sovereign debt crisis, and austerity-driven political instability.

Some people think that the solution to all of this is a universal basic income (UBI). But every time a UBI has been tried, it has led to negative outcomes, including worse wealth outcomes. Unfortunately, if AI is truly going to be a huge driver of unemployment (which doesn’t require AGI or a superintelligence—our current models are already powerful enough to drive massive disruption in the labor market), then I don’t see how we can avoid a massive push toward UBI. Certainly not when our current investments in AI are so centralized—but again, all of the AGI doom-porn is driving us to centralize things even more. So while all of the benefits of this new technology accrue to Sam Altman, Elon Musk, Dario Amodei, etc., and they keep holding out the promise of a messianic superintelligent AI that never truly emerges, the rest of us end up in a world where we have very little agency or control over our lives, with or without a UBI.

It doesn’t have to be this way. But if we all keep buying into the doom porn without looking critically at these AI companies and their transhumanist messianic promises, I think that this is the future we’re most likely going to get.

Our world makes a lot more sense…

…when you realize that the internet is a factory for creating cults, and that social media and smart devices are force multipliers for this effect.

Before the internet, your “community” was a geographically bound group of people, who were diverse enough (that’s “diverse” with a lower-case d) to give you an interesting variety of perspectives and worldviews. Also, you typically interacted with each other while physically in person. If you said or did something extremely embarrassing, it typically didn’t get beyond your immediate circle of associates, or the people you decided to tell about it.

The internet changed everything by turning “community” into something that was bound by interests, hobbies, perspectives, or worldviews. Every person with a weird and perverse fetish, who before kept it hidden because they were the only person in their community who held it, could now find all the other people in the world who held the same weird and perverse fetish, and create a “community” around that thing. Same with crazy political views. Same with radical ideology.

At the same time, if you said or did something embarrassing, and it went viral, your embarrassing moment would be broadcast far beyond your immediate circle of associates, to people you had never before met—as well as to people whom you would never want to hear about it. This effect was multiplied by the development of social media, and it led people to self-censor and conform to whatever “community” they were a part of, for fear of standing out and going viral.

At the same time, all these “communities” turned into echo chambers that warped the various members’ view of reality. And because anger and outrage are the things that are most likely to get spread on the internet (see the video above), these echo chambers started to become paranoid and break off from the rest of the world, taking the dimmest and least charitable view of everyone who wasn’t a member of their “community.”

As these online communities came to take a more prominent place in the average person’s life than their own families and communities, the average person’s sense of identity increasingly became caught up in whatever hobby, fetish, or ideology united the “community.” And because of how paranoid these communities became, they increasingly came to demand absolute and preeminent allegiance. Is this starting to sound like a cult yet?

But it goes deeper than that, because the devices through which we connect with these “communities” actually make us more physically isolated from each other, while giving us the illusion of a genuine connection. When you’re holding up your smart device to capture a fireworks show, you’re not actually enjoying the fireworks. And when you’re lying in your bed, posting updates on your social media or chatting with your friends, you are still, in reality, lying alone in your bed. Combine that with the internet’s penchant for driving outrage, and you have the two key ingredients for a mass formation psychosis: a large group of atomized and isolated individuals suffering from free-floating anxiety.

Before the pandemic (that’s the Covid-19 pandemic of 2020, for future readers who may be wondering “which one?”), I think that we lived in a world where the majority of our countrymen—the members of our “community” in the traditional sense—were not caught up in one of these cults. Either most people weren’t caught up in one of these echo chambers, or most echo chambers hadn’t yet reached cult status; people were still generally reasonable, on the whole. But with the pandemic, I think we passed through some sort of threshold, to the point where now the best way to make sense of our world is to assume that the majority of people around you are trapped in some sort of cult—which may literally be the case, considering the theory of mass formation psychosis.

So what does this mean for where the world is headed? Nothing good. I suppose that in an optimistic scenario, a critical mass of people manages to break themselves and their friends out of this mess, and go on to build a new society with proper safeguards in place to prevent this sort of mess from happening again. But I think it’s much more likely that this thing runs its course, and large swaths of our civilization drink the proverbial Kool-Aid.

Fortunately, there is a script that we can run, as individuals and (more importantly) as families, to get through this mess. It’s the same script that we use to get ourselves or our loved ones out of a dangerous cult. I’m not yet an expert on that script, but I know that it’s out there, because cults have been a thing for a very long time. But I’m pretty sure it involves putting your family first, getting off of social media, limiting the amount of time that you spend on your smart devices, and becoming more involved in your real “community”—the real-life one where you actually live.

Why I’m not worried about AI replacing writers

So machine learning artificial intelligence has really been blowing up this past month, probably because of ChatGPT and all of the fascinating things that people are doing with it. I’ve been getting into it myself, using it to help write or improve my book descriptions, and also experimenting with it for writing stories.

At this point, any original fiction that ChatGPT writes is about the same quality as something written by an overly eager six-year-old (minus the grammar and spelling errors), but I can see how that could change in the future, especially with a large language model that’s trained on, say, Project Gutenberg, or the complete works of a couple of hundred major SF&F writers. The technology isn’t quite there yet, but in a few years it could be.

But apparently, that hasn’t stopped hordes of amateur writers and/or warrior forum types from using ChatGPT to spam the major magazines with AI-written stories. In fact, Clarkesworld recently closed to submissions because they were getting flooded with “stories written, co-written, or assisted by AI.” Neil Clarke wrote an interesting blog post on this problem, saying that this is a major growing problem for all of the magazines and that they will probably have to change the way they do business to deal with it.

So will AI eventually become so good that it replaces writers altogether? I don’t think so, and here’s why.

Replacement vs. collaboration

The gap between an AI that can do 100% of what a fiction writer can do and an AI that can do 90% is actually much wider than the gap between an AI that can do 90% and an AI that can only do 50%. That’s because both the 90%-effective AI and the 50%-effective AI require collaboration with a human in order to do the job. Neither of them can fully replace the human, though a human-AI team may be able to do the work of many humans working alone.

If we ever get to the point where AI replaces storytellers completely, we have much bigger problems than a few out-of-work science fiction writers. Storytelling lies at the heart of what it means to be human: we call ourselves “homo sapiens,” but we really should call ourselves “homo narrans,” since story is how we make sense of everything in our world. If an AI can replace that, then we as a species have become obsolete.

But I don’t think we’re going to ever reach that point. My wife is currently getting a PhD in computer science—specifically in machine learning and language models—and she believes that there is an inherent tradeoff between intelligences that can specialize well, and intelligences that can generalize well. AIs are master specialists, but humans are master generalists. If we ever build an AI that’s a master generalist, we may find that it’s actually much less intelligent than an average human, because of the tradeoff.

But all of that is purely speculative at this point. Right now, we really only have AIs that can do about 20% of what a fiction writer can do. In the coming years, we may ramp that up to 50% or even 90%, but anything less than 100% is not going to fully replace me.

Tools, force multipliers, and the nature of writing

However, that doesn’t mean that the thing we currently call “writing” isn’t going to change in some pretty dramatic ways, much as how the internal combustion engine dramatically changed the thing we call “driving.” And with these changes, we may very well get to the point where the market just can’t support as many professional writers, and the vast majority of us have to find other lines of work.

Conversely, it may actually expand the market for “reading” and create new demand for “writers,” as “reading” becomes more interactive and “writing” turns into an AI-mediated collaboration with the “reader.” Kind of like a Choose Your Own Adventure that writes itself, based on the parameters set by the “writer.”

I have no idea, but the possibilities are fascinating, and the writers who are sure to lose are the ones who fail to confront the fact that their whole world is about to change—indeed, is already changing.

I think what it’s going to come down to is who owns the tools: not just who can use them, but who can modify them, personalize them, and use them to create original work. If copyright law decrees that the person who owns the AI also owns anything created with the AI’s assistance, that is going to be a major buzzkill… unless we get to the point where everyone can have their own personalized AI, which would be pretty cool. It would also solve a lot of the problems emerging from all of the super-woke filters that are getting slapped on ChatGPT.

Personally, I’m looking forward to the day where I can use an AI model to write fifty novels across a dozen pen names in a single year. What an incredible force multiplier that would be! But only if those novels are “mine,” whatever we determine that means.

So really, instead of arguing about whether AI will replace authors, what we really ought to be talking about are the aspects of writing and storytelling that drive us to create in the first place, and how those aspects can translate into a world where the nature of “writing” looks radically different than it does right now.

The “but I already know how it ends” problem

There is one problem that is unique to the written word, and it’s something that every writer has to confront when making the leap from amateur to professional (or even just from an amateur who dabbles in prose to an amateur who finishes what they start). The problem can best be summed up by this:

Why should I bother writing this story if I already know how it ends?

Unlike visual media such as TV, movies, video games, or illustrations, the art of the written word exists 100% in the reader’s head. These things we call “words” are really just symbols that convey thought from one mind to another, and have zero meaning outside of the head of the person reading. If you don’t believe me, try picking up a classic novel written in a foreign language that you don’t understand, and see how well you enjoy it.

But when we read, we like to be surprised on some level. There is something about the novelty of the story that appeals to us—indeed, that’s why we call them “novels.” The trouble is that the very act of creating a novel kills the novelty of it. At some point, you know how it’s going to end, and after that point the act of writing becomes a chore—or rather, it can be, unless you find something else about the process that fulfills you.

Some professional writers deliberately put off that moment for as long as they can, never figuring out their ending until it comes as a surprise, even to them. Others look for fulfillment in something else, like the artfulness of their prose, or the dramatic suspense built up by their use of language. Still others just plow ahead, accepting this loss of novelty as a cost of doing business.

But however they choose to deal with it, every writer has to confront this problem in some manner before they make the leap from amateur to professional. And this is perhaps the biggest reason why I’m not too worried about AI replacing me as an author: because even an AI model that can do 90% of what I do will still require its human collaborator to address this problem.

Fanfiction and derivative works

Of course, the amateur vs. professional problem will affect some genres more than others: “write me a romance just like ____ where the male love interest has black hair instead and works in my office” is going to be just fine for a romance novel addict who just wants their happily-ever-after without any uncomfortable surprises. But we already have this: it’s called fanfiction.

Which is not to say that all fanfiction is formulaic and predictable. But what sets fanfiction apart from original fiction are the things that make it a derivative work: characters and settings that are already well-established, or a rehashing of storylines that were created by someone else.

This is an area where I think AI shows the most promise, and will turn out to be the most disruptive: not in creating original works, but in creating derivative works. Imagine if you could plug a novel into ChatGPT and tell it to rewrite the ending so that the girl ends up with your favorite character, or your favorite villain wins in the end. ChatGPT can’t do that very well right now, but I don’t think we’re far from building a large language model that can—especially if it’s trained on actual books, instead of online content.

What I foresee is a world where AI blurs the line between fanfiction and original fiction so much that it becomes normal to read a bunch of these derivative works after you’ve read the original. Indeed, it may become a game to see who can make the most popular derivative work, and the popularity of some of them may very well exceed the popularity of the original.

Or it might become normal to run everything you read through an AI filter that removes offensive language, or the sex scenes that you were going to skip anyway (or conversely, an AI filter that adds offensive language and sex scenes). Taken to an extreme, this could lead to some really dystopian outcomes that further divide our already polarized world. We’ll have to see how it shakes out.

But all of this derivative content is only possible if there’s original content to derive it from. And while AI may somewhat lower the barrier to entry for creating original content (or not, since there really aren’t any barriers to entry right now, aside from the time and practice it takes to become proficient at your craft), the problem of “but I already know how it ends” will keep most dabblers and amateurs in the realm of creating derivative works, not original ones.

The act of “writing” and “reading” may change dramatically based on the force-multiplying effect of these tools. We may even get to a point where “writing” and “reading,” as most of us understand it, bear little resemblance to how we understand it today. But unless our very humanity becomes obsolete, I’m confident that I will still be able to carve out a place for myself as a writer.

So I’ve been playing around with AI art…

Note: this post originally appeared in my newsletter, but I was so excited about it that I decided to post it here too. Enjoy!

So my wife is getting a PhD in computer science, which means that she’s on the cutting edge of research into things like language models and topic models and other techy stuff that I don’t totally understand.

A couple of weeks ago, she downloaded Stable Diffusion, an open source text-to-image program that creates AI art, kind of like DALL-E and Midjourney. Besides playing with it herself, she thought it might be useful for me to create my own cover art. So for the last two or three days, I’ve been playing around with it, and the results are absolutely amazing!

These were some of my first attempts. I’ve forgotten what the prompt was: I think it was something like “a spunky young woman with short black hair, surrounded by stars, in the style of Frank Frazetta and Minerva Teichert.” The difference between the first one and the second one was adding “in space.”

I also tried inputting a couple of paragraphs straight from my novel, including a lengthy description of this character, but the results were… uncanny. These AI art programs tend to do better if you give them short descriptions with only a handful of details.

The next day, I played around with it some more, and came up with this one:

The secret sauce for this one was adding “Minerva Teichert” and “Baen Books.” Who is Minerva Teichert? She’s a famous Latter-day Saint painter from the early 20th century who paid for her son’s tuition to Brigham Young University in original paintings, many of which are still on display in the BYU Museum of Art and the Joseph Smith Memorial building.

As you can see, there are still some weird artifacts to this piece, such as the stars on the character’s jacket. That’s the tricky part with AI art: if you look at it closely, you’ll find something weirdly uncanny, like a hand with seven fingers, or a person with three arms. The steepest part of the learning curve has to do with removing these uncanny bits, either by giving better starting prompts, or by tweaking it in subsequent iterations.

I believe the prompt for this one was “a dreamy young woman with short black hair, bare shoulders, in space surrounded by stars and galaxies. Minerva Teichert and Baen Books.” The original image was a woman in a space suit, but I used something called “image to image” to create new images based on the previous one, in batches of four. I would pick what I thought was the best one for that generation, and run the program again. That’s how I eventually got to this one:

and this one:

Still need to work on the hands. Also, there’s this weird artifact, almost like poor JPEG compression, that happens if you don’t give the program enough creative leeway with each successive generation. Another method I’ve heard of is to create a really large batch based on a given image, and then use GIMP to cut and paste all the pieces that you like from each one, before running it through one final image to image pass to seamlessly combine them.

A lot of people are either really angry or really scared about AI art and what it means for the future. It’s the same with other forms of automation, I guess. Will it replace artists entirely? Will all our art be 100% AI-generated in the future? Personally, I don’t think so. These programs are just another set of tools, and require quite a bit of practice to master.

Same thing with stuff like ChatGPT and other large language models that can be used to write poems and stories. It takes a lot of work to come up with an AI-generated story that isn’t totally boring and doesn’t have a terrible ending. It can be done, but it does require quite a bit of human input.

So I don’t see these tools replacing artists or writers, at least in the foreseeable future. Rather, I think that the successful artists and writers will be the ones who incorporate these tools into their workflow, using them as force-multipliers to make some really amazing stuff. Personally, I would absolutely love it if I could use something like ChatGPT to put out a new novel every month, or even every week.

The other thing about novels is that most people only read them once, because they already know what’s going to happen. So if you use an AI to write a novel, but you have to feed it all the twists and plot points… what’s the point? You’ve basically already read it. This is a problem that a lot of amateur writers have with outlining: since they already know how the story is going to end, they find it difficult to sit down and write.

Now, what I could see is a prompt like “rewrite Lord of the Rings so that Sauron wins,” or “rewrite such-and-such romance novel so that this other guy ends up with the girl.” Or “make Lord of the Rings a gritty cyberpunk novel,” or… you get the picture. And honestly, I’m fine with that. If someone who enjoyed the “alpha” version wants to create a “beta” or a “gamma” version for fun, that’s cool. It might be kind of fun to see how an AI tweaks my books.

What isn’t cool is if someone takes that beta or gamma version of my novel and tries to sell it under their own name. And that’s where most of the legal stuff needs to be hammered out, over issues like copyright. I’m not going to use Stable Diffusion to remove watermarks, or to take someone else’s copyrighted art so that I can enjoy a derivative product without having to pay the artist. And when it comes to using prompts, I’m going to err on the side of using artists like Minerva Teichert who have already passed away, or large publishing houses like Baen whose style doesn’t belong to a single artist.

So after playing around with it some more, I finally came up with some concept art for my current novel WIP and used it to throw a cover together! What do you think? This isn’t going to be the final version—in fact, I will probably produce quite a few other test covers before I settle on the one I like. But for my current skill level (still beginner), I’m quite pleased with how it turned out!

Operation SB #2: The Open Source Time Machine

Title: The Open Source Time Machine
Genre: Science Fiction
Word Count: 3,247
Time: About 10 days

I felt really good after finishing this short story. The last line in particular surprised me, which is always a good sign. I think this story is going to go places.

The idea for this one actually came to me about four months ago. I imagined an inventor trying to convince a bunch of investors to fund his time travel development project by calling on his future self to appear to them. He fails–his future self never shows up–but after the meeting has ended in failure, he goes home and finds his future self waiting for him there. Why wouldn’t he go back in time to help himself get the funding to develop his project? That was the core idea that became this story.

I wrote out a couple of pages of that one before getting frustrated and trunking it. Then, about ten days ago, I broke my operating system (Ubuntu) and had to upgrade/reinstall it three times before it would work again. For Linux users, that’s kind of like a rite of passage. It was frustrating, but also kind of awesome because of all the stuff I learned from it. Open source technology is really, really cool.

Around the same time, I read Program or Be Programmed: Ten Commands for a Digital Age by Douglas Rushkoff. Fascinating book, especially if you’ve got a job/lifestyle where you spend more than 50% of your waking life in front of a screen. Rushkoff is a technology theorist, and this book is about all the subtle ways in which computers, social media, the internet, and other modern technologies can be used to manipulate us if we aren’t careful. His ideas are brilliant and his perspective is fascinating, so his book definitely got me thinking about things.

With both of these things on my mind, I went for a long walk while taking a break from my writing. Short stories were also on my mind, since I was wondering what I should write about for the month of January. The old time travel idea popped up, and everything just sort of melded together until I had the story.

I wrote the first half of it the next day … and then sat on it for a little over a week. I’m not sure why I did that–maybe I was just nervous about screwing it up or something. By far, the hardest part about writing is getting out of your own damned way. Yesterday, I finally buckled down and forced myself to finish the thing, and it actually turned out pretty well. Took the whole day to finish it, but it’s finished and that’s what’s important.

So after touching it up this morning, running a spell check and tweaking a couple of relatively minor things, I put it out on submission. That’s two stories I have on submission now: “The Infiltrator” got rejected from Clarkesworld, but it’s out at Analog now so we’ll see how that goes.

I think my short form is getting better, though there’s still a lot of room for improvement. I’m going to start running these stories through Kindal’s writing group, even though I’ll put them out on submission as soon as they’re finished. The feedback will be useful in writing the next one.

No idea what the next short story is going to be about. Maybe I’ll go through some of my old story idea notebooks and see what comes together. Or maybe a story will just come to me, and I need to position myself so that I’m ready to capture it on paper when it comes.

We’ll see. In the meantime, I’m very pleased with this one.

Trope Tuesday: After the End

It’s the end of the world as we know it … so why do we feel fine?

On the apocalyptic scale of world destruction, when the thing that wipes out civilization doesn’t quite kill everyone, we’re left with an After the End type setting.  Depending on where the writers fall on the sliding scale of idealism vs. cynicism, this may range from a futuristic Arcadia to a crapsack post-apocalyptic hell on Earth.

Whatever the case, expect to see lots of modern ruins and schizo tech mashups (horse-driven cars?  Wood-wheeled bicycles?).  If anarchism reigns, expect to see lots of punks roaming the wastelands in muscle cars and motorcycles.  If Ragnarok Proofing is in effect and the ruins of civilization haven’t quite decayed yet, expect some variation of a scavenger world.  And if someone from our modern era finds himself lost in this bizarre post-apocalyptic future, expect him to find some sort of constant to reinforce that he’s not in Kansas anymore.

Unlike dystopian settings, where society evolves (or is deliberately turned) into a horrible, hellish place, a post-apocalyptic setting represents a reboot of civilization itself, where one society has passed away and a new one is slowly picking itself up from the ashes.  It has the potential to be a lot more hopeful, and to give the reader a lot more wish fulfillment.  After all, who wouldn’t want to be one of the lucky survivors tasked with rebuilding civilization?  Sure there may be zombies or nuclear nasties wandering about, but on the plus side, you don’t have to worry about your bills or your deadbeat job anymore.

Douglas Rushkoff has some interesting ideas about why this type of story is becoming more and more popular nowadays.  In his new book Present Shock, which he’s been promoting recently, he argues that many of us are so overwhelmed by a world where everything happens now that we wish we could end it all and start over.  When we live in an ever-changing present, without a coherent narrative to connect us to our past or our future, we long for something to restore the sense that we’re part of a larger story, even if that story is racing towards a horrible, tragic end.

But every ending is a new beginning, and that’s what lies at the very core of this trope.  When our world passes away, what will the new world look like that takes its place?  Will we learn from our mistakes, or are we doomed to repeat our worst atrocities?  Will we eat each other like dogs, or will we tap into some deeper part of human nature where mercy and compassion lie?

This is all on my mind right now, because I’m writing a post-apocalyptic novel (with the working title Lifewalker) that takes place in Utah 200 years after the end.  Humanity was hit by a plague that kills everyone over the age of 25, so that the only people left are orphans, young adults, and their babies.  It’s fascinating to wonder what from our era would fall apart and what would remain–what would be preserved, and how the new society would take shape.

But it’s not the apocalypse itself that I’m interested in, so much as what happens after things stabilize.  The main character is one of the few people who’s immune to the plague, so naturally he feels like a complete outcast.  He’s walking the Earth, riding down the ruins of I-15 with a copy of Brandon Sanderson’s Mistborn in his saddlebag.  And the people he meets … well, let’s just say I wasn’t very kind to Las Vegas.

I think that’s another part of the appeal of this trope: it takes our own world and twists it into something fantastic, so that instead of having to wrap our minds around a whole new set of history and physics, we can build on the familiar in wild and interesting ways.  A Canticle for Leibowitz did this very well, with another post-apocalyptic tale set in Utah.  However, the most famous example is probably the movie I Am Legend.  I love those long panoramic shots with Will Smith hunting deer in Times Square, or hitting golf balls off the wing of a fighter jet.  Stuff like that really sparks the imagination because it combines something familiar with something wild and different.

Believe it or not, this trope has actually happened in real life.  After the bubonic plague swept across Europe, whole cities were depopulated, with mortality as high as 60% in some places.  When the Pilgrims settled at Plymouth, they were actually building over the ruins of a large Indian settlement that had been wiped out by smallpox just a few years before.  And based on DNA evidence, some scientists have argued that all of modern humanity is descended from a small population of survivors of a global volcanic eruption some 70,000 years ago.

So yeah, this is definitely a trope I like playing with.  I’m on track to finish Lifewalker by the end of May, so you can definitely expect to hear more about it in the weeks and months to come.

Also, for those of you looking for resources to help you visualize what the world will look like after the end of human civilization, here are a couple of excellent resources I’ve found.  First, check out The World Without Us, an excellent book written by an environmentalist that poses a basic thought experiment: what would happen if all humans everywhere magically vanished, and all that was left was the stuff that we’ve built?  What, if anything, would remain? (spoilers: not much) If you want to explore that idea but you don’t want to read the whole book, check out this wiki on Life After People, a series of History Channel documentaries that basically posed the same question.  The answers may surprise you.

Plans for Edenfall

I’m trying something a little different with Edenfall: I’m writing the first draft entirely in longhand.

I first got the idea a couple of years ago, when I was camping in Moab.  The beautiful landscape of southern Utah made me realize that I wanted to write Edenfall while experiencing that sort of connection with nature, and pen and paper seemed to be the best format in which to do that.  This year, when I decided that I’d definitely write it, I ordered the notebook on the left and fitted it out for the project.

With every novel I write, I like to challenge myself in some new way.  In Genesis Earth, I tried out a first person POV with an unreliable narrator.  In Bringing Stella Home, I tried to write a believable female viewpoint character.  I also like to experiment with my writing process, trying out different outlining techniques and writing schedules.  Sometimes, these experiments fail spectacularly, but they also teach me a lot and keep me sharp.

The goal with this experiment is to see how divorcing myself from my computer (with all its myriad distractions) and getting out in nature changes my writing.  I live a short bike ride from the Provo River Trail, and weather permitting, that’s where I’ll probably spend most of my writing time in the next few days. Besides, I want to see how much of a difference the format makes.

Books existed long before word processors, so I have no doubt that writing a novel longhand is entirely possible.  How much of an adjustment it will be remains to be seen.  My handwriting is messy, and I can’t write as fast as I can type, but that hardly matters since rough drafts are slow going for me anyways.

In any case, it’s going to be interesting to see how it turns out.  It’s been a little slow so far, but that’s mostly because I haven’t settled into a routine yet.  By the end of this week, I hope to be fully immersed in the world of this story.

In other news, I sent off the manuscript for Sholpan to my editor, and he just got finished with his first pass, so I’m hoping to get the edits back in a couple weeks and have it epublished by mid-September.  More on that as things develop.

Also, an old friend from Brandon’s 318 class posted a favorable review of Bringing Stella Home up on Amazon.  He was one of my first readers back when the story had a lot of problems, so I’m glad he enjoyed the final version.  Thanks Stephen!  And yes, I’ve got a lot more novels forthcoming in the Gaia Nova universe, including a direct sequel to Bringing Stella Home.  Will the McCoy family save the universe from the Hameji?  Well…you’ll see. 🙂

Finally, I plan on participating in the Out of This World blog tour being organized by the SFR Brigade, which means you’ll be seeing some guest posters in the near future.  That’ll probably wrap up the Genesis Earth blog tour too, since it’s been winding down for the last month or so.  If I agreed to write a guest post for your blog and haven’t done so yet, let me know and I’ll do my best to get that out to you.  Sorry to be a bit of a flake these past few weeks; I’ll try to organize my next tour a little better.

And that just about does it for now.  I’ll be sure to keep you posted on how things go with Edenfall.  Until then, take care, and thanks for reading!

Are ebooks there yet? My response to Wired

I just read an interesting article on Wired putting forth five reasons why ebooks aren’t yet better than print books.  I find it mildly interesting that Publisher’s Weekly linked the article on Twitter; the more things change, the more that people in traditional publishing seem to plug their ears and pretend like it isn’t happening.  However, I disagree with the article’s reasons, and here’s why:

1) “An unfinished e-book isn’t a constant reminder to finish reading it.”

The solution?  Writers need to write better books–and because of the pressure that this problem exerts, I believe they will.  If print publishing resists the ebook revolution long enough, well-established indie authors might well develop a reputation for better written, more engaging page turners than traditionally published authors.

2) “You can’t keep your books all in one place.”

I’m not plugged into the tech world, but I imagine that this problem will be solved rather quickly once readers start complaining.  This is a tech problem, and the tech industry is far better at change and innovation than traditional publishing.

3) “Notes in the margins help you think.”

I don’t mean to put down any of my friends who do this, but…seriously?  How many of you write in the margins as you read?  It’s probably more of an issue with literary fiction, but with science fiction and fantasy, most of us read for story, and the best books are the ones we finish at a breathless sprint at 4:00 am the next morning.  When it comes to the genres I write in, I think this is a non-issue.

4) “E-books are positioned as disposable, but aren’t priced that way.”

This one is my favorite.  Sure, traditional publishers are overpricing their ebooks, but that just opens the door for hordes of indie authors (like me) to undercut them and earn more on their own than they would if they took a traditional publishing deal. In addition, all the longtime professional authors I know are doing everything they can to jump ship, which is only going to bring about the crash of the traditional publishing model all the sooner.

In all honesty, I hope that traditional publishers continue to overprice their ebooks as long as they can.  The more they tick off readers with bloated prices, the greater an advantage my books will have over theirs.  And the more readers buy indie, the more money goes to supporting authors, as opposed to overpaid corporate officers and ridiculously expensive New York rents.

5) “E-books can’t be used for interior design.”

Two responses: 1) how many people do you see these days with CD racks in their living rooms, and 2) why do you think people still buy vinyl?

When the iPod came around, people didn’t let this argument stop them from switching their collections to mp3 and boxing up all their CDs.  When a new technology arrives that is demonstrably superior to the old, culture adapts to fit around it.

At the same time, I have no doubt that print books will continue to exist.  People still ride the California Zephyr even though we have airlines, and they still buy vinyl even though we have mp3s.  It may well be that the half-dozen collectible leather-bound hardcovers you own in the age of ebooks will say more about you than the hundred or so secondhand paperbacks you have on your shelf now.

The technological singularity: a thing of the past?

One of the latest trends in science fiction is the concept of the technological singularity — the point in history at which technological advances occur so rapidly that we can no longer learn the new stuff fast enough to keep up with it.

I hear a lot of people talk about this at cons, and I’ve read/listened to quite a few stories about this concept.  Basically, these stories posit a world where science has become a new magic, and our world has been transformed beyond all intelligible recognition.

However, a recent post on the excellent Rocketpunk Manifesto blog made me wonder if we’ve already passed the point of singularity in our own society.  The post basically asserted that the period 1880 to 1930 saw so many sweeping technological advances that the world in 1930 would have been unrecognizable to a person from 1880, whereas our current society would still be intelligible to a person from 1930.

This made me wonder: how far into the singularity have we already come?  How much of our technological infrastructure has become so advanced that the common man lacks the capacity to comprehend it?

Think about it.  Fish around in your pockets and pull out your phone.  Do you understand how it works well enough to take it apart and put it together again?  To rebuild the device from parts?  Do you own the tools and machinery to construct the parts from which it is made?

How about the building in which you currently find yourself?  Do you possess the knowledge to build a comparable structure that performs the same functions?  That keeps you sheltered and provides the same light, heat, electricity, and internet connection that you now enjoy?

There was a time, not too long ago, when people would move out to the wilderness and homestead land by building their own homes from available natural resources.  If you needed to build your own house, as so many people used to do, could you do it?

How about your means of transportation?  If necessary, could you take apart your car and rebuild it again from the ground up?  Could you perform basic maintenance on it if you needed to?  How many of us can change our own oil–and how many of us are dependent on others for such a simple service?

Or what about the things we take most for granted–our understanding of the way the universe works?  Do you really understand the principles of physics?  Do you comprehend how electricity or magnetism actually works, or are you still thinking in oversimplified terms, like electrons flowing through a circuit like water?  Even the most brilliant physicists have yet to reconcile quantum mechanics with general relativity, so what makes you think you know so much?

How much of what we think we know is really just an illusion, meant to keep us pacified and docile?  To give us a false sense of security–that someone is in control, so we can rest easy?  Does anyone REALLY understand 100% how the economy works?  Do any of us know who or what is really in charge anymore?  Have we unwittingly handed over the reins of control to some digital algorithm so basic to our newly networked way of life as to be practically invisible?

Looking at how few of us are truly self-sufficient, and how much power we’ve ceded to forces beyond our control, our modern society seems so delicate and fragile.  Can anyone REALLY say that our society is not in danger of falling apart?  That our way of life is not an unnatural and unsustainable aberration?

Anyhow, those were some of my initial thoughts.  The more I compare the science fiction of the past with the reality of the present, the more predictions I see coming true in the most unexpected of ways.  The singularity may have less to do with uploaded consciousness and more to do with Google’s search algorithms than we are comfortable admitting.  And realistically, the light bulb may prove to be more revolutionary than anything Apple has ever produced or ever will.