A chilling solution to the Fermi Paradox

The Fermi Paradox is a classic problem in both science and science fiction. Put briefly, the paradox is this: if the natural conditions that led to the development of our human civilization are not unique, and it is reasonable to assume that alien civilizations more advanced than our own have developed elsewhere, then why haven’t they tried to contact us? In other words, if we aren’t alone in this universe, then where have all the aliens gone?

A number of possible solutions to this paradox have been proposed. Perhaps the aliens just don’t find us interesting enough to reach out. Perhaps we just don’t have the technology to contact them. Or perhaps there’s some sort of “great filter” that prevents alien civilizations from becoming spacefaring, or from becoming more advanced than our own. For example, perhaps when alien civilizations discover nuclear weapons, they destroy themselves in a spectacularly suicidal war.

All of these are interesting… but they’re also very naive. They assume that if aliens did try to contact us, everyone on Earth would know about it. But is that really the case?

If an alien civilization made contact with our own, who would be the first humans to learn about it, and who would be the last? Or in other words, if aliens made limited contact with a few humans, how likely would those humans be to share that information with the rest of us, and how likely would we be to believe them?

If aliens did make contact with us, it would almost certainly be limited in scope. To illustrate this, let’s break down their contact strategy based on hostile vs. peaceful intent, and whether or not they want to stay hidden:

|             | Hostile Intent | Peaceful Intent |
| ----------- | -------------- | --------------- |
| Stay Hidden | Infiltration mission: choose human targets selectively | Observation mission: gather data from a distance |
| Come Out    | Invasion mission: reduce human ability to organize and resist | Diplomatic mission: prioritize contact with human leadership |

In each of these strategies, the aliens gain nothing by doing a massive flyby and showing themselves to all of us at once. Even in the case of an invasion mission, they’d probably only want to do that if 1) they had overwhelming force, and 2) they decided to run some sort of shock-and-awe campaign, like Independence Day. But what exactly would they gain from that? Even if they did have overwhelming force, why would they want to present a clear target when they already have the element of surprise?

Point is, in most of these scenarios, the aliens would either want to limit their activities to the fringes of human society, or to establish contact with the human leadership first. Therefore, the first humans to learn about these aliens are either going to be the kind of people the rest of us can easily dismiss, or our leaders, who have every incentive to keep the knowledge of these aliens hidden, as the disruption it would cause would threaten their own power.

Put simply, the solution to the Fermi Paradox may have less to do with the aliens and more to do with us. After all, if aliens really had made contact with humanity, what makes you think you would know?

What Brandon Sanderson gets wrong about AI and writing

Last week, Brandon Sanderson posted a video from a conference where he gave a talk titled “The Hidden Cost of AI Art.” In it, he argues that writers who use AI are not true artists, because the act of creating true art is something that changes the artist. This is true even if AI becomes good enough to write books that are technically better than human-written books. Therefore, aspiring authors should not use AI, because it’s not going to turn them into true artists. Journey before destination. You are the art.

Obviously, I disagree very strongly with Brandon on this point. For the past several years, I’ve been reworking my creative process from the ground up, in an effort to figure out how best to use AI to not only write faster, but to write better books. I’ve experimented with a lot of different things, some of which have worked, most of which haven’t. And I’ve published several AI-assisted books, many of which have a higher star rating than most of my human-written books. So I think it’s safe to say that I have some experience on this subject, at least as much as Brandon himself, if not more.

Brandon compares the rise of generative AI to the story of John Henry and the steam-powered rock drill, where John Henry beat the machine but died from overexertion. The moral: man can still beat the machine, but the machine went on to change the world anyway.

But I don’t think that’s the right story when it comes to AI. It’s far too simplistic, pitting the AI against the artist. Instead, I think it’s better to look at how AI has changed the world of chess. For a long time, people thought that a computer would never be able to beat a human at chess. Then, in 1997, IBM’s chess computer Deep Blue beat world champion Garry Kasparov, proving that computers can beat even the best humans at the game. So now, all of our chess tournaments are played by AI, and humans don’t play chess at all. Right?

Of course not. Because here’s the thing: even after the engines surpassed us, a human who used AI could, for years, beat even the strongest standalone chess engines. There have been “freestyle” tournaments where human-AI teams play against each other. They were never as popular as the human-only tournaments, since we prefer to watch humans play other humans, and the best human chess players prefer to play the game traditionally. But when they train, all of the top grandmasters rely on AI to hone their craft and sharpen their skills.

Chess is a great example of a field that has incorporated AI. And even though AI can play chess better than a human, it has not replaced human chess players, and it never will. Because ultimately, asking whether humans or AI are better at chess is the wrong way of looking at it. AI is better at some things, and humans are better at others. The best results happen when humans use AI as a tool, either in training or in actual play. And because of how it has incorporated AI, the game of chess is more popular now than ever.

Brandon spends a lot of time angsting about whether AI writing can be considered art. Perhaps when I’m also the #1 writer in my genre, and have amassed enough wealth through my book sales that I never have to work another day in my life, I can also spend my days philosophizing about what is and is not art. But right now, I prefer a more practical approach. I’m much less concerned about what art is than I am about what it does. And the best art, in my opinion, should point us to the good, the true, and the beautiful.

Can AI do that? Can it point us to the good, the true, and the beautiful? Yes, it can, just like a photograph or a video game can—both examples of counterpoints that Brandon brings up. But as with the game of chess, a human + AI can create better art than a pure AI left to its own devices. I suspect this will remain true, even if we reach the point where AI art surpasses pure human-made art. Because at the end of the day, AI is just a tool.

But what about Brandon’s point that “we are the art”? Isn’t it “cheating” to write a book with AI? Doesn’t that demean both the artist and the creative act?

It can, if all you do is ask ChatGPT to write you a fantasy story. Just like duct-taping a banana to a wall and calling it “art” is pretty demeaning (though you’ll still get plenty of armchair philosophers debating about whether or not it counts, highlighting again how useless the question is). But if you spend enough time with AI to really dig into what it can do, you’ll find that it’s no less “cheating” than pointing a camera and pushing a button.

One of the first AI-written fantasy stories I generated was a story about a half-orc. I wrote it using ChatGPT while my wife was in labor with our second child. We were both at the hospital, and I had a lot of down time before the action really began, so I used those few hours to write a 15k word novelette. It was fun, but the story itself was pretty generic, which is why I’ve never published it.

Basically, it read like an average D&D fanfic—which is exactly what every AI-generated fantasy story turns into if you don’t give it the proper constraints. If all you do is ask ChatGPT to tell you a story, it will give you a very average-feeling story. Every fantasy story turns into a Tolkien clone or a D&D fanfic. Every science fiction story turns into Star Trek. It may be fun, but it’s not very good. Just average.

My first AI novel was The Riches of Xulthar, and I wrote it quite differently. Instead of just running with whatever the AI gave me, I picked and chose what I wanted to keep, discarding the stuff that didn’t work very well. But I still didn’t constrain the AI very much, so it went off in some pretty wild directions, which made it a challenge to decide what was worth keeping. The story ended up going places I never would have taken it on my own, but the end result was something I could still feel good about putting my name on. And of course, after generating the AI draft, I rewrote the whole book to make sure it was in my own words. That also helped to smooth out the story and make it my own.

Since writing The Riches of Xulthar, I’ve written (or attempted to write) some two dozen AI-assisted novels and novellas. Most of them are unfinished. Some of them are spectacular failures. I’ve published another half-dozen of them, most in the Sea Mage Cycle.

It was while I was working on the latest Sea Mage Cycle book, Bloodfire Legacy, that I finally felt I was getting a handle on how to write something really great with AI. The key is constraints. AI does best when you give it constraints that are clear and specific. The more you constrain it, the more likely you are to get something that rises above the average and approaches something great.

But to do that, you have to have a very clear and specific idea of what you want your story to look like. Which means you have to have a solid outline (or at least some really solid prewriting), and a deep understanding of story structure.
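As a purely hypothetical illustration of the principle (the beat fields and wording here are made up for this sketch; they aren’t my actual workflow or any particular AI tool’s API), turning one outline beat into explicit constraints might look something like this:

```python
# Hypothetical sketch: converting an outline beat into clear, specific
# constraints for an AI drafting prompt. Field names and wording are
# illustrative assumptions, not a real workflow or tool API.

def build_scene_prompt(beat: dict) -> str:
    """Assemble a constrained drafting prompt from one outline beat."""
    constraints = [
        f"POV character: {beat['pov']}",
        f"Scene goal: {beat['goal']}",
        f"Conflict: {beat['conflict']}",
        f"End state: {beat['outcome']}",
    ]
    return (
        "Draft this scene, staying strictly within these constraints:\n"
        + "\n".join(f"- {c}" for c in constraints)
    )

# A made-up example beat from a made-up outline:
beat = {
    "pov": "Kaelen",
    "goal": "steal the harbor master's ledger",
    "conflict": "the night watch doubles its patrols",
    "outcome": "ledger taken, but Kaelen is seen",
}
print(build_scene_prompt(beat))
```

The point isn’t the code; it’s that every field forces you to make a specific decision about the scene before the AI generates a single word. That’s what keeps the output from drifting back toward the generic average.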

I think the real reason Brandon is so opposed to AI writing is that it negates his competitive advantage—the thing that has made him the #1 fantasy writer. Without AI, the biggest bottleneck for new and established writers is putting words on a page. Brandon made a name for himself with his ability to write a lot of words relatively quickly. Where other fantasy writers like Martin and Rothfuss have utterly failed to finish what they start, Brandon finishes everything that he starts, and he starts more series than most other writers finish. This is why he’s known as Brandon Sanderson, and not just “the guy who finished Wheel of Time.”

But generative AI removes this bottleneck. Suddenly, putting words on the page is quite easy. They might not be good words, but they might be as good as Brandon Sanderson’s words. After all, his prose isn’t exactly the most brilliant of our time. Deep down, I think Brandon feels this, which is why he sees AI as such a threat.

Will writing with AI make you lose some of your writing skills? Probably. I suspect it’s much like how using AI to code will make you weaker at coding, at least on a line-by-line level. But coding with AI will make you a much better programming architect and designer, since it frees you up to focus on the higher-level stuff.

In a similar way, I expect that the new bottleneck for writing will have to do with the higher level stuff: things like story structure and archetypes. The writers who will stand out in an AI-dominated writing field will be the ones with a deep and intuitive understanding of story structure, who can use that understanding to get the AI to produce something truly great. Because if you understand story structure, you can write better constraints for the AI. Pair that with a good sense of taste, and you’ve got an artist who can make some really great stuff with AI.

This is why I think Brandon’s views on AI art are not only misguided, but actually toxic. Love it or hate it, AI is just a tool. Using it doesn’t make you any less of an artist, just like using a camera vs. using a paintbrush doesn’t make you any less of an artist.

There Is No “AI Bubble”

In my various and sundry travels over this desolate wilderness we call the internet, I’ve recently heard a lot of people talk about this thing they’re calling the “AI bubble.” The basic theory is that all of this AI development is being artificially propped up, that it isn’t nearly as profitable or as transformative as the AI proponents claim. When the music stops and the curtain gets pulled back, all of these AI companies will collapse, and all of this AI that nobody asked for will get scaled back to something normal. Or something like that.

But here’s the thing… the fact that so many people are talking about the “AI bubble,” to the point that it’s now a talking point, is pretty strong evidence that it’s not actually a bubble. When a true economic bubble happens, nobody calls it a “bubble” because everyone is so euphoric about it. Indeed, it’s that very euphoria that fuels the bubble. Housing prices only go up, donchaknow. AOL and Pets.com are totally the way of the future. So shut up and mortgage your house so you can buy the latest tulip.

With AI, though, it seems that all of the most vocal people are anti-AI and want it all to go away. Indeed, the main driver of all this “AI bubble” talk seems to be fear that AI will drive large numbers of people out of work. So what’s actually happening?

I do think there is a bubble in our economy, but I don’t think it’s being driven by AI. Rather, I think what we have is a debt bubble, which is very close to unwinding in a catastrophic way. The only way to stop that from happening is to grow the economy faster than the debt bubble is inflating, but at this point, the only way to do that is through some hugely transformative new technology, such as generative AI.

So all of the forces that want to keep propping up this debt bubble have turned to AI as the salvation of our economy, pumping billions and billions of dollars into it in the hopes that it will yield the sort of economic growth that will allow them to keep growing the debt. But for ordinary people, it’s a lose-lose scenario, since if AI succeeds, lots of us will be out of work… but if AI fails, the economy collapses and lots of us will also be out of work. Hence why so many ordinary people see AI itself as the problem.

Here’s what I think is ultimately going to happen: AI will prove to be super transformative in the long run, just like the internet, but it won’t save us from the debt bubble the way that our business and political elites so desperately hope that it will. The debt bubble is going to pop, and we are going to have to face up to the consequences of decades of very bad fiscal and monetary policy, with or without AI. But after the dust settles, AI will play a major role in the rebuilding of the economy, for good or for ill.

Slop is not an AI problem

I don’t generally have much time these days to argue with strangers on the internet. While on the whole, that’s certainly a good thing, it also means that I tend to be out of the loop when it comes to most of the current cultural debates.

One term that I see a lot of these days is “AI slop.” It’s always used in a derogatory way, and seems to be paired with the ongoing debate about the ethics or desirability of AI-generated content in various settings. I haven’t been following that debate very closely, but I can tell that there are some very strong anti-AI feelings out there, and some very vocal and passionate people espousing them.

But is the “slop” really an AI problem, or a symptom of something greater? I tend to think the latter, and here’s why.

I watched this video recently, about how most restaurants these days are producing literal slop. According to Matt Walsh, the reason (in case you don’t have twenty minutes to watch the video) is basically that all of these restaurants have been taken over by investors who are looking to maximize the value of their investment, and the best way to do that is to cut costs down to the bone and put out a minimum viable product.

It strikes me that “minimum viable product” is basically just another way of saying “slop.” It’s just barely good enough that people will generally consume it, but not so great that it takes a lot of time or energy to produce. As an example:

My kids love watching lego videos. In fact, they are starting to become low-key addicted to them (which we are doing our best to keep from getting worse). But within this genre on YouTube, there are some really good videos, like the one above… and this one, which my daughter insists on watching every day.

The first video features some truly elegant designs, with a detailed breakdown not only of how to build them, but how they operate, complete with foot paths, frame paths, etc. Even after watching the video some two or three dozen times, I am genuinely impressed by some of these models.

The second video is an obvious copycat video, with some slap-dash, crappy designs that look like zero thought went into them at all. I mean, seriously? Square wheels? And what’s with the two-legged walker, with the weight on the far back? More like “dragger” than “walker”—at least give the thing a wheel! And the tilt-rover? All the weight is on the back wheel, but the thing is front-wheel drive—of course it’s going to fail all the tests!

But even though the content itself is obvious copycat slop, slapped together quickly to capitalize on a trend within the genre (the YouTuber even tries to “hack” the algorithm by mashing two videos together, kind of like how some authors mash books together to maximize KENP page reads), my daughter still wants to watch this video more than the higher-quality one. Why? Probably because of the flashier visuals and music, which make the slop more appealing on a surface level.

Here’s the thing, though: as far as I can tell, there was no AI involved in making the lego video slop. It appears that the YouTuber actually built and actually tested these lego models. I could be wrong about this, of course, but I’ve watched these videos so many times with my kids that if there were any AI-isms in the video, I think I’d be able to spot them.

And then, we get something like this:

From what I can tell, every part of this video is made with AI, down to the actual writing (what kind of human would write “tunnels run like sacred veins”?) and the musical performance—and of course, the stunning visuals. But is it slop? The YouTuber appears to be a shitposter and meme-artist, which means he probably made this thing for the love of making it. And after watching it a couple of times, it really shows. Not like it’s fine art, of course, but there is so much packed in here—so many easter eggs and veiled cultural references—that even after watching it a dozen times, I am genuinely impressed.

So is that slop? It’s obviously AI, but is it a “minimum viable product”? I honestly don’t think so. Rather, I think the creator had something burning within him that he wanted to create, and he poured all of that into his creation, using AI tools to do all the things that he otherwise couldn’t have done. And the result is genuinely impressive. Seriously, I can’t stop watching it.

So is “slop” an AI problem? I don’t think so. Rather, I think that the explosion in poor-quality AI generated content is revealing our modern, capitalist, consumer culture’s tendency to settle for a minimum viable product rather than strive for excellence and greatness. We were getting slop long before we had AI. The only thing that’s fundamentally changed is that AI is increasing the quantity—and frankly, the quality—of the slop.

What if it’s all hallucination?

I’ve been thinking a lot recently about something my wife said about AI. She’s finishing up her PhD in computer science, and knows more about generative AI and computational linguistics than just about anyone I know IRL (and most of the people I follow on the internet, too). So when she speaks on the subject, I do my best to listen.

Ever since OpenAI and ChatGPT took the world by storm, she’s been telling me that she doesn’t think the hallucination problem (where LLMs make stuff up) will ever be solved. Indeed, she doesn’t think it’s a “problem” in a technical sense at all, because every response from a generative AI is a hallucination—and that’s kind of the point. These aren’t really thinking machines, they’re hallucinating machines, replicating patterns in human language and thought. What difference does it make if the answer is true or false?

We call it “artificial intelligence,” but that’s really a misnomer, because these machines have no “intelligence” at all—at least, not in the human sense. Instead, they are like mirrors of our own intelligence, parroting back things that sound like they involve real thought, when really it’s all just pattern replication. They aren’t trained to recognize truth, they’re trained to recognize patterns. So, in reality, everything an AI generates is a “hallucination.”

This is why she thinks that we will never fully solve the hallucination “problem.” Indeed, the whole effort is a bit like trying to turn a lion into a vegan. And until we can train an AI on absolute truth—a thing that humanity has never been able to agree upon, much less reduce to zeroes and ones—then all we will really be able to do is create better and better plumage for our stochastic parrots.

What are the implications of this? First of all, we can safely ignore the worst of the AI doom porn, because a machine that cannot fundamentally recognize truth from falsehood is probably not capable of taking over the world and exterminating or enslaving humanity, even if it does qualify as a “general” intelligence.

We can also lay aside the fear (or the pipe-dream) that AI will 100% replace humans in all or most or really any fields. Even if they can do 90% of the work, recognizing truth is still an essential part of just about everything we as humans do. We can give it jobs and tasks—perhaps even some genuinely complex tasks—but so long as these machines cannot fundamentally distinguish between truth and falsehood, we will still need a human to oversee them.

That doesn’t mean that most humans are safe from being replaced by AI, though. If an AI-augmented person can accomplish the work of 10 or 100 other human workers, we’re still going to face a massive disruption in the labor market and in society as a whole. The question, then, is one of ownership and distribution. Who owns the AI? How do we distribute the productivity gains from AI? These are some of the difficult problems we need to solve in the next few years.

But the real problem—and the scariest implication of all of this—is the question of truth itself. After all, if AI is fundamentally incapable of recognizing truth, and all AI output is hallucination on some level, then who determines what is true and what is not? Sam Altman? OpenAI? Congress? Some three-letter government agency?

I think this is going to be the defining question of the rising generation, which is growing up in an AI-native world. What is truth? How can we recognize it? How do we distinguish between what is true and what is false? Increasingly, we are going to find that these are questions that AI cannot answer. And in a world saturated by deep fakes, bots, and sock puppets, where the internet is dead and all the most powerful players are constantly fighting a 5th gen war with each other, truth will be the thing we are all starving for.

The tragedy of the millennial generation is that everything in our world conspired to starve us of the three things we needed most. More than anything else, we hungered for meaning, authenticity, and redemption—and for the most part, we never got them. You can blame social media, the boomers, capitalism, student loan debt, the Republicans, the Democrats—it really makes no difference. All of those things and more came together to hobble our generation and make it almost impossible for us to launch.

Will the same thing happen with the zoomers and gen-alpha over the question of truth? It appears that things are moving in that direction. In a world saturated with AI, truth becomes a scarce and valuable commodity.

So what do we do? First, I think it’s important to recognize that AI cannot and never will be an authority on truth. At best, it only mirrors our own thoughts and ideas back to us—and at worst, it feeds us the thoughts and ideas of those who seek to control us. But AI itself is neutral, just like a gun or a knife lying on a table is neutral. What matters is how it is used.

Beyond that, I don’t really know what to say. Only that this is something I need to think about a lot more. What are your thoughts?

Thoughts on the Mormon church shooting

Over the weekend, there was a horrific mass shooting at a congregation of the Church of Jesus Christ of Latter-day Saints in Michigan. The shooter apparently rammed his truck through the front wall of the chapel while the congregation was taking the sacrament, and as people were coming up to help him and make sure that he wasn’t hurt, he pulled out a semi-automatic rifle and began shooting them. He then set up several IEDs to hinder the search-and-rescue efforts and set the building on fire with gasoline.

I’ve heard different reports about what happened next. The police arrived on the scene quite rapidly, engaging in a firefight with the shooter and ultimately killing him. However, I have also seen eyewitness reports circulating that members of the congregation engaged in the firefight as well, and that at least one of the armed responders may have been an off-duty law enforcement officer attending the church services.

In any case, the shooter was killed, but not before he had killed or wounded nearly a dozen people and set a fire that burned the structure to the ground. The fire and IEDs prevented the first responders from entering the burning building to search for survivors until after the structure had collapsed. Thankfully (and miraculously), everyone got out in time, so no one died trapped in the burning building where the first responders couldn’t reach them.

Needless to say, this is an unthinkable tragedy that has all of us members of the church in shock. Many of us are wondering what could possibly motivate someone to attack us like this, and in the last 48 hours, the picture that we’re starting to get of the man is very disturbing. He was apparently an Iraq War veteran suffering from PTSD and mental illness, which means he almost certainly didn’t get the help he needed from the VA. And while it seems he was a conservative, his motivation probably had less to do with his politics and more to do with religious hatred.

Ever since the church was formally organized in 1830, there has been a concerted effort by anti-Mormons to destroy it. If you search for anything about Mormonism online, you will find some extremely vicious anti-Mormon literature. As with other forms of religious bigotry, such as anti-Semitism and anti-Catholicism, it comes at us from all directions, but in recent decades most of it seems to have come from the evangelical Christian right. There are pastors on YouTube right now who are monetizing their channels and building engagement by calling us “demonic” and claiming we are led by the devil himself. Others seek to ridicule our most sacred practices by posting videos of our temple garments or our temple services, which are not open to the general public. It’s always been something we’ve had to deal with, especially at events like our semi-annual General Conference, where you can often find protesters waving placards that say things like “Jesus Saves, Joseph Enslaves!”

When I was following this story on Sunday afternoon, trying to piece together what had happened, I was shocked to find people posting these anti-Mormon talking points on conservative news sites like The Daily Wire. The vast majority of the response from our Catholic and Protestant friends, including our Evangelical friends, was genuinely sympathetic and full of condolences. But there was still a minority of Christian commenters who thought it entirely appropriate to use this story as an opportunity to tell us that “Mormons aren’t Christian.”

Do you realize that this anti-Mormon rhetoric is likely what radicalized the shooter to kill us? Yes, he was a disturbed and troubled man, but there’s a reason why he felt justified to take up arms against us. My guess is that he heard that Mormons are “demonic” and “not Christian” one too many times, and drew his own conclusions. And while he alone is responsible for his own actions, the public rhetoric matters too.

It’s the same exact thing we saw with the Charlie Kirk assassination. For years, Charlie Kirk’s political enemies called him a racist, fascist, white supremacist, etc., escalating their rhetoric to the point where a disturbed individual felt he was justified in killing him. And just as it’s disgusting for people to say “Charlie Kirk didn’t deserve to be shot, but he really was a racist and a fascist,” it is also disgusting to say “The Mormons didn’t deserve to be killed, but they really aren’t Christians.” Especially while the church was still on fire, and the victims of the attack were succumbing to their wounds.

Up until now in the culture wars, religious conservatives of all stripes (Catholic, Protestant, Evangelical, Latter-day Saint, Orthodox Jewish, and smaller minorities like the Hindus represented by Tulsi Gabbard and Vivek Ramaswamy) have been united by a common enemy: the woke left. And for the last two decades, the woke left has been the dominant cultural force. But all of that is beginning to change, as the culture swings back from the excesses of peak wokism and the Great American Revival begins to enter the mainstream. And as the Christian revival sweeps our country, I think we’re about to enter a very dangerous period, where we no longer have a common enemy to unite us.

So here is the question: as religious conservatives take back the culture and the woke left is forced into the political wilderness, are we going to remember our American creed of “E Pluribus Unum” as we work to make our country great again? Or are we going to fall into a modern ideological rematch of the Thirty Years’ War, with Catholics and Protestants sniping at each other, various branches of the Evangelical Right vying for dominance, and everyone turning on the Jews and the Mormons? Because the seeds of that conflict are definitely in the ground.

I’m not saying that Evangelicals shouldn’t be allowed to say that “Mormons aren’t Christian.” I understand how that’s a core belief of some people, who are deeply troubled by our rejection of the Trinitarian creed. And I understand that there are many Christians who still love us even though they believe we are going to hell, and want to do everything they can to help us be saved. But dude… if you really love us, why are you saying all that stuff while the bodies are still warm? I’m not calling for you to be silenced, but I am calling for a de-escalation of the rhetoric, before some other deranged madman watches one too many Mark Driscoll videos and decides to take up arms.

That’s a lot of heavy stuff to consider, so I want to end with what is probably the best response I’ve seen to the Michigan church shooting, from the Babylon Bee:

Mormons Respond To Attack By Continuing To Be Amazingly Kind To Everyone

[9/30 UPDATE:] …aaand once again, the Babylon Bee gets major points for predicting the actual news, because members of the church have set up a GiveSendGo for the family of the shooter. It has already surpassed $125,000 in donations.

Without AI, I would probably not be writing

I recently got another anti-AI one-star review that I want to pull apart, because it’s pertinent to what I want to say. I actually came up with the title for this post before I received the one-star review, so I’m not just fisking this one for the sake of fisking. With that said, though, there is definitely a lot to pull apart.

I was prepared to rate this as 2 stars. It is repetitive with no real character depth or development and a sincere lack of dynamic or engaging writing. 

Two stars… so magnanimous! In all seriousness, though, it’s worth pointing out that in spite of all the book’s flaws, she did read it all the way through. That’s important for later.

Then I read the “author” note at the end of the book that was defending their use of generative AI in their writing process…. not only that but also seemingly insulting other writers who are anti-AI claiming that readers dont seem to care about it.

You know what’s insulting to any author, whether or not they are “anti-AI”? Putting scare quotes around the word “author” when referring to them. Though I suspect that she did that on purpose, fully intending to insult me, whereas I did not intentionally insult anyone. For the record, this is the passage from the author’s note that she claims is “insulting” to authors by saying that “readers dont [sic] seem to care about [AI writing]”:

Besides which, after sharing The Riches of Xulthar with lots of readers, I’ve found that most of the rage and vitriol against AI-assisted writing is on the writer side of things, not the reader side.

The other thing is that I was not trying to “defend” my pro-AI stance through the author’s note, just explaining my writing process and sharing the story behind the story, like I do in the author’s notes in the back of all my books. That’s not me being “defensive,” that’s just me sharing my story.

But there is something profoundly narcissistic about the way this reader is framing her review. Because I stated something about readers that contradicts her anti-AI worldview, I must be intentionally “insulting” her (or the anti-AI authors she’s white knighting for, which amounts to the same thing). Because I wrote about how I used AI to help write the book, I must be “defending” myself against her anti-AI views. This kind of narcissism can only really come from someone who lives in an echo chamber and is not used to having their worldview challenged.

Well Joe, you are wrong. This book was lifeless and dull and the use of AI showed. Everything was one dimensinal and flat. Word choises were even static. We (readers) get it… FMC had auburn hair. There are other words besides auburn to describe it….

I’m not going to deny, there is some legitimate criticism here. Rescuer’s Reward was one of my earlier AI-assisted books, when I was still experimenting a lot and learning how to incorporate AI into my creative process while still preserving my voice and writing multi-“dimensinal” [sic] characters and stories. So it doesn’t surprise me all that much that I missed the mark with this particular reader for this particular book. Lesson learned. Thanks for the feedback and the useful data point.

With all of that said, though… I can’t help but notice that she read the whole book.

I have yet to hear a compelling AI argument in the reralm of artistic expression and this “book” just exemplified everything yet again. No heart. No depth. Not good.

This is the crux of the issue, and the reason I wanted to frame this post as a line-by-line response to this review. Is there “a compelling AI argument in the reralm [sic] of artistic expression”? Or is any author who uses AI committing an unforgivable transgression against their art?

Here’s the thing: most of the other authors I know gave up writing a long time ago. We all started out with bright-eyed dreams about telling great stories and creating great art, but the hard truth is that it’s almost impossible to make it as an author.

There are many reasons for this: people don’t read very much in today’s culture (I personally blame the public school system for that), and the publishing industry has always been brutally rapacious and exploitative of writers (just read The Untold Story of Books by Michael Castleman—it’s a really fantastic history of the written word).

But the writing itself is also very hard. There’s a reason why even many successful writers are like this guy, single and living in what amounts to a glorified shack. Most of my writing friends quit when they got married and started having kids. I sincerely hope that they’re just on a 20+ year hiatus and plan to get back to writing again someday, because some of the stuff they wrote was really, really good (I’m looking at you, Nathan Major!) But sadly, that won’t make up for the stuff they would have written, but never did.

My wife and I just had our third child. Writing with small children is very difficult, especially when your wife has a full-time job. I love them all to death, though. If I had to choose between staying a single writer, or putting my writing on hold for 20+ years and restarting my whole writing career from zero in order to raise a family, I wouldn’t hesitate for a moment to choose my family. But it would put a huge burden of guilt on my wife, because my writing was one of the key things that drew her to me back when we were dating. And while our marriage is probably strong enough to survive that, I can’t deny that it would be an incredible strain.

Without AI, I probably would be facing this choice right now. Even though I had managed to streamline my writing process in the last few years, I’ve never been an especially fast writer. Without AI, it took me about a year to write each novel—and that’s before all the demands on my time and energy that come with having small children.

But AI has enabled me to continue to pursue my career and my art, even through this period of life. Not only does this help me to be a better husband and father (which is ultimately the most important thing), but it also means that my readers don’t have to wonder about the things I would have written, but never did. I can write those books now. I can give those stories to the world.

I’m not talking about AI slop. I’m talking about incorporating AI into the creative process deeply enough that it enhances, rather than replaces, my human creativity. We don’t have to be afraid of AI. It makes so many things possible—including running a profitable indie author business while raising (and soon homeschooling) 3+ small children. But it takes a lot of practice to get to that point. And generative AI is still so new that I don’t think there’s anyone who’s truly mastered the art of AI-assisted writing.

My Sea Mage Cycle books are mostly for practice. They’re meant to be fun, light reading. If a book gives my readers a satisfying respite from all the doom and gloom in the world these days, I consider it a success. The experience of writing each of them has helped me to be a better AI-assisted writer. And while the earlier ones may read like AI slop, that won’t be the case for long.

My spicy take on the ethics of AI art

There is nothing unethical about using generative AI to write or make art. Those who say otherwise either haven’t thought through their position, or they are lying for rhetorical effect. Or both.

If Andrew Tate wrote a book titled How To Enslave Your Woman For Fun and Profit, would he be within his rights to demand that no woman ever read that book? If you believe that AI is unethical because it was trained on writers’ and artists’ work without their consent, congratulations—that is exactly the position you have taken. You can’t pick up one end of the stick without also picking up the other.

Whether or not writers and artists were fairly compensated for the use of their work is a separate issue. Many of these AI companies obtained their training data by indiscriminately scraping the internet, which means they used a lot of pirated work. But if using copyrighted material to train an AI system is fair use—and here in the US, the courts have ruled that it is—then all that they owe you is the cost of your book. So if your book is $2.99 on Kindle, that is what OpenAI owes you. Congratulations.

Does Brandon Sanderson owe Barbara Hambly royalties? Brandon Sanderson has sold something like $45 million in books, comics, and other media. Barbara Hambly struggles to pay her bills. Barbara Hambly wrote Dragonsbane, the young adult book that inspired Brandon Sanderson to write fantasy. Clearly, her work had a deep and lasting influence on him. So does he owe her?

If you believe that AI companies owe artists and writers more than simply the price of their own published work, this is a question that you must wrestle with. If it counts as “stealing” to train an AI on artists’ and writers’ work, then every artist and writer is also a thief, and owes royalties to the people who inspired them. Which is why the word “plagiarism” has a tight definition, and why our legal code recognizes fair use.

There is nothing unethical about using generative AI to write or make art. Almost everyone who says otherwise is either lying to themselves about that fact, or lying to you.

Why would someone lie about that? For the same reason people accuse you of being a racist, or a sexist, or a fascist, or a white supremacist, or a Christian nationalist… because using that term gives them power. They don’t actually want to make a reasoned argument. They just want to “win” the argument without ever having to make it in the first place. They use words that they know will get the reaction that they want, and they scream them as loudly as they can until they get it. That’s what the public discourse looks like in 2025.

To be fair, this is not just something that happens on the left. Plenty of people on the right will scream “woke” or “based” or “demonic” to cow people into accepting their point of view. These words do have meaning, and can be used to make a well-reasoned argument—just like “racist” and “fascist” have meaning. But most of the people who use these words are just wielding them like rhetorical clubs to bully their way around.

There is nothing unethical about using generative AI to write or make art. Most of the people who say otherwise are just using the word “ethical” to mean “things I don’t like.” They don’t believe in objective good or objective evil, and instead believe that things like truth and morality are relative. In other words, they think that good and evil change depending on who’s looking at them. This is why so many writers today can’t write a compelling villain (or a compelling hero, for that matter). They just don’t understand how good and evil work.

So why should you listen to them when they scream at you for using AI? You shouldn’t. They don’t know what they’re talking about. Or worse, they do, but they’re lying to you, because they want to compel you not to use AI in your art. Why? Because they’re afraid that if you do, you’ll create something better than what they can create. And on that point, they’re probably right.

Where do we go from here?

So the alleged shooter has been found, and it appears that he acted alone. He wasn’t from our local community here in Orem, but he was a fellow Utahan, I am ashamed to say. Still, his arrest does bring a degree of closure to this heinous act, at least in the immediate future, though I suspect we will be experiencing the fallout of this violent assassination for some time to come.

Where do we go from here? I don’t know. A lot of it depends on what happens in the coming days. The tensions are escalating dramatically between the right and the left, so if that escalation leads to physical violence, it could be catastrophic. I hope and pray that that isn’t the case.

On the other hand, I can see a lot of good coming from this tragedy as well. People are comparing Charlie Kirk to Martin Luther King, and saying that this is a turning point for our nation. A lot of people are turning to God because of this, which is gratifying to see. A lot of other people are turning away from the radical left, whose evil is now bare for all to see.

I do think there is a lot of truth in these statements. Decades from now, I think we will look back on this event as the moment when the Great American Revival went mainstream. And just as we look at MLK’s assassination as the moment when segregation lost to the civil rights movement, we will look at this as the moment when the transgender movement and the woke intersectional left decisively lost the culture wars. In the long-term, their voices will fade into irrelevancy until they are little more than a curious footnote to this turbulent period of our history.

But the short-term is much less certain, and it really does feel like our country is poised on the edge of a knife. And when I think of what the future may bring, I can’t help but think of what the prophets and apostles of the Church of Jesus Christ of Latter-day Saints have been preaching for the last several months—specifically, the need for peacemakers in today’s world. There’s a lot of anger on my side of the political divide, some of it righteous, some of it otherwise. But if more good than evil comes of this tragedy, I sincerely believe it will be because of the peacemakers.

In many ways, Charlie Kirk was a peacemaker. He stood up boldly for what he believed, even to the point of controversy, but he was never violent about it. And though he was a passionate debater, he also listened to his opponents, and did his best to understand them and address them in their own terms. It was that quality—his ability to listen—that kept him from crossing the line from debate into contention.

Of course, his opponents hated that, and tried to paint him as a hateful and contentious figure, but all of that was just a projection of their own faults onto him. Everyone who knew him personally—including many of the people who disagreed with his beliefs—says that he was nothing but gracious to them, and went out of his way to reach out to them in their own moments of personal struggle. That is the mark of a peacemaker.

Charlie Kirk showed us how to stand up for our values with words instead of violence. He never compromised his values, but he also treated everyone—including his opponents—as a child of God. That fact made him truly a peacemaker. I can think of no better way to honor his legacy than by following his example.

Thoughts on the Charlie Kirk assassination

I heard the news shortly after dropping off my daughter at BYU kindergarten. The shooting apparently happened while we were on the road. Utah Valley University is only a couple of miles from our house, and the hospital where he died is only a mile from us.

I saw the videos of the assassination, including the now-censored one that showed it up close. I also saw the videos of the alleged shooter being hauled off in police custody, though now it appears that the University is saying that he wasn’t the shooter. This is such a fast-moving story that we probably won’t know exactly what happened until at least 24 hours from now, and there may be some things that we never know. And since I wasn’t there when it happened, I can’t comment on the shooting itself.

I just have to say, this is not who we are here in Utah. The shooter may turn out to be a Utah man, but that is not who we are—the rest of us. And I don’t just mean right vs. left, conservative vs. liberal. Most of us here in Utah swing MAGA (in fact, I’ve got a couple of neighbors who are still proudly flying their Trump flags), but we’ve also got some neighbors with rainbow flags and decals, and I’m sure that the vast majority of them are just as horrified that this assassination happened in our community. In fact, they’re probably afraid of how the rest of us will react.

My thoughts and prayers go out to Charlie Kirk’s family. I can’t imagine how horrible that must be, not just to lose your husband and father, but to have the footage of his violent death plastered all over the internet. I hope that more good than evil ultimately comes of this national tragedy, and that Charlie Kirk’s work will live on for many years to come.