What Brandon Sanderson gets wrong about AI and writing

Last week, Brandon Sanderson posted a video from a conference where he gave a talk titled “The Hidden Cost of AI Art.” In it, he argues that writers who use AI are not true artists, because the act of creating true art is something that changes the artist. This is true even if AI becomes good enough to write books that are technically better than human-written books. Therefore, aspiring authors should not use AI, because it’s not going to turn them into true artists. Journey before destination. You are the art.

Obviously, I disagree very strongly with Brandon on this point. For the past several years, I’ve been reworking my creative process from the ground up, in an effort to figure out how best to use AI to not only write faster, but to write better books. I’ve experimented with a lot of different things, some of which have worked, most of which haven’t. And I’ve published several AI-assisted books, many of which have a higher star rating than most of my human-written books. So I think it’s safe to say that I have some experience on this subject, at least as much as Brandon himself, if not more.

Brandon compares the rise of generative AI with the story of John Henry and the steam-powered rock drill, where John Henry beat the machine but died from the overexertion. John Henry proved that a man could still beat the machine, yet the machine went on to change the world anyway.

But I don’t think that’s the right story when it comes to AI. It’s far too simplistic, pitting the AI against the artist. Instead, I think it’s better to look at how AI has changed the world of chess. For a long time, people thought that a computer would never be able to beat a human at chess. Then, in 1997, an IBM supercomputer dubbed “Deep Blue” beat world champion Garry Kasparov, proving that computers can beat even the best humans at the game. So now, all of our chess tournaments are played by AI, and humans don’t play chess at all. Right?

Of course not. Because here’s the thing: even though a strong AI can always beat a human at chess, for years after Deep Blue, a human paired with an AI could consistently beat even the strongest chess engines. In fact, there have been freestyle tournaments where human-AI teams play against each other. They were never as popular as the human-only tournaments, since we prefer to watch humans play other humans, and the best human chess players prefer to play the game traditionally. But when they train, all of the top grandmasters rely on AI to hone their craft and sharpen their skills.

Chess is a great example of a field that has incorporated AI. And even though AI can play chess better than a human, AI chess players have not and never will replace human chess players. Because ultimately, asking whether humans or AI are better at chess is the wrong way of looking at it. AI is better at some things, and humans are better at other things. The best results happen when humans use AI as a tool, either in training or in actual play. And because of how they’ve incorporated AI, the game of chess is more popular now than ever.

Brandon spends a lot of time angsting about whether AI writing can be considered art. Perhaps when I’m also the #1 writer in my genre, and have amassed enough wealth through my book sales that I never have to work another day in my life, I can also spend my days philosophizing about what is and is not art. But right now, I prefer a more practical approach. I’m much less concerned about what art is than I am about what it does. And the best art, in my opinion, should point us to the good, the true, and the beautiful.

Can AI do that? Can it point us to the good, the true, and the beautiful? Yes, it can, just like a photograph or a video game can—both examples of counterpoints that Brandon brings up. But as with the game of chess, a human + AI can create better art than a pure AI left to its own devices. I suspect this will remain true, even if we reach the point where AI art surpasses pure human-made art. Because at the end of the day, AI is just a tool.

But what about Brandon’s point that “we are the art”? Isn’t it “cheating” to write a book with AI? Doesn’t that demean both the artist and the creative act?

It can, if all you do is ask ChatGPT to write you a fantasy story. Just like duct-taping a banana to a wall and calling it “art” is pretty demeaning (though you’ll still get plenty of armchair philosophers debating about whether or not it counts, highlighting again how useless the question is). But if you spend enough time with AI to really dig into what it can do, you’ll find that it’s no less “cheating” than pointing a camera and pushing a button.

One of the first AI-written fantasy stories I generated was a story about a half-orc. I wrote it using ChatGPT while my wife was in labor with our second child. We were both at the hospital, and I had a lot of down time before the action really began, so I used those few hours to write a 15k word novelette. It was fun, but the story itself was pretty generic, which is why I’ve never published it.

Basically, it read like an average D&D fanfic—which is exactly what every AI-generated fantasy story turns into if you don’t give it the proper constraints. If all you do is ask ChatGPT to tell you a story, it will give you a very average-feeling story. Every fantasy turns into a Tolkien clone or a D&D fanfic. Every science fiction turns into Star Trek. It may be fun, but it’s not very good. Just average.

My first AI novel was The Riches of Xulthar, and I wrote it quite differently. Instead of just running with whatever the AI gave me, I picked and chose what I wanted to keep, discarding the stuff that didn’t work. But I still didn’t constrain the AI very much, so it went off in some pretty wild directions, which made it a challenge to decide what was worth keeping. The story ended up going places I never would have taken it on my own, but the end result was something I could still feel good about putting my name on. And of course, after generating the AI draft, I rewrote the whole book to make sure it was in my own words. That also helped to smooth out the story and make it my own.

Since writing The Riches of Xulthar, I’ve written (or attempted to write) some two dozen AI-written novels and novellas. Most of them are unfinished. Some of them are spectacular failures. I’ve published another half-dozen of them, most in the Sea Mage Cycle.

It was while I was working on the latest Sea Mage Cycle book, Bloodfire Legacy, that I finally felt I was getting a handle on how to write something really great with AI. The key is constraints. AI does best when you give it constraints that are clear and specific. The more you constrain it, the more likely you are to get something that rises above the average and approaches something great.

But to do that, you have to have a very clear and specific idea of what you want your story to look like. Which means you have to have a solid outline (or at least some really solid prewriting), and a deep understanding of story structure.
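To make the idea of “constraints” concrete, here’s a minimal sketch of one way you might turn a scene outline into a tightly constrained prompt. Everything here is hypothetical and illustrative (the field names, the template, the example scene); it’s not my actual workflow or any particular tool’s API, just a demonstration of how an outline pins the AI down.

```python
# Hypothetical sketch: turning a scene outline into a constrained prompt.
# The field names and template are illustrative only.

def build_scene_prompt(scene: dict) -> str:
    """Assemble a prompt that pins down POV, goal, conflict, and outcome,
    so the model can't drift into a generic D&D-flavored average."""
    constraints = [
        f"POV character: {scene['pov']}",
        f"Scene goal: {scene['goal']}",
        f"Conflict: {scene['conflict']}",
        f"Outcome (the scene must end here): {scene['outcome']}",
        f"Things to avoid: {', '.join(scene['avoid'])}",
    ]
    return (
        "Draft the following scene. Stay strictly within these constraints.\n"
        + "\n".join(f"- {c}" for c in constraints)
    )

# Example usage with a made-up scene outline:
prompt = build_scene_prompt({
    "pov": "Zlata, third person limited",
    "goal": "steal the harbormaster's ledger",
    "conflict": "the night watch changes shifts early",
    "outcome": "she escapes, but drops a letter that identifies her",
    "avoid": ["taverns", "prophecies", "generic medieval dialogue"],
})
print(prompt)
```

The point isn’t the code; it’s that every field forces a decision you’d otherwise be delegating to the AI’s sense of “average.”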

I think the real reason Brandon is so opposed to AI writing is that it negates his competitive advantage—the thing that has made him the #1 fantasy writer. Without AI, the biggest bottleneck for new and established writers is putting words on a page. Brandon made a name for himself with his ability to write a lot of words relatively quickly. Where other fantasy writers like Martin and Rothfuss have utterly failed to finish what they start, Brandon finishes everything that he starts, and he starts more series than most other writers finish. This is why he’s known as Brandon Sanderson, and not just “the guy who finished Wheel of Time.”

But generative AI removes this bottleneck. Suddenly, putting words on the page is quite easy. They might not be good words, but they might be as good as Brandon Sanderson’s words. After all, his prose isn’t exactly the most brilliant of our time. Deep down, I think Brandon feels this, which is why he sees AI as such a threat.

Will writing with AI make you lose some of your writing skills? Probably. I suspect it’s much like how using AI to code will make you weaker at coding, at least on a line-by-line level. But coding with AI will make you a much better programming architect and designer, since it frees you up to focus on the higher-level stuff.

In a similar way, I expect that the new bottleneck for writing will have to do with the higher level stuff: things like story structure and archetypes. The writers who will stand out in an AI-dominated writing field will be the ones with a deep and intuitive understanding of story structure, who can use that understanding to get the AI to produce something truly great. Because if you understand story structure, you can write better constraints for the AI. Pair that with a good sense of taste, and you’ve got an artist who can make some really great stuff with AI.

This is why I think Brandon’s views on AI art are not only misguided, but actually toxic. Love it or hate it, AI is just a tool. Using it doesn’t make you any less of an artist, just like using a camera vs. using a paintbrush doesn’t make you any less of an artist.

My spicy take on the ethics of AI art

There is nothing unethical about using generative AI to write or make art. Those who say otherwise either haven’t thought through their position, or they are lying for rhetorical effect. Or both.

If Andrew Tate wrote a book titled How To Enslave Your Woman For Fun and Profit, would he be within his rights to demand that no woman ever read that book? If you believe that AI is unethical because it was trained on writers’ and artists’ work without their consent, congratulations—that is exactly the position you have taken. You can’t pick up one end of the stick without also picking up the other.

Whether or not writers and artists were fairly compensated for the use of their work is a separate issue. Many of these AI companies obtained their training data by indiscriminately scraping the internet, which means they used a lot of pirated work. But if using copyrighted material to train an AI system is fair use—and here in the US, courts have so far ruled that it is—then all that they owe you is the cost of your book. So if your book is $2.99 on Kindle, that is what OpenAI owes you. Congratulations.

Does Brandon Sanderson owe Barbara Hambly royalties? Brandon Sanderson has sold something like $45 million in books, comics, and other media. Barbara Hambly struggles to pay her bills. Barbara Hambly wrote Dragonsbane, the fantasy novel that inspired Brandon Sanderson to write fantasy. Clearly, her work had a deep and lasting influence on him. So does he owe her?

If you believe that AI companies owe artists and writers more than simply the price of their own published work, this is a question that you must wrestle with. If it counts as “stealing” to train an AI on artists’ and writers’ work, then every artist and writer is also a thief, and owes royalties to the people who inspired them. Which is why the word “plagiarism” has a tight definition, and why our legal code recognizes fair use.

There is nothing unethical about using generative AI to write or make art. Almost everyone who says otherwise is either lying to themselves about that fact, or lying to you.

Why would someone lie about that? For the same reason people accuse you of being a racist, or a sexist, or a fascist, or a white supremacist, or a Christian nationalist… because using that term gives them power. They don’t actually want to make a reasoned argument. They just want to “win” the argument without ever having to make it in the first place. They use words that they know will get the reaction that they want, and they scream them as loudly as they can until they get it. That’s what the public discourse looks like in 2025.

To be fair, this is not just something that happens on the left. Plenty of people on the right will scream “woke” or “based” or “demonic” to cow people into accepting their point of view. These words do have meaning, and can be used to make a well-reasoned argument—just like “racist” and “fascist” have meaning. But most of the people who use these words are just wielding them like rhetorical clubs to bully their way around.

There is nothing unethical about using generative AI to write or make art. Most of the people who say otherwise are just using the word “ethical” to mean “things I don’t like.” They don’t believe in objective good or objective evil, and instead believe that things like truth and morality are relative. In other words, they think that good and evil change depending on who’s looking at them. This is why so many writers today can’t write a compelling villain (or a compelling hero, for that matter). They just don’t understand how good and evil work.

So why should you listen to them when they scream at you for using AI? You shouldn’t. They don’t know what they’re talking about. Or worse, they do, but they’re lying to you, because they want to compel you not to use AI in your art. Why? Because they’re afraid that if you do, you’ll create something better than what they can create. And on that point, they’re probably right.

What do you think of these covers?

I’ve been playing around some more with ChatGPT, working on cover art for the Falconstar Trilogy. The best way to do it, I’ve found, is to make the art with AI, but to do the typography myself.

Anyhow, here are the test covers. What do you think?

The one that I feel most ambivalent about is Queen of the Falconstar. I really like how Zlata turned out, and the Falconstar looks pretty cool too, but the background… let’s see if I can fix that:

Anyhow, what do you think?

Get ready for the anti-AI witch trials…

There’s a really interesting news story about an artist at Dragoncon who was forcibly removed from the convention, with the cops getting called and everything, for allegedly selling AI-generated art. Even though the artist bought the table and everything, the convention threw them out. It’s unclear whether the table was refunded, but I’m guessing it probably wasn’t.

Of course, as JDA rightly points out, if you use AI well, it’s impossible for anyone to tell whether a piece is AI or human, so what this really does is set the precedent that merely accusing someone of using AI in their art is sufficient to cause serious damage to their business. A lot of these artists make most of their money at conventions like this, and Dragoncon is one of the top-tier sci-fi media conventions, right up there with San Diego Comic-Con and FanX Salt Lake.

How long is it going to take before an artist is falsely accused of using AI and ruined because of it? Has that already happened? How many more artists are going to be thrown out of conventions like this? How many artists are going to decide that it just isn’t worth it to attend these sorts of conventions, whether or not they use AI? How long before we find that some of the artists leading these witch hunts are themselves using AI to create their art?

In the end, when AI has been normalized and no one (not even in fannish circles) blinks an eye at AI-assisted art, we’re going to look back at this time with much the same dismay that we look back at the Salem witch trials. But that may not be for another ten or twenty years. Do we really need to go through all this madness first? This is why we can’t have nice things.

Anti-AI is the new virtue signaling

According to Merriam-Webster, “virtue signaling” is:

the act or practice of conspicuously displaying one’s awareness of and attentiveness to political issues, matters of social and racial justice, etc., especially instead of taking effective action.

Because it is much easier to signal your virtue than it is to actually be virtuous, the people who virtue signal the loudest also tend to be the ones who have something they’re trying to cover up. This hypocrisy is a big part of what makes virtue signaling so obnoxious.

Time for me to spill a little tea. A couple of years ago, after I wrote “Christopher Columbus: Wildcatter,” I got an acceptance from the editor of Interzone. It wasn’t formalized yet, but he expressed over email that he was interested in purchasing the publishing rights for that story, the sequel, and possibly others after. It got far enough along that we were going back and forth on editorial details, our vision for the stories, etc.

Then the time came for him to send me a contract. Aaand… he ghosted me. Flat out ghosted me. A month went by without any correspondence at all. I didn’t want to seem too forward, but I was also starting to get a little concerned. So I sent out a brief follow-up email, asking about the contract… and I got a response that read like something copy-pasted from a form rejection.

Now, as far as literary transgressions go, that’s kind of tame. It’s not like the editor owed me money and refused to pay. And as far as I know, Interzone is prompt with all of their payments and pays all of their authors in full. After all, everyone deserves the benefit of the doubt.

But that sort of unprofessionalism really wasn’t cool, either. In fact, it was enough that I stopped sending Interzone any submissions. After all, if the editor saw nothing wrong with yanking my chain before he published me, that’s kind of a yellow flag. Not to mention that it left a very sour taste in my mouth.

So when I saw this story from Jon Del Arroz, with the editor of Interzone accusing Asimov’s of using AI art, and using that as a pretext to blacklist all of their authors, I immediately recognized that sort of behavior for what it is: virtue signaling. Which made me wonder: how much of the anti-AI vitriol that’s ubiquitous in online writing communities these days is really just a new form of virtue signaling?

Think about it. It explains so much about the insane anti-AI faux controversies that have been blowing up around 2025 WorldCon. For more than a decade now, the people chasing the Hugo Award have been among the worst offenders of gratuitous virtue signaling (especially Scalzi). It also explains why so much of the anti-AI content on YouTube is less about presenting well-reasoned arguments, and more about sighing dramatically or making snide, sarcastic remarks. Virtue signaling always appeals to pathos before it appeals to reason.

I expect this phenomenon is going to get a lot worse in the next few years, at least until AI-assisted art and writing become normalized (which is going to happen eventually, it’s just a matter of time and degree). So the next time you see someone publicly posting about how horrible it is for creatives to use AI, take a good, hard look at the person leveling the accusations. Chances are, they’re just virtue signaling.

Your taste in AI art can say a lot about you…

So a couple of weeks ago, my wife and I both got into the new trend of using ChatGPT to convert photos and images into “Studio Ghibli style.” We started with some pictures of ourselves…

We then tried out some of our wedding photos…

And then we realized that we didn’t have to upload an actual picture; we could just tell ChatGPT what we wanted it to make, and guide it through the creative process until it made what we were looking for.

On the free version, this is super difficult, because you only get like 3 image generations per day, and you often have to go through several iterations to get what you want.

But both of us have the paid version of ChatGPT, me through my writing business, and my wife through her school. So over the last couple of weeks, we’ve been playing around with it quite a lot!

This is the direction I decided to take it…

… And this is the direction my wife decided to take it…

… Needless to say, you can tell a lot about someone by their taste in AI art!

What do you think of this cover redesign?

I’ve been playing around with ChatGPT’s new image generator, and I decided to toss in the cover for Star Wanderers and see what it could do. This is what I managed to come up with.

What do you think? I’ll play around with some more to see if I can get something better, but I do kind of like this one.

ETA: I think I like this one even better!