Fisking 1-star reviews bashing AI

They say that authors should never respond to one-star reviews. That’s generally good advice, and for most of my career, I’ve studiously kept to it. However, I’ve recently begun to get a new kind of one-star review that baffles me—reviews that essentially say: “the book was good, but it was written with AI so I hate it.”

Here’s an example:

This book is written with AI. Incredibly disappointing as a reader to give a book/author a chance and then to get to the end of the book only for the “author” to then announce the AI card. If I could give zero stars, I would for this alone. I also didn’t appreciate that this use of AI was not announced until the ending Author’s Note. If “authors” are going to cut corners and put their name to computer-generated mush, they should be willing to put that information on the front cover. The book struggled to find its pace, and some parts read as though they were written for a child’s short story competition while others felt as though the writer was snorting crushed up DVDs of Pirates of the Caribbean as they wrote.

Let’s break it down:

This book is written with AI. Incredibly disappointing as a reader to give a book/author a chance and then to get to the end of the book only for the “author” to then announce the AI card.

Yes… but I can’t help but notice that you got to the end of it. In other words, you finished the book. Also, from the way you tell it, it seems that you didn’t realize the book was written with AI until you got to the very end. So based on your own behavior, it doesn’t seem that quality was the issue.

I also didn’t appreciate that this use of AI was not announced until the ending Author’s Note. If “authors” are going to cut corners and put their name to computer-generated mush, they should be willing to put that information on the front cover.

Okay… but if my book was just “computer-generated mush,” why did you finish it? And why were you surprised when you learned that it was written with AI assistance?

I can understand the objection to books that were written solely with AI, with little to no human input. But that’s not how I write my AI-assisted books. Instead, I outline them thoroughly beforehand, write and refine a series of meticulously detailed prompts (usually using Sudowrite), and generate multiple drafts, combining the best parts of them to make a passable AI draft. And then I rewrite the whole thing in my own words, using the AI draft as a loose guide with no copy-pasting.

Why would I go through so much trouble? Because the AI drafting stage gives me a bird’s-eye view of the book, allowing me to identify and fix major story issues before they metastasize and give me writer’s block. Before AI, that’s where 80% of my writer’s block came from, and it often derailed my projects for months, so that it took me well over a year to write a full-length novel. But with AI, I’m no longer so focused on the page that I lose sight of the forest for the trees. So even though generating and revising a solid AI draft adds a couple more steps to the process, it’s worth it for the time and trouble it saves.

That’s the way I use generative AI in my writing process. But there are many other ways—and I hate to break it to you, but most authors use AI in one way or another. If an author uses Grammarly to fix their spelling and grammar, should they disclose that on the cover? If they use MS Word? What if they used a chatbot to brainstorm story ideas, but went on to write it entirely themselves? Should that also be disclosed?

The book struggled to find its pace, and some parts read as though they were written for a child’s short story competition while others felt as though the writer was snorting crushed up DVDs of Pirates of the Caribbean as they wrote.

Yes… but again, I can’t help but notice that you finished the book. And after you finished it, you were surprised to learn that it was written with AI. So with all due respect, I’m going to call BS on your objections here. I think you only decided you hated the book after you learned it was written with AI, and you came up with these objections after the fact. Whatever.

I think a lot of the people who object to AI are really just scared and angry. They claim to have principled, ethical objections to the technology, but few of them follow through and apply that principled stance to every area of their lives. After all, if you use Grammarly, Google Docs, or MS Word, you are using generative AI just as surely as I am using ChatGPT and Sudowrite. For most people, the ethical objections are just a smokescreen for their general fear of change. They’re fine with embracing the convenience the technology offers them in their own personal lives, but they insist that everyone else—including me—live according to their principles, no matter how inconvenient or difficult it may be.

As an example of that, check out this one-star review:

The arts! Whether visual, performance, or literary—my haloed experience has been the act of creating and sharing a connection to the profound or sublime. Why, then, would any artist—musician, dancer, sculptor, painter, or author—offload (abdicate) the act of creation to AI? Process versus product. Mr. Vasicek included an afterword for this volume, describing his workflow and the efficiency of collaboration with AI: a 6,624-word day! another volume completed! Mr. Vasicek obviously owns the skills to weave rich character development and scenes. Perhaps Mr. Vasicek’s AI collaboration explains why these characters, the plot, the narrative—and subsequently, the entire story— are so flat and undeveloped. Although his lead male shows some undeveloped promise, the mother’s too-oft used “dear” and “my love,” and the daughter’s clutching at her mother’s apron are cringe-inducing. Perhaps Mr. Vasicek might eschew AI-assisted writing, seeking a future of quality over quantity.

Let’s break it down:

The arts! Whether visual, performance, or literary—my haloed experience has been the act of creating and sharing a connection to the profound or sublime. Why, then, would any artist—musician, dancer, sculptor, painter, or author—offload (abdicate) the act of creation to AI?

Because for some of us, writing is more than a “haloed experience”—it’s an actual job. It’s what we do for a living. And if you want to do your best work, you need to use the best tools. We used to build houses with plaster and lath and wrought-iron nails, using hand tools and locally sourced lumber. But today, you’d be a fool not to use power tools and materials from a building supply store or your local Home Depot. If that makes your building experience less profound or sublime, so be it.

Process versus product. Mr. Vasicek included an afterword for this volume, describing his workflow and the efficiency of collaboration with AI: a 6,624-word day! another volume completed!

I’m not gonna lie: there is a certain degree of tension between art-as-product and art-for-art’s-sake. But the two are not mutually exclusive. A house can still be a beautiful work of art without taking as long as a cathedral to build. Likewise, a book can still be a beautiful work of art without taking as long to write as Tolkien’s Lord of the Rings.

Again, you’re trying to pigeonhole me into your “haloed” idea of what a “true artist” should be. Which would make it absolutely impossible for me to make a living at this craft. If all of us writers followed that path, there are a lot of wonderful books that would never get written. And I doubt that the overall quality of the books that do get written would rise.

Mr. Vasicek obviously owns the skills to weave rich character development and scenes.

Now we get to the interesting part. I checked this reviewer’s history, and this was the only review they’ve written for any of my books. Therefore, I can only assume that this is the only book of mine that they’ve read. But if that’s the case, how do they know that I have “the skills to weave rich character development and scenes”? If the book I wrote with AI was pure trash, why would they say that I obviously have some skill?

Once again, we’ve got a case of “I enjoyed this book, but it’s written with AI so I hate it.” In other words, it’s not the book itself that you hate, so much as the way I wrote it. You object to the idea of authors using AI, not to what they actually write with AI.

Perhaps Mr. Vasicek’s AI collaboration explains why these characters, the plot, the narrative—and subsequently, the entire story— are so flat and undeveloped. Although his lead male shows some undeveloped promise, the mother’s too-oft used “dear” and “my love,” and the daughter’s clutching at her mother’s apron are cringe-inducing.

Finally, some specific and legitimate criticism. And while I do think there’s a degree of retroactively looking for faults after enjoying the book, I’m totally willing to own that these criticisms are valid. This particular book (The Widow’s Child) was one of my first AI-assisted books, and I was still learning to use these AI tools as I was writing it. I did the best I could at the time, but if I were to write it today, I could probably do a lot better, smoothing out the annoying AI-isms that you’ve pointed out here.

But the book is currently sitting at 4.4 stars on Amazon (4.1 on Goodreads), and most other readers do not share your objections. Here is another review, pulled from the same book:

Since waiting a year or more to read the next book in a sequel is hard on my stress levels, I’m liking this AI. It means talented authors like Joe Vasicek can churn out an outline faster. Then he can bring in his talented ideas, such as the content of this heart-stopping adventure of The Widow’s Child, to fill out the nitty gritty in record time.

Clearly, it’s not the case that all (or even most) readers feel the same way about AI as you do.

Perhaps Mr. Vasicek might eschew AI-assisted writing, seeking a future of quality over quantity.

Why can’t we have both? Why can’t we have quantity with quality? Why can’t AI make us more creative, instead of replacing our human creativity?

This is all giving me flashbacks to the big debate between traditional and indie publishing, back in the early 2010s. Purists argued that indie publishing would destroy literature by flooding the market with crappy books; indies argued that removing the industry middlemen would create a more dynamic market, giving readers more choices and allowing more writers to make a living. Both were right to some degree, and both were also wrong about some things. In the end, we reached a middle ground where “hybrid publishing” became the norm.

The same kind of debate is happening right now between human-only purists and AI-assisted writers. The biggest difference is the dead internet theory. In the early 2010s, the ratio of bots to humans on the internet was still low enough to allow for a lively debate. Today, there’s so much bot-driven outrage online that most of us are just quietly doing our own thing and avoiding the debate.

That same bot- and algorithm-driven outrage is driving a lot of people to be irrationally angry at or afraid of AI. With that said, I can understand why so many people are upset. And I do think there are a lot of valid criticisms of this new technology, including its environmental impact, copyright considerations, how the models were trained, and the societal impact it’s already starting to have. But if we don’t have an honest and good-faith debate about these issues, we can’t solve any of them. And we can’t have a good-faith debate if one side is coming at it from a place of irrational anger or fear.

In any case, I find it super annoying when readers who clearly found some value or enjoyment in my books turn around and give them one-star reviews merely because they don’t like how I used AI. And at the risk of going viral and soliciting more one-star anti-AI reviews, I think it’s worth voicing my views on the subject and opening that debate. So what are your thoughts? How do you feel about using AI as a tool to help write books? Can we have quantity with quality? Can AI help us to be more creative, not just more productive? What has been your experience?

Fantasy from A to Z: U is for Unicorns

If you were expecting a post on unicorns or other mythical beasts, I hate to disappoint you again, but that’s not what this is going to be. Instead, I want to write a bit about that most mythical of all human creatures: the full-time fiction writer.

Okay, perhaps we’re not that mythical. After all, Brandon Sanderson estimates that of all his students over the years, perhaps as many as 10% of the ones who set out to become full-time writers actually make that dream a reality. I sometimes wonder: would Brandon count me as one of those 10%? Should he? The answer to that is… complicated.

One of the first questions I get whenever I tell people that I’m a writer is “oh, wow—how is that working out for you?” Which is really a roundabout way of asking how much money I make, and whether I’ve been able to turn it into a full-time career. I am not (yet) a major bestselling author, and the closest thing I’ve had to a breakout thus far has been my (now unpublished) Star Wanderers novella series, which managed (mostly by accident) to hit the algorithms correctly back when a permafree first-in-series with lots of direct sequels was the best path to success. Then the publishing landscape changed, the algorithms shifted to favor pay-to-play advertising, and my books got left behind.

I will admit that if it weren’t for my wife’s income, I wouldn’t be able to pursue writing full-time. As a family, we’re following a path very similar to my Scandinavian ancestors, where the wife tends the farm while the husband goes off a-viking. In other words, my wife has the stable, traditional career that provides our family with some degree of security, while I have the more risky career that has the potential to catapult us into transformative levels of wealth and prosperity. We’re doing just fine, but it does sometimes feel like my Viking ship has yet to land ashore.

Because here’s the thing: something like 90% of the money in book publishing (after the booksellers and publishers and other middlemen take their often-exorbitant cuts) goes to less than 1% of the writers who actually make any money (and something like 30% of Kindle books never sell a single copy).

For every Brandon Sanderson, there are thousands—perhaps hundreds of thousands—of published authors who write on nights and weekends while holding down a day job to pay the bills. My writing contributes enough to the family budget to justify pursuing it, but if I were still single, I would need at least a part-time job.

Indie publishing has created a lot of opportunity for authors to make a career out of their writing, and there are many successful indies who are making a decent living at it. At the same time, indie publishing has also massively exploded the number of books that are published, so the proportion of full-time to still-aspiring authors is probably about the same (and may have actually tilted the other way). 

In recent years, it has very much turned into a zero-sum pay-to-play game, especially with advertising. From what I can tell, most authors lose money on advertising, and most of those who are making money are spending upwards of $10,000 each month to make $11,000. The elite few who learn how to successfully game the algorithms to blow up their books often put their writing on the backburner to launch their own companies or provide publishing services, leveraging their expertise to make a lot more than they otherwise would.

The algorithms are changing books in some very strange ways. If J.R.R. Tolkien or Roger Zelazny or Robert E. Howard were writing today, would they be able to make it in the current publishing environment?

Howard’s Conan stories would either have to be a lot sexier, or else would have to include the sort of tables and character stats you find in LitRPG. His covers would also be a lot more anime, and show a ridiculous amount of cleavage (which he actually might not have had a problem with, judging from some of the old Weird Tales covers). 

Zelazny’s Chronicles of Amber would all be far too short to make it in Kindle Unlimited—to make it in that game, you have to have super long books that max out on page reads, in order to maximize advertising ROI so that you can outbid your competitors. And if you aren’t winning the pay-to-play advertising game, your KU books will sink like rocks. Also, Zelazny took way too much time between books. Gotta work on that rapid release strategy, Roger.

As for Tolkien… hoo boy, there’s an author who did everything wrong. Decades and decades spent polishing his magnum opus, with a short prequel novel that falls squarely in the children’s category (totally different genre) as the only other fantasy book published in his lifetime. I suppose he could have serialized Lord of the Rings, except nothing really happens in episode 1: A Long-Expected Party. Certainly not anything that would adequately foreshadow all the dark and epic battles to come. Perhaps if he had followed a first-in-series permafree strategy and just given away Fellowship of the Ring… and then made The Hobbit his reader magnet for signing up for his email list… maybe that could have worked? After all, there’s always BookBub…

I jest, of course. Each of these authors’ books became classics, not because of their marketing strategy, but because they hit the cultural zeitgeist in exactly the right way. But is it possible for an author to do that today without also getting a boost from the algorithms? Or do the algorithms have more power to shape our culture than anything else? Those are disturbing questions, and I honestly do not know the answer.

And then there’s the question of AI, which is massively disrupting all of the creative fields. In the interest of full disclosure, I am actually quite sanguine about generative AI, and have already been working to incorporate it into my creative process. I’m not a fan of AI slop, but I don’t feel particularly threatened by it. I decided a long time ago that if AI ever became good enough to write an entertaining book, it still would never be able to write a Joe Vasicek book. That’s insulated me from most of the doom porn out there.

Right now, there is a HUGE fight happening between authors like me who are embracing AI, and authors who treat it all as anathema, and have vowed to never use any sort of AI in any of their books (except Grammarly, of course, because… reasons. And Microsoft Word. And…) Frankly, it reminds me of the big debate between indie and traditionally published authors, back before self-publishing had lost its stigma. The biggest difference is that the level of online outrage has been ramped up to 11, mostly as a result of the social media algorithms (which weren’t as robust or as powerful back in the early 2010s). I suspect that we will ultimately settle on a “hybrid” approach, much like we did with publishing, but the sheer level of vitriol has made me wonder about that. 

On the reader end of things, though, it seems like most readers don’t really care if a book was written with or without AI assistance, so long as it’s actually a good book. Which means that there is a real opportunity for authors who 1) know how to tell great stories, 2) have already found and honed their voice, and 3) know how to strike the right balance between the AI and the human elements. 

Which describes my own position almost perfectly. Over the last fifteen years, I’ve read, written, and published enough books that I have a pretty good handle on what makes a great story. I’ve also honed my voice well enough that I can write in it quite comfortably. And as for the balance between AI and human writing, I’ve been working hard on that since ChatGPT burst onto the scene in 2022. Half a dozen books and about a million words later, I’ve learned quite a lot about how to best strike that balance.

Will AI replace authors entirely, making this particular unicorn extinct? I don’t think so. But AI may radically change our concept of what “books,” or “writers,” or “writing” really are. Even so, no AI will ever be able to write a Joe Vasicek book. Only I can do that. Whether or not that’s worth something is up to the readers to decide.

The dangers of relying too much on AI

I saw this really interesting video last week, and it made me think: am I relying too much on AI?

In my personal life, this probably isn’t an issue. I do occasionally ask ChatGPT to make me a recipe, or to advise me on a particular topic, but I always do a gut check, and if the answer doesn’t pass, I assume it’s hallucinating. If it gives me something that I can quickly and easily verify, I always do that… and half of the time, it turns out to be a hallucination to some degree. So no, I don’t rely on it nearly as much in my personal life as some of the characters in this video do.

What about blogging? Don’t be too scandalized, but with my new blogging schedule, I have experimented a bit with using ChatGPT to write some of these blog posts. It’s not like I’ve been copy-pasting everything straight from the chatbot, but I have relied on it a little more heavily than I do in my own writing.

After trying that a couple of times, though, I decided to cut that out and write all of these blog posts by hand. Why? Because I felt like it was creating too much distance between myself and the people who read this blog, and the purpose of this blog is to foster a human connection. So it kind of defeats the purpose to rely on a chatbot to generate most of the content I post here. For that reason, I plan to keep writing all these blog posts entirely myself, with only minimal AI input.

So what about my fiction? This is where things get a little tricky. While I totally agree that simply copy-pasting from AI is a piss-poor way to write a book, I do think that AI can be a very useful tool in writing and crafting a novel, provided that you understand the limitations of the AI and don’t rely on it too much. But how much is too much? That is the question.

The biggest way that AI has helped to enhance my own writing is in giving me a bird’s-eye view of the story as I generate a “crappy first draft.” This bird’s-eye view allows me to see and fix major story issues before they metastasize and give me writer’s block, which is what tends to happen if I write these drafts out entirely by hand. When I’m focused on the page, I tend to lose sight of the forest for the trees, so I don’t notice that there’s a problem with the story until I’m several chapters in and find that I just can’t write.

This happened with basically every project I wrote on my own, and it’s the main reason why, before I started using AI, it took me anywhere from six to eighteen months (or longer) to write even a short novel. Since I began incorporating AI into my writing process, however, this problem has basically gone away, and I no longer experience this form of writer’s block at all.

However, while I do rely on AI to help me craft my “crappy first draft,” that isn’t the draft that I publish. Once the AI draft is as good as I can make it, I go through it scene by scene and rewrite the entire book in my own words. The purpose of this step is to make sure that I’m telling the story in my own words, and to make the story my own. I still keep the AI draft open on another screen and refer to it as I write out the story, but I don’t do any copy-pasting. It’s all written out by hand.

Is this enough, though? Or do I need to add more steps to make sure that I’m not relying too much on AI, and thus losing my own voice? Recently, I’ve been spending a lot more time on the AI draft, generating multiple iterations and combining the best parts to (hopefully) boost the quality. I’ve also been doing a revision pass over the AI draft, tweaking it to smooth over some common AI-isms and (hopefully) adding a bit of my own voice before I move on to the human draft and rewrite the whole thing to make sure it’s all in my voice.

But while this might be enough to keep the book in my own words, is it enough to keep my own writing skills from atrophying? Or do I need to occasionally pick up a WIP that is 100% human-written, with no AI at all, just to make sure I don’t lose those skills? That is the question I’m currently pondering. Perhaps short stories could serve that purpose really well. Perhaps I should go back to writing short stories again, just as a way to keep my writing skills sharp.

If I were starting out right now as a new writer, I would definitely avoid writing with AI until I’d written enough to find my own voice. And I would also make sure to write at least one novel 100% without AI-assistance, just for the experience, and to prove to myself that I could do it. Otherwise, I think there would be a very real danger in becoming over-reliant on AI to write my books, and thus risk losing my own unique voice, so that none of the books that I write ever truly become my own.

Anyhow, those are some of my current thoughts on the subject. What do you think of this problem?

Anti-AI is the new virtue signaling

According to Merriam-Webster, “virtue signaling” is:

the act or practice of conspicuously displaying one’s awareness of and attentiveness to political issues, matters of social and racial justice, etc., especially instead of taking effective action.

Because it is much easier to signal your virtue than it is to actually be virtuous, the people who virtue signal the loudest also tend to be the ones who have something they’re trying to cover up. This hypocrisy is a big part of what makes virtue signaling so obnoxious.

Time for me to spill a little tea. A couple of years ago, after I wrote “Christopher Columbus: Wildcatter,” I got an acceptance from the editor of Interzone. It wasn’t formalized yet, but he expressed over email that he was interested in purchasing the publishing rights for that story, the sequel, and possibly others after. It got far enough along that we were going back and forth on editorial details, our vision for the stories, etc.

Then the time came for him to send me a contract. Aaand… he ghosted me. Flat out ghosted me. A month went by without any correspondence at all. I didn’t want to seem too forward, but I also was starting to get a little concerned. So I sent out a brief follow-up email, asking about the contract… and I got a response that read like something copy-pasted from a form rejection.

Now, as far as literary transgressions go, that’s kind of tame. It’s not like the editor owed me money and refused to pay. And as far as I know, Interzone is prompt with all of their payments and pays all of their authors in full. After all, everyone deserves the benefit of the doubt.

But that sort of unprofessionalism really wasn’t cool, either. In fact, it was enough that I stopped sending Interzone any submissions. After all, if the editor saw nothing wrong with yanking my chain before he published me, that’s kind of a yellow flag. Not to mention that it left a very sour taste in my mouth.

So when I saw this story from Jon Del Arroz, with the editor of Interzone accusing Asimov’s of using AI art, and using that as a pretext to blacklist all of their authors, I immediately recognized that sort of behavior for what it is: virtue signaling. Which made me wonder: how much of the anti-AI vitriol that’s ubiquitous in online writing communities these days is really just a new form of virtue signaling?

Think about it. It explains so much about the insane anti-AI faux controversies that have been blowing up around the 2025 WorldCon. For more than a decade now, the people chasing the Hugo Award have been among the worst offenders when it comes to gratuitous virtue signaling (especially Scalzi). It also explains why so much of the anti-AI content on YouTube is less about presenting well-reasoned arguments and more about sighing dramatically or making snide, sarcastic remarks. Virtue signaling always appeals to pathos before it appeals to reason.

I expect this phenomenon is going to get a lot worse in the next few years, at least until AI-assisted art and writing become normalized (which is going to happen eventually, it’s just a matter of time and degree). So the next time you see someone publicly posting about how horrible it is for creatives to use AI, take a good, hard look at the person leveling the accusations. Chances are, they’re just virtue signaling.

Thoughts on Sudowrite’s new Muse 1.5 model

Sudowrite just released their new, updated version of Muse, their in-house generative AI model that’s optimized for writing fiction. It works very much like Muse 1.0: you select a “creativity” setting from 1 to 11, optionally add some prose for it to work from, and then let it go. To allow their subscribers to experiment with it, they made it free to use today—but honestly, it doesn’t use a ton of credits anyway, so unless you’re on the cheapest plan (or generating a 1M+ word tome) it’s not going to break the bank.

I happened to be working on the rough AI draft of Captive of the Falconstar, so I decided to try it out. I wasn’t too impressed with the earlier version of Muse, since I found that it didn’t have much internal consistency and felt a bit like it had just thrown my whole story bible into a blender. But perhaps the main problem was that I was setting the creativity too high. With Muse 1.5, I still ran into those problems at the higher range, but when I set creativity down to 1 or 2, it was actually fairly coherent (though the dialogue still felt a bit like “let’s throw these characters in a blender and see what happens!”).

I think it might be that the way I write my scene beats works better with the more reliable settings of Muse. I tend to write very detailed scene beats, running in the 200 to 500 word range. I suspect that Muse would work much better if I were “discovery writing” my AI draft, instead of outlining it rigorously and generating multiple iterations of each chapter to pick out the best parts of each one. When you crank things up on the creative end, it can get pretty wild, especially at the 11 setting.

But while the internal consistency of the writing isn’t nearly as good with Muse as it is with Claude, the prose is definitely better. So what I’m probably going to do in the future is generate the first iteration of each chapter in Claude 4 Sonnet (or Opus, if I have enough credits for it—Opus is a monster of a credit spender, but the results are quite excellent! I really hope Sudowrite builds a “deluxe” model based on Claude 4 Opus). After that, I’ll generate a couple of other iterations using Muse, then go through it line by line to copy-paste the best parts of the Muse iterations into the master version.

It’s a lot more work, but I think this way I can get the best aspects of both models, and produce a really clean AI draft. And the cleaner the AI draft is, the faster and easier it is to write the human draft—and likely with better results too.

Fantasy from A to Z: C is for Conan

Before there was J.R.R. Tolkien, there was Robert E. Howard. And before there was Middle-earth, there was Conan the Barbarian and the Hyborian Age.

Robert E. Howard had an amazingly prolific writing career, cut tragically short by his suicide. When I think of all the books and stories we could have had if Howard had not shot himself in grief after the death of his mother, it fills me with a profound sense of loss (and makes me want to rewatch the excellent biopic about him—or more accurately, his girlfriend—The Whole Wide World). I love Howard’s fantasy stories—not just the ones about Conan and his adventures, but the ones about Bran Mak Morn, Kull of Atlantis, Solomon Kane… honestly, he wrote so many stories that I have yet to exhaust them all. 

But my favorites are the stories about Conan the Barbarian, who is undoubtedly his most famous literary creation. Over the course of the last century, Conan the Barbarian has taken on a life of his own, with dozens of writers taking a stab at writing stories in the Cimmerian’s world. My favorite of these is probably John Maddox Roberts, though I have a soft spot for L. Sprague de Camp. Harry Turtledove also wrote an excellent Conan novel, Conan of Venarium.

In a lot of ways, Robert E. Howard’s Conan stories set the standard for modern fantasy—or at least for the sword & sorcery strain of it. Tolkien later established the epic fantasy strain, and you can make a solid argument that every other successful fantasy book is derivative of one or the other (or both). Where the epic fantasy strain tends to run super long, with novels in the 200k to 400k word range, the sword & sorcery strain tends to run much shorter, with many of the original Conan stories clocking in at under 10k words. In fact, from what I’ve gathered, until Lord of the Rings became popular in the 60s and 70s, most readers thought that the natural length of a fantasy story was under 10k words.

For the Conan stories, that’s probably true—or at least, under 40k words, since many of Howard’s original novellas are quite good. My favorite of his is probably either “The Tower of the Elephant” (perhaps the most classic Conan story) or “The Black Stranger,” which had a very interesting Mexican standoff between three stranded pirate captains that Conan totally blows up. I also really enjoyed “Iron Shadows in the Moon,” mostly because the female love interest gets an interesting and satisfying character arc. The crucifixion scene from “A Witch Shall Be Born” was really great, too, and of course, the brutal savagery of “Red Nails” made a really big impact—though since that was the last Conan story Howard wrote before he shot himself, it has a very dark edge to it.

Howard only wrote one Conan novel, and to tell the truth, I wasn’t particularly impressed by it—it just felt like a generic Conan story, padded with a bunch of filler to increase the length. But I did really love Conan the Marauder by John Maddox Roberts, where Conan rises through the ranks of a horde of nomadic tribesmen, starting as their slave and eventually becoming the right-hand man of the Hyborian age’s Genghis Khan. The two major villains of that book had exceptionally satisfying deaths, and the writing was almost as pulpy and glorious as Howard’s writing itself.

After you’ve read all the original Conan stories, you really should watch The Whole Wide World. It’s a wonderful film about the only woman Howard ever loved, his on-again off-again girlfriend Novalyne Price, and their turbulent relationship. As a writer, I really appreciated the glimpse that the movie gives into the life of the author himself—and on how some of his eccentricities as a writer mirror my own. Thankfully, though, my family life has been much more stable. I don’t blame Novalyne Price for rejecting Howard, but I am very thankful for my own wife and children. My own writing changed dramatically when I became a husband and father. I can only imagine what wonderful stories we would have had if Robert E. Howard’s life had taken a similar path.

The book I’ve written that comes closest to matching the mood, theme, and action of a typical Conan story is probably The Riches of Xulthar. It isn’t nearly as good as the original Conan stories, but I do think it compares favorably against some of the later knock-offs. The idea for it came when I was playing around with ChatGPT and asked it to write me a fantasy adventure story in the style of Robert E. Howard. Things took off from there. Riches of Xulthar was my first AI-assisted novel, though after using AI to generate the rough draft, I rewrote the whole book to put it in my own words, which is the process I use for all of my AI-assisted books. If you’re interested, you can do a side-by-side comparison between the AI draft and the human draft on my blog. 

Thoughts on the Worldcon 2025 AI “scandal”

I’ll just come out and say it: I predict that the world’s last Worldcon will happen before 2034, and that after that, the convention (and possibly the Hugo Awards themselves) will be permanently disbanded. That’s what I think will be the ultimate consequence of the latest “scandal” regarding Seattle Worldcon’s use of ChatGPT, and the anti-AI madness currently sweeping the science fiction community on Bluesky.

If you haven’t been following the “scandal,” you ought to check out Jon Del Arroz’s coverage of it. He’s definitely partisan when it comes to politics and fandom, but he’s neutral on the subject of AI, or as neutral as you’re going to find, especially in writerly circles.

But here’s the TL;DW: the people organizing Worldcon 2025 in Seattle decided to use ChatGPT to help them decide which authors and panelists to put on which panels. This triggered a bunch of authors and panelists who are opposed to generative AI, simply on principle. Some of these authors—including Jeff VanderMeer, who is up for a Hugo award—have bowed out, while others have called for resignations and apologies. Many of the volunteer staff have also stepped down, exacerbating the staffing shortage—which is why the convention relied on ChatGPT in the first place. And apparently over on Bluesky, the scandal is taking on a life of its own, with everyone working themselves up to a massive frenzy over the subject.

My own opinion of the “scandal” is this: it isn’t a freaking scandal! Whatever your opinion on AI-assisted writing, using ChatGPT as an aid to research panelists is totally above-board and a legitimate use of AI. To disagree with that is to say that there is no ethical use-case for generative AI whatsoever, which is hypocritical and absurd—unless, of course, you’re still writing your books on a manual typewriter and submitting them to your publisher via the US postal service. Or using WordStar, if your name is G.R.R. Martin and you’re the last person on earth who “writes” with that defunct software (putting “writes” in quotation marks, since we all know by now that Martin isn’t actually writing anything).

But it isn’t the “scandal” itself that interests me, so much as what the fallout will likely be. Ever since the Sad Puppies debacle in 2015 (and arguably long before that), Worldcon has been dominated by the wokest fringe of SF&F fandom, and it’s been an open secret that the Hugo awards themselves are controlled by the publishers, largely for marketing purposes.

So at this point, the only things really keeping the whole Worldcon/Hugo charade going are 1) woke authors who use the convention to manufacture clout for their failing careers, because they wouldn’t otherwise have a platform, and 2) woke publishers who use the awards to manufacture clout for their poorly-selling books, because they don’t actually know how to market books effectively (at least, not to readers—libraries are a whole other subject deserving of its own discussion, because there is a genuine scandal there). Once those two things dry up, and all of the ruin has been exhausted from these institutions (i.e., Worldcon and the Hugos), I really do think they will collapse and go away.

That’s what I find so fascinating about this scandal: it is so utterly toxic and absurd on its face that it’s going to do permanent damage to Worldcon and the Hugos. The writers of the rising generation who will one day dominate the field are all playing around with these AI tools right now, and doing really interesting things with them. Meanwhile, most of the authors who are screaming about AI on Bluesky right now will either be dead or irrelevant (or both) in the next 20 years. And yes, Mike Glyer, you can quote me on that.

Seriously, though: if the Worldcon community is so vociferously opposed to a legitimate use-case of ChatGPT—namely, to alleviate the already overwhelming burdens being carried by the volunteer staff—AND they continue to be absolutely toxic about it online… who in their right mind would want to be a part of that community? And since the only thing keeping the whole charade going is its ability to manufacture clout, I think its years are numbered—and likely in the single digits.

On the plus side, if/when the Hugos finally die, I won’t have to read any more crappy woke books to be able to say I’ve read (or DNFed) every Hugo award-winning novel.

In defense of AI art & AI writing

If Andrew Tate wrote a book about how to make your wife or girlfriend into your slave, would he be within his rights to demand that no woman reads that book without his consent?

Brandon Sanderson was inspired to become a fantasy writer when, as a child, he read Dragonsbane by Barbara Hambly. Sanderson is now worth some seven or eight figures, while Hambly, who is still alive and still writing, struggles to pay her bills*. Should Hambly be entitled to a portion of Sanderson’s earnings, for inspiring him to become a fantasy writer?

Every mother who has ever lived gives tremendously of herself to her children, even if only in the physical act of giving birth. Should mothers have a legal claim on their children, for monetary compensation for all of the sacrifices they make?

These might seem like crazy questions, but when you consider them in the context of the ethical arguments about AI art and AI writing, they really aren’t. They illustrate just a few of the unintended consequences of the regime that many disgruntled and resentful creators are arguing for, when really what they want is a world in which AI doesn’t exist.

One of the most difficult parts of being a creator is putting your work out into the world and letting it go. At that point, you really have little control over what it does and how it impacts the world. Many artists who labor in obscurity dream of making an impact on the world, not realizing that success—even artistic success—can be far more devastating and traumatic than obscurity. Just ask Rachel Zegler.

I’m not saying that artists shouldn’t be paid for their work. Certainly they should be paid—and certainly there are valid ethical concerns with how AI is disrupting art and literature. But unhinged people who rant online about how AI is “stealing” artists’ work, or how it is “plagiarizing” writers’ books, simply because the LLM’s training data includes free online content (much of which was posted online by said artists and writers)—I don’t think those people really care about the ethical nuances of the debate. I think they just want to force us all to go back to a world where generative AI doesn’t exist.

Did David Weber steal from Star Trek when he wrote the first Honorverse novel? Did John Scalzi steal from Robert A. Heinlein and Joe Haldeman when he wrote Old Man’s War? Did Terry Brooks steal from Tolkien? How about George R.R. Martin?

Where exactly is the line between the “stealing” that should get you thrown in prison, and the “stealing” that people wink and nod at when they say that good artists copy and great artists steal? And how do we know that we’ve drawn the line in the right place? Would we have worse art, or better art if Star Wars had gone into the public domain in the 80s or 90s? Would artists be making less money, or more?

I don’t have the answers to these questions, but I ask them because I think they are worth considering. And I think that most of the artists who think they have the answers are really just acting out of fear.

Will AI outright replace artists and writers? Will it make it impossible for artists and writers to make a living? I remain skeptical, though I acknowledge that there are some ways in which AI art appears to be doing exactly that. For example, I’ve been playing around with OpenAI’s new image generator, making some cover mock-ups, and I’ve been very impressed. But I will still seek out James at GoOnWrite.com for my covers, because he has a much better eye for this sort of thing, and my sales data reflects that his covers sell more of my books than my own covers do.

Should writers and artists expect to be paid whenever their art is used to train an LLM? Aside from the impracticality of enforcing such a law, I don’t think that we should—at least, not for general training data. Fine tuning is a different matter. If an AI is going to be fine-tuned to write in my particular style, I think I have a right to be recompensed for that—and I’d be willing to license that right for a reasonable fee. Perhaps this is a path that artists could pursue as well. But demanding that every AI company pay every artist for training their LLMs is kind of like Barbara Hambly demanding that Brandon Sanderson pay her a portion of his earnings. Likewise, whenever artists or writers demand that their intellectual property is excluded from the training data, it smacks to me of the first question with Andrew Tate and his hypothetical book.

I will admit that I’m biased in favor of AI, since for the last two years I’ve been working to incorporate it into my own creative process. But I’ve been doing this out of a recognition that these things we call “writing” or “making art” are going to change because of these new technologies. In a world saturated with AI, will it still be possible to make a living as an artist or a writer? Yes, I believe it will, but at the same time, I believe that our conception of what it means to be an “artist” or a “writer” will almost certainly change. That’s why I’ve chosen to embrace these tools, rather than fight them—and why I think my fellow artists and writers should as well.

*At CONduit 2010 in Salt Lake City, Barbara Hambly was the guest of honor, and in her keynote address she talked about her struggles to pay her bills with writing. I assume that things haven’t changed much in the years since then, though I would be delighted to learn that I’m wrong.

Should I split my epic fantasy series into two trilogies?

So I’m working on the first book in a new epic fantasy series, called The Soulbound King. It’s basically a fantasy retelling of the life of King David, loosely adapted from the biblical stories about his life. I’ve already outlined the first book and generated a rough AI draft, which came in at 153k words. The final draft will likely be longer than that, but I think it’s very likely that I will be ready to publish it before the end of the year.

The question I’m currently grappling with is whether to keep it as a seven book series, or to release it as two trilogies with a bridge novel in the middle. Frank Herbert did a similar thing with his Dune books: the first three books (Dune, Dune Messiah, and Children of Dune) were a trilogy, and the next book, God Emperor of Dune, was supposed to be a bridge novel setting up the second trilogy—except he died before finishing the last book, so his son Brian Herbert got together with Kevin J. Anderson to write it, and then they blew it up into a franchise… point being, stuff like this has been done before.

Now, I’m reasonably confident that I’m not going to die before finishing the last book. In fact, I’ve already made a 7-point outline for all seven books, so I know exactly where they start and end, with the inciting incident, midpoint, climax, etc. I’m also writing these books with AI assistance, which makes it possible for me to write them much faster than I otherwise could. For the first book, The Soulbond and the Sling, I anticipate that it will only take six to nine months of total work to go from story idea to finished draft.

But the trouble with writing a seven book epic fantasy series is that a lot of readers aren’t going to bother picking it up until all seven books are out. This is because so many readers have been burned by authors like George R.R. Martin and Patrick Rothfuss, who have not and likely will never finish their bestselling series. I can’t really blame the readers for that (though I can and do blame the authors), but it creates a market reality that I need to anticipate and plan for.

So here’s what I’m thinking: instead of making it a seven book series, I’ll make it two trilogies with a bridge novel in-between. The first three books will complete one arc, and the last three books will complete another arc. I’ll wait to release the first book until after I’ve completed the AI draft of the third book, so that way I can release all of the books in the first trilogy within 1-3 months of each other. And after the first trilogy is complete, I’ll market it as a trilogy while working on the last four books, probably releasing each of those a year apart, as I finish them.

The reason I’m thinking about this now is that a strategy like this is going to influence how I write all of these books. If I’m going to split the series into two trilogies, the last thing I want to do is end the first trilogy on a cliffhanger. It has to hold together as a complete story, with only one or two loose threads. But since I’m still in the early writing stages of the first book, I still have enough room creatively to make that kind of adjustment. I just have to decide if that’s truly the plan.

By the way, the first trilogy ends with the fantasy equivalent of the Battle of Mount Gilboa, where the Saul and Jonathan characters die in an epic battle and the David character becomes king (I know that in the Bible, there was a gap of several years between those two events, but I’m combining them for purposes of this book). So it is a rather natural stopping place, even if it does end on a massive downer, followed by a false victory (the second trilogy begins with David and Bathsheba).

Anyways, what do you think of this plan? Does it sound like a good idea, or is there a compelling reason I haven’t thought of yet for why I shouldn’t do it?

The rough AI draft of The Soulbond and the Sling is complete!

So I just finished the rough AI draft of The Soulbond and the Sling, after nine days of outlining and prewriting, and five days of working with Sudowrite to generate it. The rough draft clocks in at 18 chapters (plus a prologue and an epilogue), 80 scenes, and 153,254 words. I used about 770,000 AI credits from start to finish, including for generating all of the characters and worldbuilding in addition to the text of the draft itself.
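For a rough sense of what that costs, the two figures quoted above work out to about five credits per word of rough draft. A quick back-of-the-envelope check (using only the numbers from this post):

```python
# Back-of-the-envelope estimate from the draft stats quoted above.
# Both figures come straight from the post; the ratio is just arithmetic.
credits_used = 770_000   # total AI credits spent, including worldbuilding
draft_words = 153_254    # words in the rough AI draft

credits_per_word = credits_used / draft_words
print(f"{credits_per_word:.1f} credits per word")  # → 5.0 credits per word
```

Note that this overstates the per-word cost of the prose itself, since the credit total also covers character and worldbuilding generation.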

I have to say, I am really impressed with the incremental improvements over at Sudowrite, and with Claude 3.7, which was the AI model that I used to generate most of this book. I did try out Sudowrite’s new Muse model, but I wasn’t too impressed with it, at least for generating new chapters. For the in-chapter tools, such as guided write, expand, or rewrite, it’s probably fantastic, but when generating new chapters from my outline, it felt too much like it threw all my worldbuilding into a blender. Most likely I either had the creative setting set too high, or I gave it too many prompts.

But when I switched to Claude 3.7 (Sudowrite’s “Excellent” model), the results were amazing. I seriously felt less like I was writing the novel and more like I was reading it for the first time. There’s still a lot of work to be done, especially in the second half of the book, where many of the scenes strayed from the overall story structure, either forgetting things that had already happened or assuming things that hadn’t yet. There’s also quite a bit of worldbuilding that I would like to add in, and a handful of small hallucinations that need to be cut out, as well as a major change that I made in one of the characters and need to smooth out in other scenes… but overall, I found myself really enjoying this book, and was frankly surprised at how well it fleshed out the setting and characters, making them really stand out. It also added some really great dialogue that is probably going to make it to the final draft.

I was originally planning to lay this WIP aside while I switch to another project, but I think now the best course will be to work on the AI draft until I make it as good as I can. That way, I can tinker with the AI prompts while they are still fresh in my mind. Once I’ve gotten the AI draft as good as I can make it, I’ll lay it aside for a while to let my subconscious work on the story, so that when I pick it up again, I’ll be better able to do an awesome human draft.

I would have added a mock-up for the book cover, but you would not believe how hard it is to get an AI image generator to give you a picture of David and Goliath that doesn’t have David holding a bow and arrow! Seriously—every time I prompt it for a fantasy illustration of David and Goliath, where Goliath is a giant horned monster, it shows David with a bow and arrow instead of a sling. It’s almost as bad as the strawberry problem! But let’s see if WordPress can do it…

Nope. Yet another AI image fail. I even specifically said he was wielding a Balearic sling. Sigh.