What if it’s all hallucination?

I’ve been thinking a lot recently about something my wife said about AI. She’s finishing up her PhD in computer science, and knows more about generative AI and computational linguistics than just about everyone I know IRL (and most people I follow on the internet, too). So when she speaks on the subject, I do my best to listen.

Ever since OpenAI and ChatGPT took the world by storm, she’s been telling me that she doesn’t think the hallucination problem (where LLMs make stuff up) will ever be solved. Indeed, she doesn’t think it’s a “problem” in a technical sense at all, because every response from a generative AI is a hallucination—and that’s kind of the point. These aren’t really thinking machines, they’re hallucinating machines, replicating patterns in human language and thought. What difference does it make if the answer is false or true?

We call it “artificial intelligence,” but that’s really a misnomer, because these machines have no “intelligence” at all—at least, not in the human sense. Instead, they are like mirrors of our own intelligence, parroting back things that sound like they involve real thought, when really it’s all just pattern replication. They aren’t trained to recognize truth, they’re trained to recognize patterns. So, in reality, everything an AI generates is a “hallucination.”

This is why she thinks that we will never fully solve the hallucination “problem.” Indeed, the whole effort is a bit like trying to turn a lion into a vegan. And until we can train an AI on absolute truth—a thing that humanity has never been able to agree upon, much less reduce to zeroes and ones—then all we will really be able to do is create better and better plumage for our stochastic parrots.

What are the implications of this? First of all, we can safely ignore the worst of the AI doom porn, because a machine that cannot fundamentally recognize truth from falsehood is probably not capable of taking over the world and exterminating or enslaving humanity, even if it does qualify as a “general” intelligence.

We can also lay aside the fear (or the pipe-dream) that AI will 100% replace humans in all or most or really any fields. Even if they can do 90% of the work, recognizing truth is still an essential part of just about everything we as humans do. We can give it jobs and tasks—perhaps even some genuinely complex tasks—but so long as these machines cannot fundamentally distinguish between truth and falsehood, we will still need a human to oversee them.

That doesn’t mean that most humans are safe from being replaced by AI, though. If an AI-augmented person can accomplish the work of 10x or 100x the number of other human workers, we’re still going to face a massive disruption in the labor market and society as a whole. The question, then, is one of ownership and distribution. Who owns the AI? How do we distribute the productivity gains from AI? These are some of the difficult problems we need to solve in the next few years.

But the real problem—and the scariest implication of all of this—is the question of truth itself. After all, if AI is fundamentally incapable of recognizing truth, and all AI output is hallucination on some level, then who determines what is true and what is not? Sam Altman? OpenAI? Congress? Some three-letter government agency?

I think this is going to be the defining question of the rising generation, which is growing up in an AI-native world. What is truth? How can we recognize it? How do we distinguish between what is true and what is false? Increasingly, we are going to find that these are questions that AI cannot answer. And in a world saturated by deep fakes, bots, and sock puppets, where the internet is dead and all the most powerful players are constantly fighting a 5th gen war with each other, truth will be the thing we are all starving for.

The tragedy of the millennial generation is that everything in our world conspired to starve us of the three things we needed most. More than anything else, we hungered for meaning, authenticity, and redemption—and for the most part, we never got it. You can blame social media, the boomers, capitalism, student loan debt, the Republicans, the Democrats—it really makes no difference. All of those things and more came together to hobble our generation and make it almost impossible for us to launch.

Will the same thing happen with the zoomers and gen-alpha over the question of truth? It appears that things are moving in that direction. In a world saturated with AI, truth becomes a scarce and valuable commodity.

So what do we do? First, I think it’s important to recognize that AI cannot and never will be an authority on truth. At best, it only mirrors our own thoughts and ideas back to us—and at worst, it feeds us the thoughts and ideas of those who seek to control us. But AI itself is neutral, just like a gun or a knife lying on a table is neutral. What matters is how it is used.

Beyond that, I don’t really know what to say. Only that this is something I need to think about a lot more. What are your thoughts?

Without AI, I would probably not be writing

I recently got another anti-AI one-star review that I want to pull apart, because it’s pertinent to what I want to say. I actually came up with the title for this post before I received the one-star review, so I’m not just fisking this one for the sake of fisking. With that said, though, there is definitely a lot to pull apart.

I was prepared to rate this as 2 stars. It is repetitive with no real character depth or development and a sincere lack of dynamic or engaging writing. 

Two stars… so magnanimous! In all seriousness, though, it’s worth pointing out that in spite of all the book’s flaws, she did read it all the way through. That’s important for later.

Then I read the “author” note at the end of the book that was defending their use of generative AI in their writing process…. not only that but also seemingly insulting other writers who are anti-AI claiming that readers dont seem to care about it.

You know what’s insulting to any author, whether or not they are “anti-AI”? Putting scare quotes around the word “author” when referring to them. Though I suspect that she did that on purpose, fully intending to insult me, whereas I did not intentionally insult anyone. For the record, this is the passage from the author’s note that she claims is “insulting” to authors by saying that “readers dont [sic] seem to care about [AI writing]”:

Besides which, after sharing The Riches of Xulthar with lots of readers, I’ve found that most of the rage and vitriol against AI-assisted writing is on the writer side of things, not the reader side.

The other thing is that I was not trying to “defend” my pro-AI stance through the author’s note, just explaining my writing process and sharing the story behind the story, like I do in the author’s notes in the back of all my books. That’s not me being “defensive,” that’s just me sharing my story.

But there is something profoundly narcissistic about the way this reader is framing her review. Because I stated something about readers that contradicts her anti-AI worldview, I must be intentionally “insulting” her (or the anti-AI authors she’s white knighting for, which amounts to the same thing). Because I wrote about how I used AI to help write the book, I must be “defending” myself against her anti-AI views. This kind of narcissism can only really come from someone who lives in an echo chamber and is not used to having their worldview challenged.

Well Joe, you are wrong. This book was lifeless and dull and the use of AI showed. Everything was one dimensinal and flat. Word choises were even static. We (readers) get it… FMC had auburn hair. There are other words besides auburn to describe it….

I’m not going to deny, there is some legitimate criticism here. Rescuer’s Reward was one of my earlier AI-assisted books, when I was still experimenting a lot and learning how to incorporate AI into my creative process while still preserving my voice and writing multi-“dimensinal” [sic] characters and stories. So it doesn’t surprise me all that much that I missed the mark with this particular reader for this particular book. Lesson learned. Thanks for the feedback and the useful data point.

With all of that said, though… I can’t help but notice that she read the whole book.

I have yet to hear a compelling AI argument in the reralm of artistic expression and this “book” just exemplified everything yet again. No heart. No depth. Not good.

This is the crux of the issue, and the reason I wanted to frame this post as a line-by-line response to this review. Is there “a compelling AI argument in the reralm [sic] of artistic expression”? Or is any author who uses AI committing an unforgivable transgression against their art?

Here’s the thing: most of the other authors I know gave up writing a long time ago. We all started out with bright-eyed dreams about telling great stories and creating great art, but the hard truth is that it’s almost impossible to make it as an author.

There are many reasons for this: people don’t read very much in today’s culture (I personally blame the public school system for that), and the publishing industry has always been brutally rapacious and exploitive of writers (just read The Untold Story of Books by Michael Castleman—it’s a really fantastic history of the written word).

But the writing itself is also very hard. There’s a reason why even many successful writers are like this guy, single and living in what amounts to a glorified shack. Most of my writing friends quit when they got married and started having kids. I sincerely hope that they’re just on a 20+ year hiatus and plan to get back to writing again someday, because some of the stuff they wrote was really, really good (I’m looking at you, Nathan Major!). But sadly, that won’t make up for the stuff they would have written, but never did.

My wife and I just had our third child. Writing with small children is very difficult, especially when your wife has a full-time job. I love them all to death, though. If I had to choose between being a single writer, or putting my writing on hold for 20+ years and restarting my whole writing career from zero just to be able to raise a family, I wouldn’t hesitate for a moment to choose my family. But that choice would put a huge burden of guilt on my wife, because my writing was one of the key things that drew her to me back when we were dating. And while our marriage is probably strong enough to survive that, I can’t deny that it would be an incredible strain.

Without AI, I probably would be facing this choice right now. Even though I had managed to streamline my writing process in the last few years, I’ve never been an especially fast writer. Without AI, it took me about a year to write each novel—and that’s before all the demands on my time and energy that come with having small children.

But AI has enabled me to continue to pursue my career and my art, even through this period of life. Not only does this help me to be a better husband and father (which is ultimately the most important thing), but it also means that my readers don’t have to wonder about the things I would have written, but never did. I can write those books now. I can give those stories to the world.

I’m not talking about AI slop. I’m talking about incorporating AI into the creative process deeply enough that it enhances, rather than replaces, my human creativity. We don’t have to be afraid of AI. It makes so many things possible—including running a profitable indie author business while raising (and soon homeschooling) 3+ small children. But it takes a lot of practice to get to that point. And generative AI is still so new that I don’t think there’s anyone who’s truly mastered the art of AI-assisted writing.

My Sea Mage Cycle books are mostly for practice. They’re meant to be fun, light reading. If it gives my readers a satisfying respite from all the doom and gloom in the world these days, I consider that book a success. The experience of writing each of them has helped me to be a better AI-assisted writer. And while the earlier ones may read like AI slop, that won’t be the case for long.

My spicy take on the ethics of AI art

There is nothing unethical about using generative AI to write or make art. Those who say otherwise either haven’t thought through their position, or they are lying for rhetorical effect. Or both.

If Andrew Tate wrote a book titled How To Enslave Your Woman For Fun and Profit, would he be within his rights to demand that no woman ever read that book? If you believe that AI is unethical because it was trained on writers’ and artists’ work without their consent, congratulations—that is exactly the position you have taken. You can’t pick up one end of the stick without also picking up the other.

Whether or not writers and artists were fairly compensated for the use of their work is a separate issue. Many of these AI companies obtained their training data by indiscriminately scraping the internet, which means they used a lot of pirated work. But if using copyrighted material to train an AI system is fair use—and here in the US, the courts have ruled that it is—then all that they owe you is the cost of your book. So if your book is $2.99 on Kindle, that is what OpenAI owes you. Congratulations.

Does Brandon Sanderson owe Barbara Hambly royalties? Brandon Sanderson has sold something like $45 million in books, comics, and other media. Barbara Hambly struggles to pay her bills. Barbara Hambly wrote Dragonsbane, the young adult book that inspired Brandon Sanderson to write fantasy. Clearly, her work had a deep and lasting influence on him. So does he owe her?

If you believe that AI companies owe artists and writers more than simply the price of their own published work, this is a question that you must wrestle with. If it counts as “stealing” to train an AI on artists’ and writers’ work, then every artist and writer is also a thief, and owes royalties to the people who inspired them. Which is why the word “plagiarism” has a tight definition, and why our legal code recognizes fair use.

There is nothing unethical about using generative AI to write or make art. Almost everyone who says otherwise is either lying to themselves about that fact, or lying to you.

Why would someone lie about that? For the same reason people accuse you of being a racist, or a sexist, or a fascist, or a white supremacist, or a Christian nationalist… because using that term gives them power. They don’t actually want to make a reasoned argument. They just want to “win” the argument without ever having to make it in the first place. They use words that they know will get the reaction that they want, and they scream them as loudly as they can until they get it. That’s what the public discourse looks like in 2025.

To be fair, this is not just something that happens on the left. Plenty of people on the right will scream “woke” or “based” or “demonic” to cow people into accepting their point of view. These words do have meaning, and can be used to make a well-reasoned argument—just like “racist” and “fascist” have meaning. But most of the people who use these words are just wielding them like rhetorical clubs to bully their way around.

There is nothing unethical about using generative AI to write or make art. Most of the people who say otherwise are just using the word “ethical” to mean “things I don’t like.” They don’t believe in objective good or objective evil, and instead believe that things like truth and morality are relative. In other words, they think that good and evil change depending on who’s looking at it. This is why so many writers today can’t write a compelling villain (or a compelling hero, for that matter). They just don’t understand how good and evil work.

So why should you listen to them when they scream at you for using AI? You shouldn’t. They don’t know what they’re talking about. Or worse, they do, but they’re lying to you, because they want to compel you not to use AI in your art. Why? Because they’re afraid that if you do, you’ll create something better than what they can create. And on that point, they’re probably right.

Soulbond and the Sling AI draft complete!

Last week, I finished the AI draft of The Soulbond and the Sling, the first book in my new epic fantasy series! Here are the stats:

  • 20 chapters (including prologue and epilogue)
  • 80 scenes
  • 136,294 words

So it’s a little short for an epic fantasy novel, but this is only the AI draft. As I rewrite it into the human draft, I will add more details and nuance that will hopefully flesh it out, bringing it closer to the 150k – 180k word range.

I started the AI draft in March, but I wasn’t working on it continuously all that time. I worked on it in about four separate bursts, each one lasting a few weeks. In total, it took 70 working days, or approximately 12 working weeks to write it.
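For the curious, the arithmetic behind those stats works out roughly like this (the six-day working week is my assumption here, introduced only to reconcile 70 working days with roughly 12 weeks of writing):

```python
# Back-of-the-envelope math on the AI draft stats quoted above.
# Word count and working days come from the post itself; the
# six-day working week is an assumption, not a stated fact.
total_words = 136_294
working_days = 70

words_per_day = total_words / working_days
weeks_at_six_days = working_days / 6

print(f"~{words_per_day:.0f} words per working day")
print(f"~{weeks_at_six_days:.1f} six-day working weeks")
```

That comes out to roughly two thousand words of AI draft per working day, which tracks with the bursts described above.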

The next stage is the human draft, where I rewrite the whole thing from scratch to make sure it’s entirely in my own words. I’ll keep the AI draft on-screen as a reference, and may use some turns of phrase that I like, but I’m not going to copy-paste from it. This way, the resulting work will be entirely my own.

I don’t know how long it will take to finish the rough human draft, but I expect it will take longer than the AI draft, perhaps even 2x or 3x as long. Then again, if the AI draft is clean enough, it might take even less time than that. I’ve been getting pretty good at these AI drafts, and this one is already at least partially in my own voice, given how my personal taste guided which parts of which AI-generated iterations I decided to keep. I also did a revision pass with no AI whatsoever, mostly just to smooth out inconsistencies.

But the AI draft is complete! This is the longest book I have ever written with AI, and one of the longest ones I have written in general. I hope it is the first of many more to come!

(And also, I really need to get a better book cover!)

Get ready for the anti-AI witch trials…

Really interesting news story about an artist at Dragoncon who was forcibly removed from the convention, with the cops getting called and everything, for allegedly selling AI-generated art. Even though the artist had bought and paid for the table, the convention threw them out. It’s unclear whether the table was refunded, but I’m guessing it probably wasn’t.

Of course, as JDA rightly points out, if you do AI well, it’s impossible for anyone to tell whether the art is AI or human, so what this really does is set the precedent that merely accusing someone of using AI in their art is enough to do serious damage to their business. A lot of these artists make most of their money at conventions like this, and Dragoncon is one of the top-tier sci-fi media conventions, right up there with San Diego Comic-Con and FanX Salt Lake.

How long is it going to take before an artist is falsely accused of using AI and ruined because of it? Has that already happened? How many more artists are going to be thrown out of conventions like this? How many artists are going to decide that it just isn’t worth it to attend these sorts of conventions, whether or not they use AI? How long before we find that some of the artists leading these witch hunts are themselves using AI to create their art?

In the end, when AI has been normalized and no one (not even in fannish circles) blinks an eye at AI-assisted art, we’re going to look back at this time with much the same dismay that we look back at the Salem witch trials. But that may not be for another ten or twenty years. Do we really need to go through all this madness first? This is why we can’t have nice things.

Fisking 1-star reviews bashing AI

They say that authors should never respond to one-star reviews. That’s generally good advice, and for most of my career, I’ve studiously followed it. However, I’ve recently begun to get a new kind of one-star review that baffles me—reviews that essentially say: “the book was good, but it was written with AI so I hate it.”

Here’s an example:

This book is written with AI. Incredibly disappointing as a reader to give a book/author a chance and then to get to the end of the book only for the “author” to then announce the AI card. If I could give zero stars, I would for this alone. I also didn’t appreciate that this use of AI was not announced until the ending Author’s Note. If “authors” are going to cut corners and put their name to computer-generated mush, they should be willing to put that information on the front cover. The book struggled to find its pace, and some parts read as though they were written for a child’s short story competition while others felt as though the writer was snorting crushed up DVDs of Pirates of the Caribbean as they wrote.

Let’s break it down:

This book is written with AI. Incredibly disappointing as a reader to give a book/author a chance and then to get to the end of the book only for the “author” to then announce the AI card.

Yes… but I can’t help but notice that you got to the end of it. In other words, you finished the book. Also, from the way you tell it, it seems that you didn’t realize the book was written with AI until you got to the very end. So based on your own behavior, it doesn’t seem that quality was the issue.

I also didn’t appreciate that this use of AI was not announced until the ending Author’s Note. If “authors” are going to cut corners and put their name to computer-generated mush, they should be willing to put that information on the front cover.

Okay… but if my book was just “computer-generated mush,” why did you finish it? And why were you surprised when you learned that it was written with AI-assistance?

I can understand the objection to books that were written solely with AI, with little to no human input. But that’s not how I write my AI-assisted books. Instead, I outline them thoroughly beforehand, write and refine a series of meticulously detailed prompts (usually using Sudowrite), and generate multiple drafts, combining the best parts of them to make a passable AI draft. And then I rewrite the whole thing in my own words, using the AI draft as a loose guide with no copy-pasting.

Why would I go through so much trouble? Because the AI drafting stage gives me a bird’s-eye view of the book, allowing me to identify and fix major story issues before they metastasize and give me writer’s block. Before AI, that’s where 80% of my writer’s block came from, and it often derailed my projects for months, so that it took me well over a year to write a full-length novel. But with AI, I’m no longer so focused on the page that I lose sight of the forest for the trees. So even though generating and revising a solid AI draft adds a couple more steps to the process, it’s worth it for the time and trouble that it saves.

That’s the way I use generative AI in my writing process. But there are many other ways—and I hate to break it to you, but most authors use AI in one way or another. If an author uses Grammarly to fix their spelling and grammar, should they disclose that on the cover? If they use MS Word? What if they used a chatbot to brainstorm story ideas, but went on to write it entirely themselves? Should that also be disclosed?

The book struggled to find its pace, and some parts read as though they were written for a child’s short story competition while others felt as though the writer was snorting crushed up DVDs of Pirates of the Caribbean as they wrote.

Yes… but again, I can’t help but notice that you finished the book. And after you finished it, you were surprised to learn that it was written with AI. So with all due respect, I’m going to call BS on your objections here. I think you only decided you hated the book after you learned it was written with AI, and you came up with these objections after the fact. Whatever.

I think a lot of the people who object to AI are really just scared and angry. They claim to have principled, ethical objections to the technology, but few of them follow through to implement that principled stance into every area of their lives. After all, if you use Grammarly, Google Docs, or MS Word, you are using generative AI just as surely as I am using ChatGPT and Sudowrite. For most people, the ethical objections are just a smokescreen for their general fear of change. They’re fine with embracing the convenience the technology offers them in their own personal lives, but they insist that everyone else—including me—live according to their principles, no matter how inconvenient or difficult it may be.

As an example of that, check out this one-star review:

The arts! Whether visual, performance, or literary—my haloed experience has been the act of creating and sharing a connection to the profound or sublime. Why, then, would any artist—musician, dancer, sculptor, painter, or author—offload (abdicate) the act of creation to AI? Process versus product. Mr. Vasicek included an afterword for this volume, describing his workflow and the efficiency of collaboration with AI: a 6,624-word day! another volume completed! Mr. Vasicek obviously owns the skills to weave rich character development and scenes. Perhaps Mr. Vasicek’s AI collaboration explains why these characters, the plot, the narrative—and subsequently, the entire story— are so flat and undeveloped. Although his lead male shows some undeveloped promise, the mother’s too-oft used “dear” and “my love,” and the daughter’s clutching at her mother’s apron are cringe-inducing. Perhaps Mr. Vasicek might eschew AI-assisted writing, seeking a future of quality over quantity.

Let’s break it down:

The arts! Whether visual, performance, or literary—my haloed experience has been the act of creating and sharing a connection to the profound or sublime. Why, then, would any artist—musician, dancer, sculptor, painter, or author—offload (abdicate) the act of creation to AI?

Because for some of us, writing is more than a “haloed experience”—it’s an actual job. It’s what we do for a living. And if you want to do your best work, you need to use the best tools. We used to build houses with plaster and lath and wrought-iron nails, using hand tools and locally-sourced lumber. But today, you’d be a fool not to use power tools and materials sourced from a building supply store, or your local Home Depot. If that makes your building experience less profound or sublime, so be it.

Process versus product. Mr. Vasicek included an afterword for this volume, describing his workflow and the efficiency of collaboration with AI: a 6,624-word day! another volume completed!

I’m not gonna lie: there is a certain degree of tension between art-as-product and art-for-art’s-sake. But the two are not mutually exclusive. A house can still be a beautiful work of art without taking as long to build as a cathedral. Likewise, a book can still be a beautiful work of art without taking as long to write as Tolkien’s Lord of the Rings.

Again, you’re trying to pigeonhole me into your “haloed” idea of what a “true artist” should be—which would make it absolutely impossible for me to make a living at this craft. If all of us writers followed that path, there are a lot of wonderful books that would never get written. And I doubt that the overall quality of the books that do get written would rise.

Mr. Vasicek obviously owns the skills to weave rich character development and scenes.

Now we get to the interesting part. I checked this reviewer’s history, and this was the only review they’ve written for any of my books. Therefore, I can only assume that this is the only book of mine that they’ve read. But if that’s the case, how do they know that I have “the skills to weave rich character development and scenes”? If the book I wrote with AI was pure trash, why would they say that I obviously have some skill?

Once again, we’ve got a case of “I enjoyed this book, but it’s written with AI so I hate it.” In other words, it’s not the book itself that you hate, so much as the way I wrote it. You object to the idea of authors using AI, not to what they actually write with AI.

Perhaps Mr. Vasicek’s AI collaboration explains why these characters, the plot, the narrative—and subsequently, the entire story— are so flat and undeveloped. Although his lead male shows some undeveloped promise, the mother’s too-oft used “dear” and “my love,” and the daughter’s clutching at her mother’s apron are cringe-inducing.

Finally, some specific and legitimate criticism. And while I do think there’s a degree of retroactively looking for faults after enjoying the book, I’m totally willing to own that these criticisms are valid. This particular book (The Widow’s Child) was one of my first AI-assisted books, and I was still learning to use these AI tools as I was writing it. I did the best I could at the time, but if I were to write it today, I could probably do a lot better, smoothing out the annoying AI-isms that you’ve pointed out here.

But the book is currently sitting at 4.4 stars on Amazon (4.1 on Goodreads). And the other readers do not share your objections. Here is another review, pulled from the same book:

Since waiting a year or more to read the next book in a sequel is hard on my stress levels, I’m liking this AI. It means talented authors like Joe Vasicek can churn out an outline faster. Then he can bring in his talented ideas, such as the content of this heart-stopping adventure of The Widow’s Child, to fill out the nitty gritty in record time.

Clearly, it’s not the case that all (or even most) readers feel the same way about AI as you do.

Perhaps Mr. Vasicek might eschew AI-assisted writing, seeking a future of quality over quantity.

Why can’t we have both? Why can’t we have quantity with quality? Why can’t AI make us more creative, instead of replacing our human creativity?

This is all giving me flashbacks to the big debate between traditional and indie publishing back in the early 2010s. Purists argued that indie publishing would destroy literature by flooding the market with crappy books, while indies argued that removing the industry middlemen would create a more dynamic market, one that would give readers more choices and allow more writers to make a living. Both were right to some degree, and both were also wrong about some things. In the end, we reached a middle ground where “hybrid publishing” became the norm.

The same kind of debate is happening right now between human-only purists and AI-assisted writers. The biggest difference is dead internet theory. In the early 2010s, the ratio of bots to humans on the internet was still low enough to allow for a lively debate. Today, there’s so much bot-driven outrage on the internet that most of us are just quietly doing our own thing and avoiding the debate.

That same bot- and algorithm-driven outrage is driving a lot of people to be irrationally angry at or afraid of AI. With that said, I can understand why so many people are upset. And I do think there are a lot of valid criticisms of this new technology, including its environmental impact, copyright considerations, how the models were trained, and the societal impact it’s already starting to have. But if we don’t have an honest and good-faith debate about these issues, we can’t solve any of them. And we can’t have a good-faith debate if one side is coming at it from a place of irrational anger or fear.

In any case, I find it super annoying when readers who clearly found some value or enjoyment in my books turn around and give them one-star reviews merely because they don’t like how I used AI. And at the risk of going viral and soliciting more one-star anti-AI reviews, I think it’s worth voicing my views and opening that debate. So what are your thoughts on the subject? How do you feel about using AI as a tool to help write books? Can we have quantity with quality? Can AI help us to be more creative, not just more productive? What has been your experience?

Fantasy from A to Z: U is for Unicorns

If you were expecting a post on unicorns or other mythical beasts, I hate to disappoint you again, but that’s not what this is going to be. Instead, I want to write a bit about that most mythical of all human creatures: the full-time fiction writer.

Okay, perhaps we’re not that mythical. After all, Brandon Sanderson estimates that of all his students over the years, perhaps as many as 10% of the ones who set out to become full-time writers actually make that dream into a reality. I sometimes wonder: would Brandon count me as one of those 10%? Should he? The answer to that is… complicated. 

One of the first questions I get whenever I tell people that I’m a writer is “oh, wow—how is that working out for you?” Which is really a roundabout way of asking how much money I make, and whether I’ve been able to turn it into a full-time career. I am not (yet) a major bestselling author, and the closest thing I’ve had to a breakout thus far has been my (now unpublished) Star Wanderers novella series, which managed (mostly by accident) to hit the algorithms correctly back when a permafree first-in-series with lots of direct sequels was the best path to success. Then the publishing landscape changed, the algorithms shifted to favor pay-to-play advertising, and my books got left behind.

I will admit that if it weren’t for my wife’s income, I wouldn’t be able to pursue writing full-time. As a family, we’re following a path very similar to my Scandinavian ancestors, where the wife tends the farm while the husband goes off a-viking. In other words, my wife has the stable, traditional career that provides our family with some degree of security, while I have the more risky career that has the potential to catapult us into transformative levels of wealth and prosperity. We’re doing just fine, but it does sometimes feel like my Viking ship has yet to land ashore.

Because here’s the thing: something like 90% of the money in book publishing (after the booksellers, publishers, and other middlemen take their often-exorbitant cuts) goes to less than 1% of the writers who make any money at all (and something like 30% of Kindle books never sell a single copy).

For every Brandon Sanderson, there are thousands—perhaps hundreds of thousands—of published authors who write on nights and weekends while holding down a day job to pay the bills. My writing contributes enough to the family budget to justify pursuing it, but if I were still single, I would need at least a part-time job.

Indie publishing has created a lot of opportunity for authors to make a career out of their writing, and there are many successful indies who are making a decent living at it. At the same time, indie publishing has also massively exploded the number of books that are published, so the proportion of full-time to still-aspiring authors is probably about the same (and may have actually tilted the other way). 

In recent years, it has very much turned into a zero-sum pay-to-play game, especially with advertising. From what I can tell, most authors lose money on advertising, and most of those who are making money are spending upwards of $10,000 each month to make $11,000. The elite few who learn how to successfully game the algorithms to blow up their books often put their writing on the backburner to launch their own companies or provide publishing services, leveraging their expertise to make a lot more than they otherwise would.

The algorithms are changing books in some very strange ways. If J.R.R. Tolkien or Roger Zelazny or Robert E. Howard were writing today, would they be able to make it in today’s publishing environment? 

Howard’s Conan stories would either have to be a lot sexier, or else would have to include the sort of tables and character stats you find in LitRPG. His covers would also be a lot more anime, and show a ridiculous amount of cleavage (which he actually might not have had a problem with, judging from some of the old Weird Tales covers). 

Zelazny’s Chronicles of Amber would all be far too short to make it in Kindle Unlimited—to make it in that game, you have to have super long books that max out on page reads, in order to maximize advertising ROI so that you can outbid your competitors. And if you aren’t winning the pay-to-play advertising game, your KU books will sink like rocks. Also, Zelazny took way too much time between books. Gotta work on that rapid release strategy, Roger.

As for Tolkien… hoo boy, there’s an author who did everything wrong. Decades and decades spent polishing his magnum opus, with a short prequel novel that falls squarely in the children’s category (totally different genre) as the only other fantasy book published in his lifetime. I suppose he could have serialized Lord of the Rings, except nothing really happened in episode 1: A Long-Expected Party. Certainly not anything that would adequately foreshadow all the dark and epic battles to come. Perhaps if he followed a first-in-series permafree strategy, and just gave away Fellowship of the Ring for free… and then made The Hobbit his reader magnet for signing up for his email list… maybe that could have worked? After all, there’s always BookBub…

I jest, of course. Each of these authors’ books became classics, not because of their marketing strategy, but because they hit the cultural zeitgeist in exactly the right way. But is it possible for an author to do that today without also getting a boost from the algorithms? Or do the algorithms have more power to shape our culture than anything else? Those are disturbing questions, and I honestly do not know the answer.

And then there’s the question of AI, which is massively disrupting all of the creative fields. In the interest of full disclosure, I am actually quite sanguine about generative AI, and have already been working to incorporate it into my creative process. I’m not a fan of AI slop, but I don’t feel particularly threatened by it. I decided a long time ago that if AI ever became good enough to write an entertaining book, it still would never be able to write a Joe Vasicek book. That’s insulated me from most of the doom porn out there.

Right now, there is a HUGE fight happening between authors like me who are embracing AI, and authors who treat it all as anathema and have vowed to never use any sort of AI in any of their books (except Grammarly, of course, because… reasons. And Microsoft Word. And…) Frankly, it reminds me of the big debate between indie and traditionally published authors, back before self-publishing had lost its stigma. The biggest difference is that the level of online outrage has been ramped up to 11, mostly as a result of the social media algorithms (which weren’t as robust or as powerful back in the early 2010s). I suspect that we will ultimately settle on a “hybrid” approach, much like we did with publishing, but the sheer level of vitriol makes me wonder about that.

On the reader end of things, though, it seems like most readers don’t really care if a book was written with or without AI assistance, so long as it’s actually a good book. Which means that there is a real opportunity for authors who 1) know how to tell great stories, 2) have already found and honed their voice, and 3) know how to strike the right balance between the AI and the human elements. 

Which describes my own position almost perfectly. Over the last fifteen years, I’ve read, written, and published enough books that I have a pretty good handle on what makes a great story. I’ve also honed my voice well enough that I can write in it quite comfortably. And as for the balance between AI and human writing, I’ve been working hard on that since ChatGPT burst onto the scene in 2022. Half a dozen books and about a million words later, I’ve learned quite a lot about how to best strike that balance.

Will AI replace authors entirely, making this particular unicorn extinct? I don’t think so. But AI may radically change our concept of what “books,” “writers,” or “writing” really are. And as I said, even if AI becomes good enough to write a decent book, it will never be able to write a Joe Vasicek book. Only I can do that. Whether or not that’s worth something is up to the readers to decide.

The dangers of relying too much on AI

I saw this really interesting video last week, and it made me think: am I relying too much on AI?

In my personal life, this probably isn’t an issue. I do occasionally ask ChatGPT to make me a recipe or to advise me on a particular topic, but I always run its answers through a gut check, and I assume it’s hallucinating if they don’t pass. If it gives me something that I can quickly and easily verify, I always do that… and about half the time, it turns out to be a hallucination to some degree. So no, I don’t rely on it nearly as much in my personal life as some of the characters in this video.

What about blogging? Don’t be too scandalized, but with my new blogging schedule, I have experimented a bit with using ChatGPT to write some of these blog posts. It’s not like I’ve been copy-pasting everything straight from the chatbot, but I have relied on it a little more heavily than I do in my own writing.

After trying that a couple of times, though, I decided to cut that out and write all of these blog posts by hand. Why? Because I felt like it was creating too much distance between myself and the people who read this blog, and the purpose for writing this blog is to foster a human connection. So it kind of defeats the purpose to rely on a chatbot to generate most of the content I post here. For that reason, I plan to keep writing all these blog posts entirely myself, with only minimal AI input.

So what about my fiction? This is where things get a little tricky. While I totally agree that simply copy-pasting from AI is a piss-poor way to write a book, I do think that AI can be a very useful tool in writing and crafting a novel, provided that you understand the limitations of the AI and don’t rely on it too much. But how much is too much? That is the question.

The biggest way that AI has enhanced my own writing is by giving me a bird’s-eye view of the story as I generate a “crappy first draft.” That bird’s-eye view allows me to see and fix major story issues before they metastasize and give me writer’s block, which is what tends to happen when I write these drafts out entirely by hand. When I’m focused on the page, I tend to lose sight of the forest for the trees, so I don’t notice that there’s a problem with the story until I’m several chapters in and find that I just can’t write.

This has happened with basically every project that I write on my own, and is the main reason why it took me anywhere from six to eighteen months (or longer) to write even a short novel, before I started using AI. However, since I began incorporating AI into my writing process, this problem has basically gone away, and I no longer experience this form of writer’s block at all.

However, while I do rely on AI to help me craft my “crappy first draft,” that isn’t the draft I publish. Once the AI draft is as good as I can make it, I go through it scene by scene and rewrite the entire book in my own words, to make sure I’m telling the story my way and to make it my own. I still keep the AI draft open on another screen and refer to it as I write out the story, but I don’t do any copy-pasting. It’s all written out by hand.

Is this enough, though? Or do I need to add more steps to make sure that I’m not relying too much on AI, and thus losing my own voice? Recently, I’ve been spending a lot more time on the AI draft, generating multiple iterations and combining the best parts to (hopefully) boost the quality. I’ve also been doing a revision pass over the AI draft, tweaking it to smooth over some common AI-isms and (hopefully) adding a bit of my own voice before I move on to the human draft and rewrite the whole thing to make sure it’s all in my voice.

But while this might be enough to keep the book in my own words, is it enough to keep my writing skills from atrophying? Or do I need to occasionally pick up a WIP that is 100% human-written, with no AI at all, just to make sure I don’t lose those skills? That is the question I’m currently pondering. Perhaps short stories could serve that purpose really well; perhaps I should go back to writing them, just as a way to keep my writing skills sharp.

If I were starting out right now as a new writer, I would definitely avoid writing with AI until I’d written enough to find my own voice. And I would also make sure to write at least one novel 100% without AI-assistance, just for the experience, and to prove to myself that I could do it. Otherwise, I think there would be a very real danger in becoming over-reliant on AI to write my books, and thus risk losing my own unique voice, so that none of the books that I write ever truly become my own.

Anyhow, those are some of my current thoughts on the subject. What do you think of this problem?

Remember how I said that AGI is a pipe dream?

A couple of weeks ago, I posted my thoughts on AGI (artificial general intelligence) and all of the doom-porn floating around that we are years, or possibly even months, away from the emergence of an artificial superintelligence that will either usher in an Edenic post-scarcity utopia, or exterminate all of mankind. Believe it or not, this is a big fear in Silicon Valley, among the people who are building these systems (though I suspect that the top-level executives don’t really believe it and are instead exploiting that fear to serve their own ends).

My view, in a nutshell, is that we will not see the emergence of AGI or superintelligence under the current research paradigm, because the current paradigm is based on pure materialism, assuming that intelligence itself is merely an emergent phenomenon, and that if the conditions for that emergence can be replicated, a human-level (or superhuman-level) intelligence will be created. My prediction is that in the next 1-3 years, AI development will run up against a wall, and all of the scaling in the world will fail to produce the sort of drastic gains that the doomsayers are predicting.

Well, it seems that we may be much closer to that wall than I supposed. I’m not super familiar with this YouTuber, but I’ve been following a lot of his content recently, and he seems to be very intelligent and also very keyed into what’s currently happening in AI development. And in this video, he may have just pointed out the wall that we’re about to run up against—if indeed, we haven’t already.

In any case, it’s worth watching, especially if you are looking to incorporate AI into your work life. Lots of practical advice, too.

Anti-AI is the new virtue signaling

According to Merriam-Webster, “virtue signaling” is:

the act or practice of conspicuously displaying one’s awareness of and attentiveness to political issues, matters of social and racial justice, etc., especially instead of taking effective action.

Because it is much easier to signal your virtue than it is to actually be virtuous, the people who virtue signal the loudest also tend to be the ones who have something they’re trying to cover up. This hypocrisy is a big part of what makes virtue signaling so obnoxious.

Time for me to spill a little tea. A couple of years ago, after I wrote “Christopher Columbus: Wildcatter,” I got an acceptance from the editor of Interzone. It wasn’t formalized yet, but he expressed over email that he was interested in purchasing the publishing rights for that story, the sequel, and possibly others after. It got far enough along that we were going back and forth on editorial details, our vision for the stories, etc.

Then the time came for him to send me a contract. Aaand… he ghosted me. Flat out ghosted me. A month went by without any correspondence at all. I didn’t want to seem too forward, but I also was starting to get a little concerned. So I sent out a brief follow-up email, asking about the contract… and I got a response that read like something copy-pasted from a form rejection.

Now, as far as literary transgressions go, that’s kind of tame. It’s not like the editor owed me money and refused to pay. And as far as I know, Interzone is prompt with all of their payments and pays all of their authors in full. After all, everyone deserves the benefit of the doubt.

But that sort of unprofessionalism really wasn’t cool, either. In fact, it was enough that I stopped sending Interzone any submissions. After all, if the editor saw nothing wrong with yanking my chain around before he published me, that’s kind of a yellow flag. Not to mention that it left a very sour taste in my mouth.

So when I saw this story from Jon Del Arroz, with the editor of Interzone accusing Asimov’s of using AI art and using that as a pretext to blacklist all of their authors, I immediately recognized that sort of behavior for what it is: virtue signaling. Which made me wonder: how much of the anti-AI vitriol that’s ubiquitous in online writing communities these days is really just a new form of virtue signaling?

Think about it. It explains so much about the insane anti-AI faux controversies that have been blowing up around 2025 WorldCon. For more than a decade now, the people chasing the Hugo Award have been among the worst offenders of gratuitous virtue signaling (especially Scalzi). It also explains why so much of the anti-AI content on YouTube is less about presenting well-reasoned arguments, and more about sighing dramatically or making snide, sarcastic remarks. Virtue signaling always appeals to pathos before it appeals to reason.

I expect this phenomenon is going to get a lot worse in the next few years, at least until AI-assisted art and writing become normalized (which is going to happen eventually, it’s just a matter of time and degree). So the next time you see someone publicly posting about how horrible it is for creatives to use AI, take a good, hard look at the person leveling the accusations. Chances are, they’re just virtue signaling.