Anti-AI is the new virtue signaling

According to Merriam-Webster, “virtue signaling” is:

the act or practice of conspicuously displaying one’s awareness of and attentiveness to political issues, matters of social and racial justice, etc., especially instead of taking effective action.

Because it is much easier to signal your virtue than it is to actually be virtuous, the people who virtue signal the loudest also tend to be the ones who have something they’re trying to cover up. This hypocrisy is a big part of what makes virtue signaling so obnoxious.

Time for me to spill a little tea. A couple of years ago, after I wrote “Christopher Columbus: Wildcatter,” I got an acceptance from the editor of Interzone. It wasn’t formalized yet, but he expressed over email that he was interested in purchasing the publishing rights for that story, the sequel, and possibly others after. It got far enough along that we were going back and forth on editorial details, our vision for the stories, etc.

Then the time came for him to send me a contract. Aaand… he ghosted me. Flat out ghosted me. A month went by without any correspondence at all. I didn’t want to seem too forward, but I was also starting to get a little concerned. So I sent out a brief follow-up email, asking about the contract… and I got a response that read like something copy-pasted from a form rejection.

Now, as far as literary transgressions go, that’s kind of tame. It’s not like the editor owed me money and refused to pay. And as far as I know, Interzone is prompt with all of their payments and pays all of their authors in full. After all, everyone deserves the benefit of the doubt.

But that sort of unprofessionalism really wasn’t cool, either. In fact, it was enough that I stopped sending Interzone any submissions. After all, if the editor saw nothing wrong with yanking my chain before he published me, that’s kind of a yellow flag. Not to mention that it left a very sour taste in my mouth.

So when I saw this story from Jon Del Arroz, with the editor of Interzone accusing Asimov’s of using AI art, and using that as a pretext to blacklist all of their authors, I immediately recognized that sort of behavior for what it is: virtue signaling. Which made me wonder: how much of the anti-AI vitriol that’s ubiquitous in online writing communities these days is really just a new form of virtue signaling?

Think about it. It explains so much about the insane anti-AI faux controversies that have been blowing up around the 2025 Worldcon. For more than a decade now, the people chasing the Hugo Award have been among the worst offenders when it comes to gratuitous virtue signaling (especially Scalzi). It also explains why so much of the anti-AI content on YouTube is less about presenting well-reasoned arguments, and more about sighing dramatically or making snide, sarcastic remarks. Virtue signaling always appeals to pathos before it appeals to reason.

I expect this phenomenon is going to get a lot worse in the next few years, at least until AI-assisted art and writing become normalized (which is going to happen eventually, it’s just a matter of time and degree). So the next time you see someone publicly posting about how horrible it is for creatives to use AI, take a good, hard look at the person leveling the accusations. Chances are, they’re just virtue signaling.

Thoughts on the Worldcon 2025 AI “scandal”

I’ll just come out and say it: I predict that the world’s last Worldcon will happen before 2034, and that after that, the convention (and possibly the Hugo Awards themselves) will be permanently disbanded. That’s what I think will be the ultimate consequence of the latest “scandal” regarding Seattle Worldcon’s use of ChatGPT, and the anti-AI madness currently sweeping the science fiction community on Bluesky.

If you haven’t been following the “scandal,” you ought to check out Jon Del Arroz’s coverage of it. He’s definitely partisan when it comes to politics and fandom, but he’s neutral on the subject of AI, or as neutral as you’re going to find, especially in writerly circles.

But here’s the TL;DW: the people organizing Worldcon 2025 in Seattle decided to use ChatGPT to help them decide which authors and panelists to put on which panels. This triggered a bunch of authors and panelists who are opposed to generative AI, simply on principle. Some of these authors—including Jeff VanderMeer, who is up for a Hugo award—have bowed out, while others have called for resignations and apologies. Many of the volunteer staff have also stepped down, exacerbating the staffing shortage—which is why the convention relied on ChatGPT in the first place. And apparently over on Bluesky, the scandal is taking on a life of its own, with everyone working themselves up to a massive frenzy over the subject.

My own opinion of the “scandal” is this: it isn’t a freaking scandal! Whatever your opinion on AI-assisted writing, using ChatGPT as an aid to research panelists is totally above-board and a legitimate use of AI. To disagree with that is to say that there is no ethical use-case for generative AI whatsoever, which is hypocritical and absurd—unless, of course, you’re still writing your books on a manual typewriter and submitting them to your publisher via the US postal service. Or using WordStar, if your name is G.R.R. Martin and you’re the last person on earth who “writes” with that defunct software (putting “writes” in quotation marks, since we all know by now that Martin isn’t actually writing anything).

But it isn’t the “scandal” itself that interests me, so much as what the fallout will likely be. Ever since the Sad Puppies debacle in 2015 (and arguably long before that), Worldcon has been dominated by the wokest fringe of SF&F fandom, and it’s been an open secret that the Hugo awards themselves are controlled by the publishers, largely for marketing purposes.

So at this point, the only things really keeping the whole Worldcon/Hugo charade going are 1) woke authors who use the convention to manufacture clout for their failing careers, because they wouldn’t otherwise have a platform, and 2) woke publishers who use the awards to manufacture clout for their poorly-selling books, because they don’t actually know how to market books effectively (at least, not to readers—libraries are a whole other subject deserving of its own discussion, because there is a genuine scandal there). Once those two things dry up, and all of the ruin has been exhausted from these institutions (i.e. Worldcon and the Hugos), I really do think they will collapse and go away.

That’s what I find so fascinating about this scandal: it is so utterly toxic and absurd on its face that it’s going to do permanent damage to Worldcon and the Hugos. The writers of the rising generation who will one day dominate the field are all playing around with these AI tools right now, and doing really interesting things with them. Meanwhile, most of the authors who are screaming about AI on Bluesky right now will either be dead or irrelevant (or both) in the next 20 years. And yes, Mike Glyer, you can quote me on that.

Seriously, though: if the Worldcon community is so vociferously opposed to a legitimate use-case of ChatGPT—namely, to alleviate the already overwhelming burdens being carried by the volunteer staff—AND they continue to be absolutely toxic about it online… who in their right mind would want to be a part of that community? And since the only thing keeping the whole charade going is its ability to manufacture clout, that’s why I think its years are numbered—and likely in the single digits.

On the plus side, if/when the Hugos finally die, I won’t have to read any more crappy woke books to be able to say I’ve read (or DNFed) every Hugo award-winning novel.