This one was a really great read. Check it out!
Our world makes a lot more sense…
…when you realize that the internet is a factory for creating cults, and that social media and smart devices are force multipliers for this effect.
Before the internet, your “community” was a geographically bound group of people, diverse enough (that’s “diverse” with a lower-case d) to give you an interesting variety of perspectives and worldviews. You also typically interacted with each other in person. If you said or did something extremely embarrassing, it typically didn’t get beyond your immediate circle of associates, or the people you decided to tell about it.
The internet changed everything by turning “community” into something bound by interests, hobbies, perspectives, or worldviews. Now, every person with a weird and perverse fetish, who before kept it hidden because they were the only person in their community who held it, could find all the other people in the world who held the same weird and perverse fetish, and create a “community” around that thing. Same with crazy political views. Same with radical ideology.
At the same time, if you said or did something embarrassing, and it went viral, your embarrassing moment would be broadcast far beyond your immediate circle of associates, to people you had never before met—as well as to people whom you would never want to hear about it. This effect was multiplied by the development of social media, and it led people to self-censor and conform to whatever “community” they were a part of, in the fear of standing out and going viral.
At the same time, all these “communities” turned into echo chambers that warped the various members’ view of reality. And because anger and outrage are the things most likely to spread on the internet (see the video above), these echo chambers started to become paranoid and break off from the rest of the world, taking the dimmest and least charitable view of everyone who wasn’t a member of their “community.”
As these online communities came to take a more prominent place in the average person’s life than their own families and communities, the average person’s sense of identity increasingly became caught up in whatever hobby, fetish, or ideology united the “community.” And because of how paranoid these communities became, they increasingly came to demand absolute and preeminent allegiance. Is this starting to sound like a cult yet?
But it goes deeper than that, because the devices through which we connect with these “communities” actually make us more physically isolated from each other, while giving us the illusion of a genuine connection. When you’re holding up your smart device to capture a fireworks show, you’re not actually enjoying the fireworks. And when you’re lying in your bed, posting updates on your social media or chatting with your friends, you are still, in reality, lying alone in your bed. Combine that with the internet’s penchant for driving outrage, and you have the two key ingredients for a mass formation psychosis: a large group of atomized and isolated individuals suffering from free-floating anxiety.
Before the pandemic (that’s the Covid-19 pandemic of 2020, for future readers who may be wondering “which one?”), I think that we lived in a world where the majority of our countrymen—the members of our “community” in the traditional sense—were not caught up in one of these cults. Either the majority of people weren’t caught up in one of these echo chambers, or the majority of echo chambers hadn’t yet reached cult status, but people were still generally reasonable, on the whole. But with the pandemic, I think we passed through some sort of a threshold, to the point where now the best way to make sense of our world is to assume that the majority of people around you are trapped in some sort of a cult—which may literally be the case, considering the theory of mass formation psychosis.
So what does this mean for where the world is headed? Nothing good. I suppose that in an optimistic scenario, a critical mass of people manages to break themselves and their friends out of this mess, and go on to build a new society with proper safeguards in place to prevent this sort of mess from happening again. But I think it’s much more likely that this thing runs its course, and large swaths of our civilization drink the proverbial Kool-Aid.
Fortunately, there is a script that we can run, as individuals and (more importantly) as families, to get through this mess. It’s the same script that we use to get ourselves or our loved ones out of a dangerous cult. I’m not yet an expert on that script, but I know that it’s out there, because cults have been a thing for a very long time. But I’m pretty sure it involves putting your family first, getting off of social media, limiting the amount of time that you spend on your smart devices, and becoming more involved in your real “community”—the real-life one where you actually live.
1001 Parsecs Books: Last of the Breed by Louis L’Amour
Hey! Go check out my review on 1001 Parsecs Books of Last of the Breed by Louis L’Amour, which I think is one of his best books!
How I would vote now: 2022 Hugo Award (Best Novel)
The Nominees
Light from Uncommon Stars by Ryka Aoki

The Galaxy, and the Ground Within by Becky Chambers

A Master of Djinn by P. Djeli Clark

A Desolation Called Peace by Arkady Martine

She Who Became the Sun by Shelley Parker-Chan

Project Hail Mary by Andy Weir

The Actual Results
- A Desolation Called Peace by Arkady Martine
- Light from Uncommon Stars by Ryka Aoki
- A Master of Djinn by P. Djeli Clark
- The Galaxy, and the Ground Within by Becky Chambers
- She Who Became the Sun by Shelley Parker-Chan
- Project Hail Mary by Andy Weir
How I Would Have Voted
- Project Hail Mary by Andy Weir
- No Award
Explanation
Project Hail Mary was a fun read, and a really good hard SF novel. There were a couple of minor things that made me roll my eyes, but the story itself was solid, and the science was fascinating. Also, the ending really stuck with me for several days. I don’t think it was better than Hyperion or Ender’s Game, but it certainly was deserving of a positive vote for best novel.
I DNFed everything else on the ballot. Normally, that alone wouldn’t be a reason for voting No Award, but some of these books were just insanely woke: in particular, Light from Uncommon Stars was full of transgender madness (and judging from the author bio in the back, the author herself is caught up in the madness as well).
I didn’t read A Desolation Called Peace or The Galaxy, and the Ground Within because I’d already DNFed the first book in the series, mostly for the “all true love is LGBTQ love” trope (I should do a blog post dissecting that particular trope), so I can’t speak to the relative wokeness of either of those titles. But it says something that I tried and DNFed the series.
But the most infuriating read for me was She Who Became the Sun, since by all indications it should have been right up my alley, what with all the steppe nomad warriors and all. The writing was pretty good too, and the setup was fantastic. Yes, there was some gender bending stuff, but for the first half of the book I generally didn’t find it any more offensive than Mulan. I forget why I decided to skip to the last chapter, but the ending was so infuriating that it put this author solidly on my blacklist, just like The Fifth Season did for N.K. Jemisin (more on that when we get to 2016’s Hugo ballot). I can’t say much without spoiling the book, but it has to do with what many conservative and alternate media commentators rightly call the death cult. Really infuriating.
As for A Master of Djinn, having traveled across Egypt and the Middle East, the worldbuilding was so fundamentally broken that I just couldn’t swallow it. The author basically created a steampunk Middle East that embraces several tenets of modern wokism. The only alternate reality in which the main character wouldn’t be tossed off of a high building for being a lesbian is a reality where the source code of Islam has been rewritten so entirely that it isn’t really Islam anymore. Which I suppose is fine for a pulpy escapist fantasy, but this one just didn’t appeal to me.
1001 Parsecs Books: The Storm Testament IV by Lee Nelson
If you haven’t read my book blog yet, you should go check it out! I’m posting over there twice a week, with reviews and ruminations on the books I read. This particular one is on Lee Nelson’s The Storm Testament IV, which I think is the best in the series so far.
Why Nick Cave is wrong about human creativity and generative AI
First of all, I don’t think that Nick Cave is entirely wrong. Laying aside the fact that ChatGPT is just one of many publicly available LLMs, and that using it as a stand-in for all of generative AI is like saying “AOL Online” when you mean “the internet,” he does make a fair point that using generative AI as a replacement for basic human creativity is wrong.
What he doesn’t understand is that using AI this way is also counterproductive. He blithely assumes that it takes no skill or effort whatsoever to use these AI tools—that all one has to do is tell ChatGPT what to write, and it will magically produce something if not great, then at least publishable. But as someone who has written several AI-assisted novels and short stories, I can assure you that it does take effort to produce something more than merely passable. Indeed, with longer works like novels, I can assure you that our current AI models are incapable of producing even passable work without considerable human intervention.
This is why I call it AI-assisted writing, as opposed to AI writing. When you do it right, the AI tools don’t replace your inner human creativity, but augment and enhance it, making things possible that were either impossible before, or that required a prohibitive degree of struggle. Writing with AI is still a form of creativity, though it might not look exactly like previous forms. But isn’t that also true of writing on a computer vs. writing longhand? Does it take any less creativity to write a novel on Microsoft Word than it does to write it on parchment with a fountain pen?
Granted, the technological leap from word processor to generative AI is much more profound and fundamental than the leap from pen and paper to typewriter, or from typewriter to MS Word. Speaking from experience, I can say that writing a novel with ChatGPT or Sudowrite feels a lot more like directing a play with an amateur (and very stupid) actor than it feels like wrestling with the empty page, at least in the early generative stages. But it’s still, fundamentally, a creative act—and that’s the main thing that Nick Cave misses in his rant. Anyone can ask ChatGPT to write them a novel, just like anyone can bang their hands on a piano or strum their fingers across the strings of a guitar. But to produce something good—that requires effort.
However, there is an even deeper level on which Nick Cave is wrong here, and that is in the unspoken assumption that the difficulty of creating something is what gives it value. It’s the same principle that Karl Marx expounded in his labor theory of value: that the economic value of a good or service is determined by the amount of labor required to produce it, or, in this case, that a work’s creative and artistic value is determined by the effort behind it. That’s just wrong.
Do we love J.R.R. Tolkien’s Lord of the Rings because it took him several decades to write it, and largely represents the greatest product of his life’s work? Obviously not—otherwise, every amateur writer who’s been polishing and repolishing the same unfinished novel for the last twenty years must necessarily be the next Tolkien, never mind that their book reads more like The Eye of Argon than The Fellowship of the Ring.
So if it’s not the creative struggle or the amount of human effort that ultimately gives art its value, what does? The same thing that gives a product or service its economic value: the utility that it provides to the person who consumes it. In other words, the thing that gives art its value is the goodness, truth, and beauty that it brings into the lives of those who receive it.
This is especially true of writing, which is perhaps the most collaborative of all the arts. Without a reader to read it, a book is nothing more than processed and flattened wood pulp full of meaningless squiggles (even less than that for an ebook). When I read a book, I care not a whit for how much work it took for the author to come up with it. Same with the music I listen to, or the games that I play. What I care about is how it makes me think, feel, or experience the world.
And if it’s possible to bring more goodness, truth, and beauty into the world by using generative AI, so what? If it’s easier than writing a novel the old way, does that somehow mean it’s “cheating”? If the answer to that question is yes, please tell me why you don’t churn your own butter, or hunt your own food, or chop your own wood and burn it to heat your house—because all of those applications of modern technology are “cheating” in exactly the same way. Also, I hope all the books in your personal library are handmade, illuminated manuscripts, because the printing press is far more of a “cheat” than generative AI, as the last few hundred years of history clearly shows.
Nick Cave is wrong. ChatGPT is not the most “fiendish” thing “eat[ing] away at [our] creative spirit.” Our humanity is far more resilient and anti-fragile than he gives it credit for. Those who try to replace human creativity with AI will fail, not because of artists like Cave who stubbornly resist the “temptation” to use these tools, but because of those who embrace the new technology with an open mind, and discover that our humanity is not a liability, but our greatest asset—a premise that Cave ironically rejects with his fearmongering about our fundamental replaceability.
How I would vote now: 1955 Hugo Award (Best Novel)
The Nominees
They’d Rather Be Right by Mark Clifton and Frank Riley (also published as The Forever Machine)

The Actual Results
- They’d Rather Be Right by Mark Clifton and Frank Riley
How I Would Have Voted
- They’d Rather Be Right by Mark Clifton and Frank Riley
Explanation
Things worked a little differently back in 1955. This was only the second time the Hugo Awards were given out (the first was in 1953), though it was the thirteenth Worldcon. As far as I can tell, there was no formal ballot or nominating process, just the organizers of the convention getting together and deciding which winners to award.
They’d Rather Be Right was serialized in four issues of Astounding Science Fiction, before it was published as a novel by Gnome Press in 1957. Among the fans who regularly attend Worldcon, it is largely panned as the worst book to ever win a Hugo Award. For that reason, it is very difficult to find a copy (I was fortunate enough to find a used copy on Amazon that a small-town library in California happened to be selling, but I had to keep an eye out for a couple of months).
But does the book really merit the distinction of being the worst? Personally, I don’t think so. Don’t get me wrong—it’s nowhere near the caliber of Dune, Hyperion, or Ender’s Game, but it does tell a fun story with an interesting sci-fi premise and some entertaining twists. It wasn’t the greatest book I’ve ever read, but I did genuinely enjoy it.
So why does this book get panned so hard? Probably because of its underlying message, which is that 1) Malthus was wrong, 2) Freud was wrong, 3) most self-styled scientists are actually charlatans and quacks, and 4) the best way to safeguard a new technology from evil and conspiring men is to make it open source, even if that technology grants the user god-like powers.
In short, this book gives a glorious middle finger to would-be authoritarian statists everywhere. For that reason, it will always have a special place in my heart. If the 1955 Hugos were held today, I would happily vote They’d Rather Be Right for best novel.
2024 Predictions for the Publishing Industry
I started working on this post over the Christmas break, but then things got so busy that I never got around to writing anything more than the section headings. The year is still young, though, so I figured it was worth posting it anyway, even if only as a list of bullet points.
- The courts will side against authors and publishers, in favor of OpenAI and generative artificial intelligence.
- Amazon will use self-published content to create an LLM or other generative artificial intelligence.
- We will not see an AI-assisted novel break out and become a bestseller this year…
- …but we will see generative AI used to power a new book recommendation engine that will outperform everything currently out there.
- Censorship and book banning will accelerate and become more flagrant.
- The gap between bestsellers and midlisters will grow.
- Book sales overall will decline, unless a new pandemic is declared.
- A surprising number of authors will find success with their online stores, though we probably won’t hear about that.
- The long, slow decline of Amazon’s prominence in the book industry will become a talking point.
- By the end of the year, AI-assisted stories will garner public interest as more than just a novelty.
How Not To Write An AI-Assisted Novel
The worst way to write a novel with generative AI is to make the AI do all the work.
In fact, thinking of it in terms of “how much of the work can I get the AI to do?” is pretty much guaranteed to give you a really crappy book by the end of it. The AI’s job isn’t to “do the work,” any more than a power tool’s job is to build a house. You do the work. AI is just a tool to multiply your efforts.
But let’s take a step back. Who am I to talk about all of this? My name is Joe Vasicek, and I’m an indie author who’s been writing and publishing regularly since 2011. At this point, I have several dozen novels under my belt, including about half a dozen AI-assisted novels, the first of which is published under my Joe Vasicek pen name here on this blog. Also, my wife is a PhD student and research assistant who works with generative AI and large language models. Her thesis is on using generative AI to create interactive cross references for any body of text, customized to the user. We talk a lot about generative AI and share what we’ve learned, so we’re both fairly knowledgeable on the subject.
At this point, it’s still very much the wild west of writing with AI assistance. The technology is new enough that there really are no experts on the subject, though I expect that will change rapidly over the next few years. And while I can’t (yet) say that I’ve made gazillions of $$$$ from my AI writing methods, I can say that I’m one of the first professional writers to develop a method for writing with AI assistance.
And that’s not a boast. Whenever I get together with other writers, I wish there were more of them (really, any of them) that I could talk with about this stuff. There are some online communities that come at it more from the AI side than the professional writing side, and I probably ought to spend more time in those, because it’s probably only a matter of time before one of them has a runaway bestseller and shakes up the publishing industry in the same way that Amanda Hocking shook things up when the indie publishing revolution was just getting underway.
Maybe that someone will be you. Who knows? We’re still very much in the wild west of AI writing, and probably will be for a while.
It’s that very loneliness that makes me want to blog about AI-assisted writing—that, and the fact that I’m still trying to figure it out for myself, so I would love to hear what’s working for other writers. But one thing that I’ve learned from my own experience is that the worst way to write an AI-assisted novel is to dump all the work on the AI and expect anything good to come out.
The main reason for this is that LLMs and generative AI do not think—at least, not in any meaningful way that’s similar to the way you and I think. Instead, these models analyze human language for patterns, and replicate those patterns according to the parameters and instructions given by the user. It’s much closer to how your phone predicts your next word when you go to write a text, except that instead of predicting the next word, ChatGPT or Sudowrite or whatever LLM you happen to be using is predicting the next 5-10 paragraphs.
So really, it’s not very useful to think of an AI as being able to “write” anything. Instead, it’s much more useful to think of it as “simulating” the thing that you’ve told it to write, or producing a simulation of the kind of work that a human would produce, given your parameters and instructions. The AI isn’t “doing the work” for you, it’s merely simulating the end product of that work. You still have to make it your own.
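If you want a feel for the “predict what comes next” idea, here’s a toy sketch in Python. It is emphatically not how a real LLM works under the hood (those use neural networks over tokens, not word counts), but it shows the same basic principle at the smallest possible scale: tally which word tends to follow which, then predict the most frequent follower, just like a phone keyboard.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then always predict the most frequent follower. A real LLM
# does something vastly more sophisticated, but the core task -- 
# "given what came before, predict what comes next" -- is the same.

corpus = "the ship sailed on and the ship sank and the crew swam".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "ship" follows "the" twice, "crew" only once
```

Scale the “corpus” up to most of the written internet, replace word counts with a transformer, and predict tokens instead of whole words, and you have the rough shape of what ChatGPT is doing when it “writes” your next scene.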
And how do you make it your own? Personally, I’ve found that the best way to do that is to open up a new document on my second monitor and type it all out by hand, occasionally referring to the AI-generated text when I don’t know what to write next, but largely trusting in myself to create the real, non-simulated draft. No copy-pasting! The mental exercise of writing it all out, word for word, stimulates something in the creative mind, and in most cases I end up writing something completely different, using the simulated version of the novel merely as a stepping stone.
So why do I go through all the trouble of generating a whole novel, when I’m probably going to throw out most of that text anyway? That’s a very good question—so good, in fact, that it needs to be the subject of its own post.
Thoughts on the 2023 Hugo Awards
This video gives a pretty good recap of the endless fountain of scandals surrounding the 2023 Chengdu Worldcon and Hugo Awards. Larry Correia also gives an interesting take on it on his blog, and in his writing podcast.
My initial thoughts:
- Schadenfreude is one hell of a drug.
- Accusation = confession = projection, no exceptions.
- This scandal vindicates the Sad Puppies 110%. Remember how they called us the racists? How they said we were the ones manipulating the system? …yeah.
- Wow, schadenfreude is one hell of a drug.
Laying aside all of the knee-jerk internet outrage (and schadenfreude), though, I do find it tragic that there doesn’t seem to be a way to recognize excellence in the SF&F genres that isn’t totally given over to in-group politics and petty fannish controversies. At the end of the day, I think that’s really what the whole Sad/Rabid Puppies affair was about: a small and exclusive group of insiders (aka “true fans”) refusing to give any space to outsiders who also wanted to be part of the awards process. The fact that most of these outsiders happened to be politically conservative was incidental; we might as well have been Chinese, for how the in-group treated us.
With all of that said, though, I don’t necessarily think that the best solution is to burn the Hugos to the ground. For all the scandals, and how terribly woke the Hugos have swung in the last few years, the system itself is still pretty good. I mean, can you imagine how different things would be if our national elections were decided by ranked-choice voting, with “none of the above” as an available option? As much as I have a problem with the people who organize and run the Hugo Awards—the people who are rightly being slammed for arbitrarily discounting hundreds of Chinese ballots and arbitrarily disqualifying several titles from the final ballot—the system itself is actually a pretty good one.
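For the curious, the core of that ranked-choice (instant-runoff) counting can be sketched in a few lines of Python. This is a simplified illustration only: the real WSFS rules add tie-breaking procedures and a final head-to-head test against No Award, none of which are modeled here. In this sketch, “No Award” is simply another candidate that voters can rank.

```python
def instant_runoff(ballots):
    """Simplified instant-runoff count.

    Each ballot is a list of candidates ranked best-first; "No Award"
    is treated as just another candidate. Each round, every ballot
    counts for its highest-ranked candidate still in the running; if
    no one holds a strict majority, the last-place candidate is
    eliminated and the count repeats. (Sketch only -- the actual WSFS
    rules add tie-breaks and a final No Award runoff.)
    """
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tally = {c: 0 for c in candidates}
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tally[choice] += 1
                    break  # only the top surviving choice counts
        total = sum(tally.values())
        leader = max(tally, key=tally.get)
        if tally[leader] * 2 > total:  # strict majority
            return leader
        candidates.remove(min(tally, key=tally.get))

# Five hypothetical ballots: "A" trails "C" on first preferences but
# wins once "B" is eliminated and B's voter transfers to "A".
print(instant_runoff([["A", "B"], ["A", "B"], ["B", "A"],
                      ["C", "B"], ["C", "B"]]))
```

The elegance of the system is visible even in this toy version: a candidate with a passionate plurality can still lose to one with broader second-choice support, and No Award wins whenever a majority of voters rank it above everything left standing.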
A couple of years ago, I read every Hugo and Nebula award-winning book. It was an enlightening exercise, to say the least. Since then, I’ve dabbled with doing something similar with other awards, like Goodreads Choice, but I’ve never really taken the plunge, since most of these other awards are either too young to give a comprehensive overview of the genre, or too narrow and cliquish. Many of them are thinly-veiled popularity contests, where the author with the most rabid fanbase wins.
Is it possible to have an award that recognizes true excellence without devolving into a thinly-veiled popularity contest on the one hand, or being taken over by a small and snobbish group of elites on the other? I can see how these sorts of worries might have driven many of the concerns about “slate voting” during the Sad Puppies controversy in 2015. Unfortunately, they clearly took it to the opposite extreme, turning the Hugos into their own exclusive club, un-personing conservative and Chinese fans alike.
In the end, it probably comes down to who we are as a people more than what systems have been put in place. If fandom really was the kind of place where people could come together over their shared love of science fiction and fantasy, regardless of politics, religion, nationality, or anything else, then perhaps the Hugos actually would be a marker of excellence, and not just identification with a very small (and snobbish) in-group. And I do think there have been times in the past where that has been the case.
So, in a funny way, this whole controversy around the 2023 Hugo Awards actually makes me want to go back and read a bunch of the older Hugo-nominated books from previous years, to see how I would have voted (and how my own vote differs from the votes that were actually cast). I think it could be a useful exercise, not so much in determining how useful or authoritative the Hugos ought to be (I figured that out a couple of years ago), but in determining my own reading tastes, and how they may or may not have fit in with previous generations of Hugo-award voting fans.
One of the most difficult things I’ve recently had to wrestle with is the realization that my own tastes and values run almost completely contrary to the culture in which I live. I don’t think the Hugo Awards have ever represented mainstream culture, but it is still an interesting bellwether of a subculture that I love, plagued as it might be by in-group politics and petty infighting. And I do think there were periods where my own tastes and values aligned pretty well. I’m curious to see which periods those are.
All of this is to say that I’ve been going through all of the old Hugo nominations for Best Novel, reading through them to see how I would have voted for each year. I’m mostly just doing it for myself, but I may post it here if you guys are interested, since it probably would make for some good blogging content.