Thoughts on the Charlie Kirk assassination

I heard the news shortly after dropping off my daughter at BYU kindergarten. The shooting apparently happened while we were on the road. Utah Valley University is only a couple of miles from our house, and the hospital where he died is only a mile from us.

I saw the videos of the assassination, including the now-censored one that showed it up close. I also saw the videos of the alleged shooter being hauled off in police custody, though now it appears that the University is saying that he wasn’t the shooter. This is such a fast-moving story that we probably won’t know exactly what happened until at least 24 hours from now, and there may be some things that we never know. And since I wasn’t there when it happened, I can’t comment on the shooting itself.

I just have to say, this is not who we are here in Utah. The shooter may turn out to be a Utah man, but that is not who we are—the rest of us. And I don’t just mean right vs. left, conservative vs. liberal. Most of us here in Utah swing MAGA (in fact, I’ve got a couple of neighbors who are still proudly flying their Trump flags), but we’ve also got some neighbors with rainbow flags and decals, and I’m sure that the vast majority of them are just as horrified that this assassination happened in our community. In fact, they’re probably afraid of how the rest of us will react.

My thoughts and prayers go out to Charlie Kirk’s family. I can’t imagine how horrible that must be, not just to lose your husband and father, but to have the footage of his violent death plastered all over the internet. I hope that more good than evil ultimately comes of this national tragedy, and that Charlie Kirk’s work will live on for many years to come.

Fantasy from A to Z: Z is for Zeitgeist

What is the future of fantasy literature? Where is the genre headed, based on current cultural trends?

For a long time, epic fantasy was basically Tolkien-light. There were exceptions, of course, but most readers wanted something that felt a lot like Lord of the Rings, and the most successful writers were the ones who gave it to them. There was a little bit of innovation, probably culminating in Robert Jordan’s Wheel of Time series, but if you picked up a random epic fantasy off the shelf, you could have a pretty good idea of what you were getting into.

Then, in the 90s and 00s, fantasy started to get dark and gritty, with writers like Joe Abercrombie and George R.R. Martin setting the tone. This new subgenre or flavor of fantasy, called grimdark, really came to dominate during this time, to the point where some were calling Martin an “American Tolkien” (though all that talk more or less died with the terrible finale of the show). Grimdark is still quite dominant, though an increasing number of readers are turning to “cozy” fantasy or slice-of-life in subgenres like litRPG. And of course, romantasy is taking off like crazy, though as we’ve already discussed, most romantasy is basically just porn.

So where are we going from here?

Our culture tends to pass through a cycle of seasonal turnings, where each season is the length of a generation, and the cycle itself is the length of a long human life. Reduced to its simplest form, the cycle follows a pattern like this:

Strong men create good times (first turning).

Good times create weak men (second turning).

Weak men create hard times (third turning).

Hard times create strong men (fourth turning).

We are currently living in a fourth turning, which is the period when all of the major wars and catastrophes tend to happen. In other words, the fourth turning is basically a grimdark world—or rather, when the full consequences of a grimdark world become manifest. But the grimdark subgenre really took off in the third turning, when dark and grim fantasy worlds resonated with the “hard times” that we all were starting to live through. This is also why dystopian YA became so popular in the 90s and 00s.

(As a side note, I have to say that I find it both perplexing and hilarious how so many zoomers think of the 90s as a simple and wholesome time, to the point where they claim to feel genuine nostalgia for it. Those of us who lived through the 90s remember it very differently, as an era of school shootings, political scandals, collapsing churches, teenage pregnancies, and ever-escalating culture wars. There’s a reason why Smells Like Teen Spirit was the decade’s anthem. Though in all fairness, I suppose that if someone from the Middle Ages were to visit our own time, they would find the nostalgic yearning on which the whole fantasy genre is based just as perplexing and hilarious.)

I believe we are on the cusp of a major cultural wave that is going to change everything, to the point of making our world almost unrecognizable to those who lived through the 90s and 00s. And just as the grimdark authors like Martin and Abercrombie rose to prominence by riding the wave in their part of the generational cycle, there are a lot of noblebright authors who stand to benefit from riding this next wave, which is only now beginning to break.

After all, there is another way to formulate the generational cycles. It looks something like this:

Complacent men create a spiritually dead culture (first turning).

A spiritually dead culture creates awakened men (second turning).

Awakened men create a spiritually vibrant culture (third turning).

A spiritually vibrant culture creates complacent men (fourth turning).

In the summer of 2024, I think we passed through a critical fork in the current timeline. If the generational cycle had followed its usual course, then our current crisis period would have ended with a period of unification under a new order, based upon the spiritual foundations that were laid during the 60s and 70s. In other words, the woke left would have won, and we’d be living under the sort of regime that would enforce woke values. Dissent would not be tolerated, because dissent is never tolerated in a first-turning world.

The second most likely outcome would have been a complete shattering of the generational cycles. In other words, we would have fallen into some sort of national divorce or hot civil war, with the United States splitting apart and the Western world completing its cultural suicide, which has been ongoing for several decades now. No cultural rift of that magnitude has ever been resolved by peaceful means. It is always accompanied by a terrible, bloody war.

But when President Trump survived the assassin’s bullet at the rally in Butler, Pennsylvania, that’s the point where I think our timeline diverged—and it followed the least likely path, which has only ever happened once in the history of modern generational cycles. We jumped from a fourth turning straight into a second turning, going directly from crisis to revival.

The last time this happened was with the US Civil War. Usually, after a culture survives an existential crisis, you get a period of national unity, which often results in a brief golden age (or at least, an age that is remembered as such, often by those who did not live through it). But after the Civil War, there was no national unity. Instead, we skipped right to the second turning, which is typically characterized by a major spiritual awakening.

Whatever your opinions of President Trump, the fact that he survived the assassination attempt in Pennsylvania and went on to win the 2024 election in a landslide means that we have (for the moment) avoided the first two scenarios. At this point, it’s difficult to imagine the woke left taking back the culture and leading us into a first-turning world in their own image. And though the US may yet fall into a hot civil war, from where I’m standing in flyover country that no longer seems quite so imminent.

Don’t get me wrong, though. We are not about to enter a period of national unity anytime soon. Certainly not a period of national unity whose foundations were laid by the previous spiritual awakening, which is what the generational cycle requires. At the same time, because President Trump survived the Butler assassination attempt (thank God), I think we avoided a hot civil war.

Because of all this, I think that we are about to experience a major cultural upheaval, the likes of which have never been seen in living memory. We will not get a period of unification. We will not experience a golden age period of material prosperity (though there may be a few years of plenty before the years of famine begin in earnest). But we will experience a cultural and spiritual revival that will burn through our culture until it has utterly demolished the woke worldview and values laid down during the 60s and 70s, and built something entirely new in its place.

What will that look like? And how will it affect the trajectory of fantasy literature?

Culturally, it will be a period of incredible dynamism. We will see an explosion of creative expression in every field, including in literature. Books and movies and games that are cultural mainstays now will be totally forgotten within a couple of decades, and everything that is popular now will feel dated and out of touch in the space of just a few years.

The authors and artists who will do the most to shape this new culture are today almost completely unknown, but they will become household names in surprisingly short order. Others will take decades to become known, but they will write their most important works in just the next few years.

The country will hold together. There will be no civil war, though there may be a global war. And there will almost certainly be an economic collapse, like the Great Depression, except much deeper and much longer. But all of this will only serve to fuel the spiritual revival, and the revival in turn will fuel the cultural dynamism, until the country and ultimately the world have been entirely transformed.

In more practical terms, I think we are going to see a lot of publishing houses fold, and a lot of popular authors fall out of favor. Many of them will keep their core group of fans, but they won’t be nearly as culturally relevant moving forward. New authors will rise from unexpected places to replace them, especially as the old institutions (publishers, conventions, magazines, review sites) collapse.

Romantasy will ultimately be recognized as the pornography that it is, though not until after it’s done great damage to the fantasy genre as a whole. The damage will be healed by a return to the genre’s spiritual roots. Grimdark will fade, and noblebright will rise, though it will ultimately take a different name and be recognized for other characteristics. It all depends on which of the thousand blooming flowers get picked.

LitRPG will mature into a long-term stable subgenre, and capture most of the innovation in the field. It may spin off into multiple long-term stable subgenres. Meanwhile, epic fantasy will return to its roots and grow as the spiritual revival takes hold. But instead of getting Tolkien clones, we’re going to see a lot of original and innovative work.

That’s the zeitgeist as I see it. The next few years are going to be a wild ride. Are you up for it? I hope that I am.

Fisking 1-star reviews bashing AI

They say that authors should never respond to one-star reviews. That’s generally good advice, and for most of my career, I’ve studiously followed it. However, I’ve recently begun to get a new kind of one-star review that baffles me—reviews that essentially say: “the book was good, but it was written with AI so I hate it.”

Here’s an example:

This book is written with AI. Incredibly disappointing as a reader to give a book/author a chance and then to get to the end of the book only for the “author” to then announce the AI card. If I could give zero stars, I would for this alone. I also didn’t appreciate that this use of AI was not announced until the ending Author’s Note. If “authors” are going to cut corners and put their name to computer-generated mush, they should be willing to put that information on the front cover. The book struggled to find its pace, and some parts read as though they were written for a child’s short story competition while others felt as though the writer was snorting crushed up DVDs of Pirates of the Caribbean as they wrote.

Let’s break it down:

This book is written with AI. Incredibly disappointing as a reader to give a book/author a chance and then to get to the end of the book only for the “author” to then announce the AI card.

Yes… but I can’t help but notice that you got to the end of it. In other words, you finished the book. Also, from the way you tell it, it seems that you didn’t realize the book was written with AI until you got to the very end. So based on your own behavior, it doesn’t seem that quality was the issue.

I also didn’t appreciate that this use of AI was not announced until the ending Author’s Note. If “authors” are going to cut corners and put their name to computer-generated mush, they should be willing to put that information on the front cover.

Okay… but if my book was just “computer-generated mush,” why did you finish it? And why were you surprised when you learned that it was written with AI assistance?

I can understand the objection to books that were written solely with AI, with little to no human input. But that’s not how I write my AI-assisted books. Instead, I outline them thoroughly beforehand, write and refine a series of meticulously detailed prompts (usually using Sudowrite), and generate multiple drafts, combining the best parts of them to make a passable AI draft. And then I rewrite the whole thing in my own words, using the AI draft as a loose guide with no copy-pasting.

Why would I go through so much trouble? Because the AI drafting stage gives me a bird’s-eye view of the book, allowing me to identify and fix major story issues before they metastasize and give me writer’s block. Before AI, that’s where 80% of my writer’s block came from, and it often derailed my projects for months, so that it took me well over a year to write a full-length novel. But with AI, I’m no longer so focused on the page that I lose sight of the forest for the trees. So even though generating and revising a solid AI draft adds a couple more steps to the process, it’s worth it for the time and trouble that it saves.

That’s the way I use generative AI in my writing process. But there are many other ways—and I hate to break it to you, but most authors use AI in one way or another. If an author uses Grammarly to fix their spelling and grammar, should they disclose that on the cover? If they use MS Word? What if they used a chatbot to brainstorm story ideas, but went on to write it entirely themselves? Should that also be disclosed?

The book struggled to find its pace, and some parts read as though they were written for a child’s short story competition while others felt as though the writer was snorting crushed up DVDs of Pirates of the Caribbean as they wrote.

Yes… but again, I can’t help but notice that you finished the book. And after you finished it, you were surprised to learn that it was written with AI. So with all due respect, I’m going to call BS on your objections here. I think you only decided you hated the book after you learned it was written with AI, and you came up with these objections after the fact. Whatever.

I think a lot of the people who object to AI are really just scared and angry. They claim to have principled, ethical objections to the technology, but few of them follow through to implement that principled stance into every area of their lives. After all, if you use Grammarly, Google Docs, or MS Word, you are using generative AI just as surely as I am using ChatGPT and Sudowrite. For most people, the ethical objections are just a smokescreen for their general fear of change. They’re fine with embracing the convenience the technology offers them in their own personal lives, but they insist that everyone else—including me—live according to their principles, no matter how inconvenient or difficult it may be.

As an example of that, check out this one-star review:

The arts! Whether visual, performance, or literary—my haloed experience has been the act of creating and sharing a connection to the profound or sublime. Why, then, would any artist—musician, dancer, sculptor, painter, or author—offload (abdicate) the act of creation to AI? Process versus product. Mr. Vasicek included an afterword for this volume, describing his workflow and the efficiency of collaboration with AI: a 6,624-word day! another volume completed! Mr. Vasicek obviously owns the skills to weave rich character development and scenes. Perhaps Mr. Vasicek’s AI collaboration explains why these characters, the plot, the narrative—and subsequently, the entire story— are so flat and undeveloped. Although his lead male shows some undeveloped promise, the mother’s too-oft used “dear” and “my love,” and the daughter’s clutching at her mother’s apron are cringe-inducing. Perhaps Mr. Vasicek might eschew AI-assisted writing, seeking a future of quality over quantity.

Let’s break it down:

The arts! Whether visual, performance, or literary—my haloed experience has been the act of creating and sharing a connection to the profound or sublime. Why, then, would any artist—musician, dancer, sculptor, painter, or author—offload (abdicate) the act of creation to AI?

Because for some of us, writing is more than a “haloed experience”—it’s an actual job. It’s what we do for a living. And if you want to do your best work, you need to use the best tools. We used to build houses with plaster and lath and wrought-iron nails, using hand tools and locally-sourced lumber. But today, you’d be a fool not to use power tools and materials sourced from a building supply store, or your local Home Depot. If that makes your building experience less profound or sublime, so be it.

Process versus product. Mr. Vasicek included an afterword for this volume, describing his workflow and the efficiency of collaboration with AI: a 6,624-word day! another volume completed!

I’m not gonna lie: there is a certain degree of tension between art-as-product and art-for-art’s-sake. But the two are not mutually exclusive. A house can still be a beautiful work of art without taking as long as a cathedral to build. Likewise, a book can still be a beautiful work of art without taking as long as Tolkien’s Lord of the Rings.

Again, you’re trying to pigeonhole me into your “haloed” idea of what a “true artist” should be, which would make it absolutely impossible for me to make a living at this craft. If all of us writers followed that path, there are a lot of wonderful books that would never get written. And I doubt that the overall quality of the books that do get written would rise.

Mr. Vasicek obviously owns the skills to weave rich character development and scenes.

Now we get to the interesting part. I checked this reviewer’s history, and this was the only review they’ve written for any of my books. Therefore, I can only assume that this is the only book of mine that they’ve read. But if that’s the case, how do they know that I have “the skills to weave rich character development and scenes”? If the book I wrote with AI was pure trash, why would they say that I obviously have some skill?

Once again, we’ve got a case of “I enjoyed this book, but it’s written with AI so I hate it.” In other words, it’s not the book itself that you hate, so much as the way I wrote it. You object to the idea of authors using AI, not to what they actually write with AI.

Perhaps Mr. Vasicek’s AI collaboration explains why these characters, the plot, the narrative—and subsequently, the entire story— are so flat and undeveloped. Although his lead male shows some undeveloped promise, the mother’s too-oft used “dear” and “my love,” and the daughter’s clutching at her mother’s apron are cringe-inducing.

Finally, some specific and legitimate criticism. And while I do think there’s a degree of retroactively looking for faults after enjoying the book, I’m totally willing to own that these criticisms are valid. This particular book (The Widow’s Child) was one of my first AI-assisted books, and I was still learning to use these AI tools as I was writing it. I did the best I could at the time, but if I were to write it today, I could probably do a lot better, smoothing out the annoying AI-isms that you’ve pointed out here.

But the book is currently sitting at 4.4 stars on Amazon (4.1 on Goodreads). And the other readers do not share your objections. Here is another review, pulled from the same book:

Since waiting a year or more to read the next book in a sequel is hard on my stress levels, I’m liking this AI. It means talented authors like Joe Vasicek can churn out an outline faster. Then he can bring in his talented ideas, such as the content of this heart-stopping adventure of The Widow’s Child, to fill out the nitty gritty in record time.

Clearly, it’s not the case that all (or even most) readers feel the same way about AI as you do.

Perhaps Mr. Vasicek might eschew AI-assisted writing, seeking a future of quality over quantity.

Why can’t we have both? Why can’t we have quantity with quality? Why can’t AI make us more creative, instead of replacing our human creativity?

This is all giving me flashbacks to the big debate between traditional vs. indie publishing, back in the early 2010s. Back then, purists argued that indie publishing would destroy literature by flooding the market with crappy books, while indies argued that removing the industry middlemen would create a more dynamic market, giving readers more choices and allowing more writers to make a living. Both were right to some degree, and both were also wrong about some things. In the end, we reached a middle ground where “hybrid publishing” became the norm.

The same kind of debate is happening right now between human-only purists and AI-assisted writers. The biggest difference is dead internet theory. In the early 2010s, the ratio of bots to humans on the internet was still low enough to allow for a lively debate. Today, there’s so much bot-driven outrage on the internet that most of us are just quietly doing our own thing and avoiding the debate.

That same bot- and algorithm-driven outrage is driving a lot of people to be irrationally angry at or afraid of AI. With that said, I can understand why so many people are upset. And I do think there are a lot of valid criticisms about this new technology, including its environmental impact, copyright considerations, how the models were trained, and the societal impact it’s already starting to have. But if we don’t have an honest and good-faith debate about these issues, we can’t solve any of them. And we can’t have a good-faith debate if one side is coming at it from a place of irrational anger or fear.

In any case, I find it super annoying when readers who clearly found some value or enjoyment in one of my books turn around and give it a one-star review merely because they don’t like how I used AI. And at the risk of going viral and soliciting more one-star anti-AI reviews, I think it’s worth voicing my views on the subject and opening that debate. So what are your thoughts? How do you feel about using AI as a tool to help write books? Can we have quantity with quality? Can AI help us to be more creative, not just more productive? What has been your experience?

Fantasy from A to Z: U is for Unicorns

If you were expecting a post on unicorns or other mythical beasts, I hate to disappoint you again, but that’s not what this is going to be. Instead, I want to write a bit about that most mythical of all human creatures: the full-time fiction writer.

Okay, perhaps we’re not that mythical. After all, Brandon Sanderson estimates that of all his students over the years, perhaps as many as 10% of the ones who set out to become full-time writers actually make that dream a reality. I sometimes wonder: would Brandon count me as one of those 10%? Should he? The answer to that is… complicated.

One of the first questions I get whenever I tell people that I’m a writer is “oh, wow—how is that working out for you?” Which is really a roundabout way of asking how much money I make, and whether I’ve been able to turn it into a full-time career. I am not (yet) a major bestselling author, and the closest thing I’ve had to a breakout thus far has been my (now unpublished) Star Wanderers novella series, which managed (mostly by accident) to hit the algorithms correctly back when a permafree first-in-series with lots of direct sequels was the best path to success. Then the publishing landscape changed, the algorithms shifted to favor pay-to-play advertising, and my books got left behind.

I will admit that if it weren’t for my wife’s income, I wouldn’t be able to pursue writing full-time. As a family, we’re following a path very similar to my Scandinavian ancestors, where the wife tends the farm while the husband goes off a-viking. In other words, my wife has the stable, traditional career that provides our family with some degree of security, while I have the more risky career that has the potential to catapult us into transformative levels of wealth and prosperity. We’re doing just fine, but it does sometimes feel like my Viking ship has yet to land ashore.

Because here’s the thing: something like 90% of the money in book publishing (after the booksellers and publishers and other middlemen take their often-exorbitant cuts) goes to less than 1% of the writers who actually make any money (and something like 30% of Kindle books never sell a single copy).

For every Brandon Sanderson, there are thousands—perhaps hundreds of thousands—of published authors who write on nights and weekends while holding down a day job to pay the bills. My writing contributes enough to the family budget to justify pursuing it, but if I were still single, I would need at least a part-time job.

Indie publishing has created a lot of opportunity for authors to make a career out of their writing, and there are many successful indies who are making a decent living at it. At the same time, indie publishing has also massively exploded the number of books that are published, so the proportion of full-time to still-aspiring authors is probably about the same (and may have actually tilted the other way). 

In recent years, it has very much turned into a zero-sum pay-to-play game, especially with advertising. From what I can tell, most authors lose money on advertising, and most of those who are making money are spending upwards of $10,000 each month to make $11,000. The elite few who learn how to successfully game the algorithms to blow up their books often put their writing on the backburner to launch their own companies or provide publishing services, leveraging their expertise to make a lot more than they otherwise would.

The algorithms are changing books in some very strange ways. If J.R.R. Tolkien or Roger Zelazny or Robert E. Howard were writing today, would they be able to make it in today’s publishing environment? 

Howard’s Conan stories would either have to be a lot sexier, or else would have to include the sort of tables and character stats you find in LitRPG. His covers would also be a lot more anime, and show a ridiculous amount of cleavage (which he actually might not have had a problem with, judging from some of the old Weird Tales covers). 

Zelazny’s Chronicles of Amber would all be far too short to make it in Kindle Unlimited—to make it in that game, you have to have super long books that max out on page reads, in order to maximize advertising ROI so that you can outbid your competitors. And if you aren’t winning the pay-to-play advertising game, your KU books will sink like rocks. Also, Zelazny took way too much time between books. Gotta work on that rapid release strategy, Roger.

As for Tolkien… hoo boy, there’s an author who did everything wrong. Decades and decades spent polishing his magnum opus, with a short prequel novel that falls squarely in the children’s category (totally different genre) as the only other fantasy book published in his lifetime. I suppose he could have serialized Lord of the Rings, except nothing really happened in episode 1: A Long-Expected Party. Certainly not anything that would adequately foreshadow all the dark and epic battles to come. Perhaps if he had followed a first-in-series permafree strategy and just given away Fellowship of the Ring for free… and then made The Hobbit his reader magnet for signing up for his email list… maybe that could have worked? After all, there’s always BookBub…

I jest, of course. Each of these authors’ books became classics, not because of their marketing strategy, but because they hit the cultural zeitgeist in exactly the right way. But is it possible for an author to do that today without also getting a boost from the algorithms? Or do the algorithms have more power to shape our culture than anything else? Those are disturbing questions, and I honestly do not know the answer.

And then there’s the question of AI, which is massively disrupting all of the creative fields. In the interest of full disclosure, I am actually quite sanguine about generative AI, and have already been working to incorporate it into my creative process. I’m not a fan of AI slop, but I don’t feel particularly threatened by it. I decided a long time ago that if AI ever became good enough to write an entertaining book, it still would never be able to write a Joe Vasicek book. That’s insulated me from most of the doom porn out there.

Right now, there is a HUGE fight happening between authors like me who are embracing AI, and authors who treat it all as anathema, and have vowed to never use any sort of AI in any of their books (except Grammarly, of course, because… reasons. And Microsoft Word. And…) Frankly, it reminds me of the big debate between indie and traditionally published authors, back before self-publishing had lost its stigma. The biggest difference is that the level of online outrage has been ramped up to 11, mostly as a result of the social media algorithms (which weren’t as robust or as powerful back in the early 2010s). I suspect that we will ultimately settle on a “hybrid” approach, much like we did with publishing, but the sheer level of vitriol has made me wonder about that. 

On the reader end of things, though, it seems like most readers don’t really care if a book was written with or without AI assistance, so long as it’s actually a good book. Which means that there is a real opportunity for authors who 1) know how to tell great stories, 2) have already found and honed their voice, and 3) know how to strike the right balance between the AI and the human elements. 

Which describes my own position almost perfectly. Over the last fifteen years, I’ve read, written, and published enough books that I have a pretty good handle on what makes a great story. I’ve also honed my voice well enough that I can write in it quite comfortably. And as for the balance between AI and human writing, I’ve been working hard on that since ChatGPT burst onto the scene in 2022. Half a dozen books and about a million words later, I’ve learned quite a lot about how to best strike that balance.

Will AI replace authors entirely, making this particular unicorn extinct? I don’t think so. But AI may radically change our concept of what “books,” or “writers,” or “writing” really are. Even so, no AI will ever be able to write a Joe Vasicek book. Only I can do that. Whether or not that’s worth something is up to the readers to decide.

The key to understanding the Middle East (and possibly the world)

I just finished Douglas Murray’s latest book, On Democracies and Death Cults, and wow, is it incredible. Difficult to read, simply because of the grim nature of the subject, but very powerful and very timely.

My own thinking on Israel and the Middle East has changed a lot since the October 7th attacks. For the record, I studied Middle Eastern Studies and Arabic in college in the 00s, and traveled throughout Jordan, Egypt, Israel, and Palestine / Judea & Samaria while I was pursuing my degree. I’ve kept up with geopolitical developments over the years, including during the Arab Spring, and have helped some of my Arab friends navigate those developments.

The apocryphal Churchill quote that “if you’re not a liberal by your 20s, you have no heart, but if you’re not a conservative by your 50s, you have no brain” very much describes my own experience. I used to be very sympathetic toward the Palestinians, but after the October 7th attacks, my position has shifted almost 180 degrees.

The thing about the Middle East is that even though it’s complex, it’s not really that complicated. Within the Middle East, there are basically three kinds of people:

  • the Jews,
  • the people who want to kill the Jews, and
  • the people who really don’t care.

This dynamic has defined the politics of the region since at least the Babylonian sack of Jerusalem in the sixth century BC, and possibly much longer. Possibly, in fact, since the very first Hebrews migrated to the region during the Bronze Age Collapse.

(As a side note, there has been a continuous Jewish presence in the Levant since our first historical records of the Jews. In other words, this is the one place in the world where the Jews are indigenous. Therefore, anyone who argues that the Jewish State of Israel is a “colonist” state is, in effect, arguing for the extermination of the Jews, because there is no other place in the world where the Jews can live and not be considered colonists. At the very least, they are laying the foundation for the ideological position that the Jews should always and everywhere be treated as subhuman.)

With the above dynamic in mind, there are only two configurations that possess any sort of inherent stability. The first is that the Jews are the people in charge of the region AND constitute the majority of the population. That way, even if all of the non-Jews fall into the kill-the-Jews camp, they are still not powerful enough to carry out their plans.

This was the state of affairs from the days of Ezra and Nehemiah basically to the Roman siege of Jerusalem. Following the Babylonian exile, the Jews returned to their homeland under the (mostly) benevolent rule of the Persian kings Cyrus and Darius, who allowed them to rebuild the temple that the Babylonians had destroyed. When Alexander took over the region and the Greeks began to Hellenize it, the Maccabees and other Jewish rulers still managed to hold their own.

But all of that changed when the Romans destroyed Jerusalem in 70 AD. They put down the Jewish revolt with utter ruthlessness, making a desert and calling it peace. They drove the main body of the Jews out of their ancestral homeland, making sure it would never be such a hotbed of rebellion again. They also renamed the region “Palestine,” after the ancestral enemies of the Jews, the Philistines. The name “Palestine” was originally an insult to the conquered Jewish people, just like the name “Britain” (ie “land of the painted people”) was originally an insult to the conquered Celts. And just like the British came to own the term, the Jews also came to own the term “Palestine” until it was appropriated from them by the Levantine Arabs who wanted to kill all the Jews.

From 70 AD until the early 20th century, the Jews were a minority in their own homeland. And so long as their numbers didn’t get too large, things were relatively stable. Sure, there were plenty of people who still wanted to kill them all, but so long as the Jews mostly stayed out of sight, most of the non-Jews frankly didn’t care. It was only when their numbers began to grow that the I-don’t-care faction bled into the kill-them-all faction, leading to pogroms and mass rapes and all sorts of insane atrocities.

But then, in the 19th century, the Jews began to migrate back to the region in large numbers. This led to an inherently unstable configuration which persists to this day, where the Jews and non-Jews are roughly equal in number. The Jews formed the State of Israel with help from their Western patrons, who provided a degree of metastability. But the situation is not long-term stable, and hasn’t been for the last 150 years.

The Americans tried to solve this problem by bringing together the Jews and the people who want to kill the Jews—as if they could ever make peace. This was incredibly naive. So long as there are Jews, there will be people who want to kill them. Individuals may be persuaded to change their positions, but the ideologies of antisemitism are as persistent as the Jewish people themselves. The death cult will never be satisfied until all of the Jews are dead.

What October 7th showed us is that the three-way dynamic of the region is still very much in play, and that the kill-the-Jews faction is still far too strong. And given the way things are changing here in the United States, I suspect that the Jews have, at best, another generation before their Western patrons become unreliable, and the metastable nature of the current configuration begins to deteriorate.

The Abraham Accords are changing things in a very positive way. For once, instead of trying to get the Jews to make a deal with the people who want to kill them, we are moving away from that silly nonsense and cutting those people out of the equation by making a deal with everyone else (like we should have done in the beginning). And with the way that Iran was utterly defeated in the latest war, it looks like that might actually work. But even then I don’t think the situation is going to be long-term stable unless it ultimately leads to a mass resettlement of the Palestinians, because that’s the only thing (aside from the senseless massacre of millions of Israeli Jews) that puts us into a stable configuration.

I think the Israelis know this. And I think that Israel is going to get a lot more aggressive in the coming years, much to the consternation and perplexity of their friends here in the West who do not understand this three-way dynamic (or who think that the key to peace is for the Jews to play nice and not fight back, so that most of the non-Jews fall into the I-don’t-care camp).

Because here’s the thing that almost no one is talking about: the impetus for the October 7th massacre was the transportation of several red heifers to Israel from a ranch in Texas. In order to build the third Jewish temple, the ground of the Temple Mount (where the Dome of the Rock and the Al-Aqsa Mosque currently stand) needs to be ritually cleansed with the ashes of a pure red heifer. The reason Hamas called their operation the “Al-Aqsa Flood” was to appeal to their Muslim brothers to defend the Temple Mount.

From what I understand, most Jews do not currently want to rebuild the temple, and the State of Israel itself has taken strong measures to suppress those who do. But every time the Jews have had a commanding presence in their own ancestral homeland, they have built or maintained a temple on the Temple Mount. So once they feel they’re strong enough, they will probably do it again. And when that happens (or as it is beginning to happen, perhaps even now), I think that this three-way dynamic will become much more of a global phenomenon.

The dangers of relying too much on AI

I saw this really interesting video last week, and it made me think: am I relying too much on AI?

In my personal life, this probably isn’t an issue. I do occasionally ask ChatGPT to make me a recipe, or to advise me on a particular topic, but I always do a gut check and assume that it’s hallucinating if it doesn’t pass. If it gives me something that I can quickly and easily verify, I always do that… and half of the time, it turns out to be a hallucination to some degree. So yeah, I don’t rely on it nearly as much in my personal life as some of the characters in this video.

What about blogging? Don’t be too scandalized, but with my new blogging schedule, I have experimented a bit with using ChatGPT to write some of these blog posts. It’s not like I’ve been copy-pasting everything straight from the chatbot, but I have relied on it a little more heavily than I do in my own writing.

After trying that a couple of times, though, I decided to cut that out and write all of these blog posts by hand. Why? Because I felt like it was creating too much distance between myself and the people who read this blog, and the purpose for writing this blog is to foster a human connection. So it kind of defeats the purpose to rely on a chatbot to generate most of the content I post here. For that reason, I plan to keep writing all these blog posts entirely myself, with only minimal AI input.

So what about my fiction? This is where things get a little tricky. While I totally agree that simply copy-pasting from AI is a piss-poor way to write a book, I do think that AI can be a very useful tool in writing and crafting a novel, provided that you understand the limitations of the AI and don’t rely on it too much. But how much is too much? That is the question.

The biggest way that AI has enhanced my own writing is in giving me a bird’s-eye view of the story as I generate a “crappy first draft.” This bird’s-eye view allows me to see and fix major story issues before they metastasize and give me writer’s block, which is what tends to happen if I write these drafts out entirely by hand. When I’m focused on the page, I tend to lose sight of the forest for the trees, so I don’t notice that there’s a problem with the story until I’m several chapters in and find that I just can’t write.

This has happened with basically every project that I write on my own, and is the main reason why it took me anywhere from six to eighteen months (or longer) to write even a short novel, before I started using AI. However, since I began incorporating AI into my writing process, this problem has basically gone away, and I no longer experience this form of writer’s block at all.

However, while I do rely on AI to help me craft my “crappy first draft,” that isn’t the draft that I publish. Once the AI draft is as good as I can make it, I go through it scene by scene and rewrite the entire book in my own words. The purpose of this step is to make sure I’m telling the story in my own voice, and to make the story my own. I still have the AI draft open on another screen, and refer to it as I write out the story, but I don’t do any copy-pasting. It’s all written out by hand.

Is this enough, though? Or do I need to add more steps to make sure that I’m not relying too much on AI, and thus losing my own voice? Recently, I’ve been spending a lot more time on the AI draft, generating multiple iterations and combining the best parts to (hopefully) boost the quality. I’ve also been doing a revision pass over the AI draft, tweaking it to smooth over some common AI-isms and (hopefully) adding a bit of my own voice before I move on to the human draft and rewrite the whole thing to make sure it’s all in my voice.

But while this might be enough to keep the book in my own words, is it enough to keep my own writing skills from atrophying? Or do I need to occasionally pick up a WIP that is 100% human writing, with no AI at all, just to make sure I don’t lose those skills? That is the question I’m currently pondering. Perhaps short stories could serve that purpose really well. Perhaps I should go back to writing short stories again, just as a way to keep my writing skills sharp.

If I were starting out right now as a new writer, I would definitely avoid writing with AI until I’d written enough to find my own voice. And I would also make sure to write at least one novel 100% without AI-assistance, just for the experience, and to prove to myself that I could do it. Otherwise, I think there would be a very real danger in becoming over-reliant on AI to write my books, and thus risk losing my own unique voice, so that none of the books that I write ever truly become my own.

Anyhow, those are some of my current thoughts on the subject. What do you think of this problem?

Will super-intelligent AI take over the world?

I’ve been reading a lot of non-fiction books about AI recently. Basically, whenever a nonfiction audiobook that has anything to do with AI comes into my audiobook library app, I jump on the waiting list and listen to it right away. I’ve also been following AI news podcasts and watching lots of YouTube channels that discuss the recent developments… and boy, is there a lot of doom porn out there.

People who are closely watching this stuff believe that AGI (Artificial General Intelligence) is imminent, ie within the next 6 to 72 months, and that when AGI gets mainstreamed, it will either usher in a golden age of post-scarcity, or the ultimate extinction of all mankind (or both, weirdly). The crux of their thesis is that once we achieve an AGI that can rewrite its own code, it will quickly turn into a superintelligence, and then it will either work to serve humanity or else work to eliminate humanity as a threat, either by outright exterminating us, or putting us into some kind of zoo.

This is all very science fictional stuff—but now more than ever, we are living in a science fictional world. So what is actually going to happen? Do I believe we are going to enter the singularity, and give birth to a new species of superintelligent AI that will ultimately replace us? Or, in the lingo of Silicon Valley, what is my P(doom)?

TL;DR: I have two P(doom) values, one of which is 0%, the other of which is 90%. My P(doom) for basically all of the scenarios that involve a runaway superintelligence is 0%, but my P(doom) for massive catastrophic social upheaval due to the disruptive nature of AI technology is 90%.

For the better part of a century (basically ever since Turing’s work during WWII), the field of artificial intelligence has followed a cyclical pattern. First, researchers make some sort of breakthrough, which leads to rapid technological advancement and a brief AI boom. During this boom, futurists and technologists rave about how the technology will keep scaling up forever until it ushers in a sci-fi utopia/dystopia and utterly changes what it means to be human. Then development stalls as researchers run up against a hard barrier that makes further scaling impossible, at which point most investors sour on the technology and we fall into an “AI winter” for a decade or two.

The problem with the futurists and technologists who promote AI technology is that the vast majority of them are transhumanists who believe that intelligence is purely an emergent phenomenon that is 100% materialistic in nature. In other words, they believe that the human mind is little more than an organic machine created through the process of evolution, and that 100% of our intelligence, emotions, spirituality, and experience can be explained and understood through purely material processes. Therefore, if they can build a machine that replicates the same biological processes as the human brain, and subject it to similar conditions that evolution subjected us to, intelligence will naturally emerge from such processes and conditions.

But what if they’re wrong? What if there are more things in heaven and in earth than are dreamed up in our modern philosophies? I’m not saying that evolution didn’t play a role in the creation/emergence of intelligence—only that it’s insufficient. And why wouldn’t it be? Science, by definition, can only explain what it can measure. And what about the questions that we can’t ask? The things about this universe that are as foreign to our own understanding as quantum physics is to a German Shepherd?

For these reasons, I do not think that these generative AI models are going to keep scaling upward until we achieve a general superintelligence. At some point in the next 0-18 months, I think that the researchers and developers are going to start hitting hard limits that we don’t understand, because of the limitations of our understanding of the human brain and how our own intelligence emerged or was created.

I am extremely skeptical of all the doom porn floating around out there: that we are months away from achieving AGI, and that a superintelligence will shortly thereafter replace us as the dominant species on this planet. For one thing, the goalposts for AGI are constantly moving—by the standards of two or three decades ago, we have already achieved it—and for another, the transhumanists have turned this concept of AGI into a sort of Messianic savior / world-ending destroyer. And I just don’t buy into that religion.

So if I’m right, all of this doom porn about a world-ending superintelligence is utterly misguided. Which, on a certain level, is somewhat comforting. But on the other hand, that also means that we shouldn’t expect AI to save us—and that anyone who tries to tell us otherwise is ultimately trying to sell us something.

The big AI developers like OpenAI, Anthropic, etc. have every incentive to hype up the doom porn. It makes them look powerful, which in turn attracts investment capital. At the same time, they also have every incentive to promote this idea that a superintelligent AI can be our savior, since if AGI is inevitable, shouldn’t we put everything we have into making sure that our AI overlords are benevolent and have humanity’s interest at heart? But again, if we take that view, we also end up pumping lots of investment capital into these AI companies, turning them into massive cultural behemoths without really questioning their ultimate aims.

What if instead of building a superintelligent AI savior, we ultimately end up with a new form of techno-feudalism, powered by AI? What if a true superintelligence never emerges, and all of the energy and resources we’re pumping into AI is really just going to create a new class of elites, with the rest of us dependent on some sort of universal basic income and totally at the mercy of the owners, controllers, and operators of AI?

To me, this seems like a much more likely scenario—and from what I can tell, we are already in the opening phases of it. Generative AI has already become so powerful that it will likely replace a large number of jobs or render them obsolete—which may or may not be a problem in the medium to long term, but will certainly be a problem in the short term. As increasing numbers of people find themselves unemployed, it will put a tremendous strain on our welfare safety nets and drive calls for increased government spending on social programs. But our governments are already so deep in debt that these pressures can only lead to some combination of (hyper)inflation, sovereign debt crisis, and austerity-driven political instability.

Some people think that the solution to all of this is a universal basic income (UBI). But every time a UBI has been tried, it has led to negative outcomes, including worse wealth outcomes. Unfortunately, if AI is truly going to be a huge driver of unemployment (which doesn’t require AGI or a superintelligence—our current models are already powerful enough to drive massive disruption in the labor market), then I don’t see how we can avoid a massive push toward UBI. Certainly not with how centralized our current investments in AI are—but again, all of the AGI doom porn is driving us to centralize things even more. So while all of the benefits of this new technology accrue to Sam Altman, Elon Musk, Dario Amodei, etc., and they keep holding out the promise of a messianic superintelligent AI that never truly emerges, the rest of us end up in a world where we have very little agency or control over our lives, with or without a UBI.

It doesn’t have to be this way. But if we all keep buying into the doom porn without looking critically at these AI companies and their transhumanist messianic promises, I think that this is the future we’re most likely going to get.

Thoughts on the Worldcon 2025 AI “scandal”

I’ll just come out and say it: I predict that the world’s last Worldcon will happen before 2034, and that after that, the convention (and possibly the Hugo Awards themselves) will be permanently disbanded. That’s what I think will be the ultimate consequence of the latest “scandal” regarding Seattle Worldcon’s use of ChatGPT, and the anti-AI madness currently sweeping the science fiction community on Bluesky.

If you haven’t been following the “scandal,” you ought to check out Jon Del Arroz’s coverage of it. He’s definitely partisan when it comes to politics and fandom, but he’s neutral on the subject of AI, or as neutral as you’re going to find, especially in writerly circles.

But here’s the TL;DW: the people organizing Worldcon 2025 in Seattle decided to use ChatGPT to help them decide which authors and panelists to put on which panels. This triggered a bunch of authors and panelists who are opposed to generative AI, simply on principle. Some of these authors—including Jeff VanderMeer, who is up for a Hugo award—have bowed out, while others have called for resignations and apologies. Many of the volunteer staff have also stepped down, exacerbating the staffing shortage—which is why the convention relied on ChatGPT in the first place. And apparently over on Bluesky, the scandal is taking on a life of its own, with everyone working themselves up to a massive frenzy over the subject.

My own opinion of the “scandal” is this: it isn’t a freaking scandal! Whatever your opinion on AI-assisted writing, using ChatGPT as an aid to research panelists is totally above-board and a legitimate use of AI. To disagree with that is to say that there is no ethical use-case for generative AI whatsoever, which is hypocritical and absurd—unless, of course, you’re still writing your books on a manual typewriter and submitting them to your publisher via the US postal service. Or using WordStar, if your name is G.R.R. Martin and you’re the last person on earth who “writes” with that defunct software (putting “writes” in quotation marks, since we all know by now that Martin isn’t actually writing anything).

But it isn’t the “scandal” itself that interests me, so much as what the fallout will likely be. Ever since the Sad Puppies debacle in 2015 (and arguably long before that), Worldcon has been dominated by the wokest fringe of SF&F fandom, and it’s been an open secret that the Hugo awards themselves are controlled by the publishers, largely for marketing purposes.

So at this point, the only things really keeping the whole Worldcon/Hugo charade going are 1) woke authors who use the convention to manufacture clout for their failing careers, because they wouldn’t otherwise have a platform, and 2) woke publishers who use the awards to manufacture clout for their poorly-selling books, because they don’t actually know how to market books effectively (at least, not to readers—libraries are a whole other subject deserving of its own discussion, because there is a genuine scandal there). Once those two things dry up, and all of the ruin has been exhausted from these institutions (ie Worldcon and the Hugos), I really do think they will collapse and go away.

That’s what I find so fascinating about this scandal: it is so utterly toxic and absurd on its face that it’s going to do permanent damage to Worldcon and the Hugos. The writers of the rising generation who will one day dominate the field are all playing around with these AI tools right now, and doing really interesting things with them. Meanwhile, most of the authors who are screaming about AI on Bluesky right now will either be dead or irrelevant (or both) in the next 20 years. And yes, Mike Glyer, you can quote me on that.

Seriously, though: if the Worldcon community is so vociferously opposed to a legitimate use-case of ChatGPT—namely, to alleviate the already overwhelming burdens being carried by the volunteer staff—AND they continue to be absolutely toxic about it online… who in their right mind would want to be a part of that community? And since the only thing keeping the whole charade going is its ability to manufacture clout, I think its years are numbered—and likely in the single digits.

On the plus side, if/when the Hugos finally die, I won’t have to read any more crappy woke books to be able to say I’ve read (or DNFed) every Hugo award-winning novel.

In defense of AI art & AI writing

If Andrew Tate wrote a book about how to make your wife or girlfriend into your slave, would he be within his rights to demand that no woman reads that book without his consent?

Brandon Sanderson was inspired to become a fantasy writer when, as a child, he read Dragonsbane by Barbara Hambly. Sanderson is now worth some seven or eight figures, while Hambly, who is still alive and still writing, struggles to pay her bills*. Should Hambly be entitled to a portion of Sanderson’s earnings, for inspiring him to become a fantasy writer?

Every mother who has ever lived gives tremendously of herself to her children, even if only in the physical act of giving birth. Should mothers have a legal claim on their children, for monetary compensation for all of the sacrifices they make?

These might seem like crazy questions, but when you consider them in the context of the ethical arguments about AI art and AI writing, they really aren’t. They illustrate just a few of the unintended consequences of the regime that many disgruntled and resentful creators are arguing for, when really what they want is a world in which AI doesn’t exist.

One of the most difficult parts of being a creator is putting your work out into the world and letting it go. At that point, you really have little control over what it does and how it impacts the world. Many artists who labor in obscurity dream of making an impact on the world, not realizing that success—even artistic success—can be far more devastating and traumatic than obscurity. Just ask Rachel Zegler.

I’m not saying that artists shouldn’t be paid for their work. Certainly they should be paid—and certainly there are valid ethical concerns with how AI is disrupting art and literature. But unhinged people who rant online about how AI is “stealing” artists’ work, or how it is “plagiarizing” writers’ books, simply because the LLM’s training data includes free online content (much of which was posted online by said artists and writers)—I don’t think those people really care about the ethical nuances of the debate. I think they just want to force us all to go back to a world where generative AI doesn’t exist.

Did David Weber steal from Star Trek when he wrote the first Honorverse novel? Did John Scalzi steal from Robert A. Heinlein and Joe Haldeman when he wrote Old Man’s War? Did Terry Brooks steal from Tolkien? How about George R.R. Martin?

Where exactly is the line between the “stealing” that should get you thrown in prison, and the “stealing” that people wink and nod at when they say that good artists copy and great artists steal? And how do we know that we’ve drawn the line in the right place? Would we have worse art, or better art if Star Wars had gone into the public domain in the 80s or 90s? Would artists be making less money, or more?

I don’t have the answers to these questions, but I ask them because I think they are worth considering. And I think that most of the artists who think they have the answers are really just acting out of fear.

Will AI outright replace artists and writers? Will it make it impossible for artists and writers to make a living? I remain skeptical, though I acknowledge that there are some ways in which AI art appears to be doing exactly that. For example, I’ve been playing around with OpenAI’s new image generator, making some cover mock-ups, and I’ve been very impressed. But I will still seek out James at GoOnWrite.com for my covers, because he has a much better eye for this sort of thing, and my sales data reflects that his covers sell more of my books than my own covers do.

Should writers and artists expect to be paid whenever their art is used to train an LLM? Aside from the impracticality of enforcing such a law, I don’t think that we should—at least, not for general training data. Fine tuning is a different matter. If an AI is going to be fine-tuned to write in my particular style, I think I have a right to be recompensed for that—and I’d be willing to license that right for a reasonable fee. Perhaps this is a path that artists could pursue as well. But demanding that every AI company pay every artist for training their LLMs is kind of like Barbara Hambly demanding that Brandon Sanderson pay her a portion of his earnings. Likewise, whenever artists or writers demand that their intellectual property is excluded from the training data, it smacks to me of the first question with Andrew Tate and his hypothetical book.

I will admit that I’m biased in favor of AI, since for the last two years I’ve been working to incorporate it into my own creative process. But I’ve been doing this out of a recognition that these things we call “writing” and “making art” are going to change because of these new technologies. In a world saturated with AI, will it still be possible to make a living as an artist or a writer? Yes, I believe it will, but at the same time, I believe that our conception of what it means to be an “artist” or a “writer” will almost certainly change. That’s why I’ve chosen to embrace these tools rather than fight them—and why I think my fellow artists and writers should as well.

*At CONduit 2010 in Salt Lake City, Barbara Hambly was the guest of honor, and in her keynote address she talked about her struggles to pay her bills with writing. I assume that things haven’t changed much in the years since then, though I would be delighted to learn that I’m wrong.