Why I no longer consider myself to be a libertarian

I’ve been going back and forth on this post for almost a year now, wondering how exactly to express my thoughts. Some of the positive reviews on my fiction have expressed that I write “libertarian fiction,” and in some ways, I think that’s accurate: certainly, I value liberty very strongly, and support those government policies that are designed to safeguard our liberties while opposing those that seek to destroy them. That has not changed. But my views of libertarianism more generally have, perhaps in ways that might surprise my longtime readers.

First, a little bit of my personal history. I grew up in one of the most liberal parts of the country, Pioneer Valley, Massachusetts, and considered myself a conservative while I lived there. Then, after serving a two-year mission for my church in Silicon Valley, California—what is probably the most progressive, leftist part of the country—I went to college at Brigham Young University, in the most Republican county of the most Republican state in the United States. At that point, I considered myself to be a sort of left-leaning classical liberal. When Dick Cheney spoke at BYU’s commencement, I blogged about the protests and attended the alternate commencement where Ralph Nader spoke.

I graduated in 2010, in the middle of the Great Recession, and made the fateful decision not to go to grad school at that time. To this day, I count that as the single best decision I ever made in my life (right up there with deleting my Facebook and Twitter accounts). Not only did this force me to learn how to navigate the real world, but it also got me out of the indoctrination factory that the national university system has become, even to a degree at my alma mater, BYU.

About five years after I graduated, I got red-pilled and started listening to right-wing commentators like Glenn Beck, Ben Shapiro, and Dennis Prager. I also looked seriously into Ron Paul and the libertarian movement, and became something of a libertarian. As fractious as libertarianism is as a political philosophy, it seemed like the most logically coherent and intellectually honest way of understanding the world, whereas leftism and conservatism were both riddled with internal contradictions.

But then I got married and started a family. That experience has changed me in a lot of ways, perhaps even more than all the rest of my life experiences combined. But politically, the biggest thing it has caused me to rethink is this question:

What is the fundamental unit of society?

I’d always paid lip service to the belief that the family is the fundamental unit of society, but starting a family of my own has made that real for me in a way that simple bumper-sticker slogans never could. Before, I was living for myself. Now, I live for my children. Before, I was the hero of my own story, and that story was a single volume. Now, my story is just a single volume in an ongoing saga, a link in the chain of the generations that came before and will go on after me.

Libertarians believe that they stand in opposition to authoritarians of all stripes, be they communists, fascists, socialists, etc. But both libertarians and authoritarians operate on the unspoken assumption that the individual, not the family, is the fundamental unit of society. Leftists want to destroy the family and put the state in charge of raising and educating children, in order to make them obedient to government authority. Libertarians, on the other hand, romanticize this idea of the atomized individual who follows his own path and eschews all forms of collectivism, including the family. Ayn Rand’s books are populated by ubermensch who seem like they’ve sprung forth from the head of Zeus, and the children in her novels are basically just adults in miniature.

Allow me to put it this way: Margaret Thatcher had a brilliant quote about socialism that libertarians love to repeat. And from a purely economic standpoint, I believe that the libertarians are correct. But change that quote just a little, and you get this:

The problem with ~~socialism~~ libertarianism is that you eventually run out of other people’s ~~money~~ families.

Families don’t just happen. They take a lot of work to build and to maintain, and unless they are planted in a culture that nourishes them, they will wither and die. Libertarianism does not foster that kind of a culture, yet it depends on families in order to raise the kind of people who can make a libertarian society work. People from broken families often lack the mental and emotional maturity to take upon themselves the personal responsibilities that come with personal liberty—in other words, they lack the capacity for personal independence which libertarianism depends on. Growing up in a healthy family isn’t the only way to develop that sort of independence, but a society of broken families will invariably fail to produce such a people.

This is why libertarianism ultimately leads to authoritarianism. We aren’t all characters in an Ayn Rand novel: we aren’t all ubermensch all of the time, reshaping the world by the strength of our will. And when we inevitably fail, where can we turn for help? If society is nothing more than a group of individuals, then ultimately the only place to turn is the state. Perhaps there may be churches, companies, or other private civic organizations to which a person may turn, but any form of libertarianism that rejects altruism as a moral good will fail to foster these organizations as well. So, in the absence of anywhere else to turn, individuals will, over time, turn increasingly to the state, trading their libertarian freedoms for economic and social security. A society that exalts the individual at the expense of the family will always, in the end, devolve into a statist tyranny.

If you want to create a stable society that recognizes individual freedom, you have to recognize the family as the fundamental unit of that society, and you have to proactively enact policies that will foster a culture of strong families. Not only does this give you a social safety net that is totally apart from the state, but it also ensures that your society will be self-perpetuating, since one of the central purposes of the family is to create and raise children.

In fact, the family is perhaps the best antidote to government power creeping into every facet of society, which also makes it the best way to push back against woke leftism, ESG, and the Great Reset. That is why everything about leftist progressivism is calculated to destroy the family. Parents concerned about CRT in their schools? Domestic terrorists. Kids who say that they’re transgender? Transition them without telling the parents, and take them away from their families if the parents object.

But it’s not just a partisan issue. If the family is the fundamental unit of society and needs to be strengthened, then there are things on both the left and the right that need to change. For example, poverty is a huge issue for families, since poor families are much more likely to break up due to the stress. But conservatives often ignore the issue of income inequality, mouthing platitudes about the free market while giving us socialism for the rich and capitalism for the poor. And the libertarians are little better, what with how they push for the legalization of drugs, prostitution, abortion, and pornography. Few things have done more to destroy the family than widespread substance abuse and the hypersexualization of our society.

This is why I’ve mostly given up on reading Heinlein anymore. He’s a brilliant writer with a fascinating take on some of science fiction’s most fundamental tropes, but whenever he writes about sex or sexuality, all I can think of is “the problem with libertarianism is that you eventually run out of other people’s families.” Heinlein and his boomer readership took the family for granted, neglected their own, and gave us a world of widespread sexual promiscuity, where society is falling apart.

So that’s why I don’t consider myself a libertarian anymore, even though there are many tenets of libertarianism that I still admire and believe, especially on the economic side. I suppose you could say that I’m a conservative, but that isn’t really accurate either, because most strains of conservatism in 2024 really seem more about conserving the leftism of two or three generations ago. So I guess that means I’m politically homeless—just like most of my fellow Americans these days.

Our world makes a lot more sense…

…when you realize that the internet is a factory for creating cults, and that social media and smart devices are force multipliers for this effect.

Before the internet, your “community” was a geographically bound group of people, who were diverse enough (that’s “diverse” with a lower-case d) to give you an interesting variety of perspectives and worldviews. Also, you typically interacted with each other while physically in person. If you said or did something extremely embarrassing, it typically didn’t get beyond your immediate circle of associates, or the people you decided to tell about it.

The internet changed everything by turning “community” into something that was bound by interests, hobbies, perspectives, or worldviews. Now, every person with a weird and perverse fetish, who before kept it hidden because they were the only person in their community who held it, could find all the other people in the world who held the same weird and perverse fetish, and create a “community” around that thing. Same with crazy political views. Same with radical ideology.

At the same time, if you said or did something embarrassing, and it went viral, your embarrassing moment would be broadcast far beyond your immediate circle of associates, to people you had never before met—as well as to people whom you would never want to hear about it. This effect was multiplied by the development of social media, and it led people to self-censor and conform to whatever “community” they were a part of, for fear of standing out and going viral.

At the same time, all these “communities” turned into echo chambers that warped the various members’ view of reality. And because anger and outrage are the things that are most likely to get spread on the internet (see the video above), these echo chambers started to become paranoid and break off from the rest of the world, taking the dimmest and least charitable view of everyone who wasn’t a member of their “community.”

As these online communities came to take a more prominent place in the average person’s life than their own families and communities, the average person’s sense of identity increasingly became caught up in whatever hobby, fetish, or ideology united the “community.” And because of how paranoid these communities became, they increasingly came to demand absolute and preeminent allegiance. Is this starting to sound like a cult yet?

But it goes deeper than that, because the devices through which we connect with these “communities” actually make us more physically isolated from each other, while giving us the illusion of a genuine connection. When you’re holding up your smart device to capture a fireworks show, you’re not actually enjoying the fireworks. And when you’re lying in your bed, posting updates on your social media or chatting with your friends, you are still, in reality, lying alone in your bed. Combine that with the internet’s penchant for driving outrage, and you have the two key ingredients for a mass formation psychosis: a large group of atomized and isolated individuals suffering from free-floating anxiety.

Before the pandemic (that’s the Covid-19 pandemic of 2020, for future readers who may be wondering “which one?”), I think that we lived in a world where the majority of our countrymen—the members of our “community” in the traditional sense—were not caught up in one of these cults. Either the majority of people weren’t caught up in one of these echo chambers, or the majority of echo chambers hadn’t yet reached cult status; either way, people were still generally reasonable, on the whole. But with the pandemic, I think we passed through some sort of a threshold, to the point where now the best way to make sense of our world is to assume that the majority of people around you are trapped in some sort of a cult—which may literally be the case, considering the theory of mass formation psychosis.

So what does this mean for where the world is headed? Nothing good. I suppose that in an optimistic scenario, a critical mass of people manages to break themselves and their friends out of this mess, and go on to build a new society with proper safeguards in place to prevent this sort of mess from happening again. But I think it’s much more likely that this thing runs its course, and large swaths of our civilization drink the proverbial Kool-Aid.

Fortunately, there is a script that we can run, as individuals and (more importantly) as families, to get through this mess. It’s the same script that we use to get ourselves or our loved ones out of a dangerous cult. I’m not yet an expert on that script, but I know that it’s out there, because cults have been a thing for a very long time. But I’m pretty sure it involves putting your family first, getting off of social media, limiting the amount of time that you spend on your smart devices, and becoming more involved in your real “community”—the real-life one where you actually live.

Writing and Publishing Plans moving forward

Over the past few months, I’ve been spending a lot of time experimenting with AI writing and finding ways to incorporate it into my writing process. The goal so far has been twofold:

  1. Develop the ability to write one novel per month.
  2. Get to a level where I can write 10k words per day.

I’ve accomplished both of those things, but I can’t hit them consistently without burning out. Writing with AI has proven key to both of them, but I feel like I need a lot more practice with AI-assisted writing before I’ve achieved any level of mastery. Once I have mastered AI-assisted writing, however, I should not only be able to achieve both goals consistently, producing a much higher quantity of work, but should also be able to maintain or exceed the current quality of my writing as well.

However, I was thinking about it from a reader’s perspective on my morning walk last week, wondering what I would think if, say, David Gemmell was still alive and writing Drenai books, or Roger Zelazny was still alive and writing Amber books. What would I think if either of them announced that they had found a way to incorporate AI into their writing process, so that they could produce a new Drenai/Amber book once every month, instead of once every year? Better yet, what if Andrew Klavan—who is both still alive and still writing Cameron Winter books—announced that he would start publishing new books monthly? As a fan of all these writers, what would I think of that?

Assuming that there was no drop-off in the quality of these new, AI-assisted books, I would find this really exciting, and would probably become a much bigger fan, simply from the fact that I’m reading so much new stuff. However, after a while this might become too costly for me to keep up with, leading me to fall away and not be quite so current on what they’re producing. I would still love them as authors, but if they published too quickly, I might have to take a break after a while—and if they continued to publish at that rate, I might never catch up. After all, there are lots and lots of authors that I love, and I can’t dedicate more than a fraction of my reading time to any particular one of them.

So there’s probably a sweet spot, between publishing too much and publishing too little. Most authors are probably on the Patrick Rothfuss / George R.R. Martin side of that line, where fans wish they would write more and write more quickly. But at a certain point, it is possible to overwhelm most readers by writing too much. Of course, there will always be a core group of fans who will read everything much faster than you could ever possibly write, even with AI assistance, but if that’s the only group you’re catering to, then you probably won’t ever have more than a cult following, because you won’t be able to convert casual readers into superfans.

With all of that said, I feel like I’ve gotten to a good place right now, where I’m publishing a free short story every month. I think that’s actually been a really effective way to turn casual readers into fans, and to keep my name fresh in the minds of my readers. And if Gemmell, or Zelazny, or Klavan were producing a free short story every month, I would definitely subscribe to their newsletters and drop everything to read it.

So keeping up the free short story per month is probably a good idea. But for novels, it might be better to release a new one every two or three months instead. Free short stories are much less of a time and money burden on the readers, and thus are effective at turning fans into superfans. But with the novels, which do take more time and money to read, it’s probably better to throttle that back a little bit.

The interesting thing to me is what that means for my creative process, especially once I’ve reached the point where it takes less than a month for me to produce a novel. If I’m only publishing a novel every 2-3 months, that means that I can—and probably should—take a break between each novel WIP. Which means that the thing I should be shooting for isn’t to maintain a writing speed of one novel per month, month after month after month, but to hit that speed in creative bursts, taking some down-time to replenish the creative well and prepare for the next project.

It’s a very different writing paradigm from the one I’ve been following for the past decade. Until now, I’ve basically always had a novel WIP that I’ve actively been working on, and whenever I feel like I need a break, I usually move on to a different novel WIP. From time to time, I’ll “take a month off” to work on short stories, but the goal there has always been to write X number of stories in no more than a month or two, once again making writing the focus instead of recharging the creative well.

How would things be different if instead, I told myself “I’m taking a break in order to prepare myself to write my next novel,” with a plan for books and other media to consume in order to get things ready for it? And then, instead of taking several months or even years to write the project, to produce it in just a few weeks of white-hot creative heat, afterwards necessitating a break for a while just to cool down? Until now, I’ve never tried anything like that, because I haven’t thought myself capable of producing work that quickly. Indeed, the very thought of taking an extended break from having an active writing WIP has struck me as being lazy. But now that I know I can produce that quickly, perhaps this is a new paradigm that I ought to at least explore.

For my current WIP, Captive of the Falconstar, I’m not stressing out about finishing it in less than a month. But I am following all the benchmarks that I developed, and watching closely to see what takes more time to write than I thought, and what takes less. And it may very well turn out that the best way to improve quality is to get into that white-hot creative heat that comes from producing quickly, so that’s something that I’m watching closely as well.

Why Nick Cave is wrong about human creativity and generative AI

First of all, I don’t think that Nick Cave is entirely wrong. Laying aside the fact that ChatGPT is just one of the many LLMs that are publicly available, and that using it as a stand-in for all of generative AI is like saying “AOL Online” when you mean “the internet,” he does make a fair point that using generative AI as a replacement for basic human creativity is wrong.

What he doesn’t understand is that using AI this way is also counterproductive. He blithely assumes that it takes no skill or effort whatsoever to use these AI tools—that all one has to do is tell ChatGPT what to write, and it will magically produce something if not great, then at least publishable. But as someone who has written several AI-assisted novels and short stories, I can assure you that it does take effort to produce something more than merely passable. Indeed, with longer works like novels, I can assure you that our current AI models are incapable of producing even passable work without considerable human intervention.

This is why I call it AI-assisted writing, as opposed to AI writing. When you do it right, the AI tools don’t replace your inner human creativity, but augment and enhance it, making things possible that were either impossible before, or that required a prohibitive degree of struggle. Writing with AI is still a form of creativity, though it might not look exactly like previous forms. But isn’t that also true of writing on a computer vs. writing longhand? Does it take any less creativity to write a novel on Microsoft Word than it does to write it on parchment with a fountain pen?

Granted, the technological leap from word processor to generative AI is much more profound and fundamental than the leap from pen and paper to typewriter, or from typewriter to MS Word. Speaking from experience, I can say that writing a novel with ChatGPT or Sudowrite feels a lot more like directing a play with an amateur (and very stupid) actor than it feels like wrestling with the empty page, at least in the early generative stages. But it’s still, fundamentally, a creative act—and that’s the main thing that Nick Cave misses in his rant. Anyone can ask ChatGPT to write them a novel, just like anyone can bang their hands on a piano or strum their fingers across the strings of a guitar. But to produce something good—that requires effort.

However, there is an even deeper level where Nick Cave is wrong here, and that is in the unspoken assumption that the difficulty in creating something is the thing that gives it value. It’s the same principle that Karl Marx expounded in his labor theory of value: that the economic value of a good or service is determined by the amount of labor required to produce it. Cave applies the same logic to creative and artistic value. That’s just wrong.

Do we love J.R.R. Tolkien’s Lord of the Rings because it took him several decades to write it, and largely represents the greatest product of his life’s work? Obviously not—otherwise, every amateur writer who’s been polishing and repolishing the same unfinished novel for the last twenty years must necessarily be the next Tolkien, never mind that their book reads more like The Eye of Argon than The Fellowship of the Ring.

So if it’s not the creative struggle or the amount of human effort that ultimately gives art its value, what does? The same thing that gives a product or service its economic value: the utility that it provides to the person who consumes it. In other words, the thing that gives art its value is the goodness, truth, and beauty that it brings into the lives of those who receive it.

This is especially true of writing, which is perhaps the most collaborative of all the arts. Without a reader to read it, a book is nothing more than processed and flattened wood pulp full of meaningless squiggles (even less than that for an ebook). When I read a book, I care not a whit for how much work it took for the author to come up with it. Same with the music I listen to, or the games that I play. What I care about is how it makes me think, feel, or experience the world.

And if it’s possible to bring more goodness, truth, and beauty into the world by using generative AI, so what? If it’s easier than writing a novel the old way, does that somehow mean it’s “cheating”? If the answer to that question is yes, please tell me why you don’t churn your own butter, or hunt your own food, or chop your own wood and burn it to heat your house—because all of those applications of modern technology are “cheating” in exactly the same way. Also, I hope all the books in your personal library are handmade, illuminated manuscripts, because the printing press is far more of a “cheat” than generative AI, as the last few hundred years of history clearly shows.

Nick Cave is wrong. ChatGPT is not the most “fiendish” thing “eat[ing] away at [our] creative spirit.” Our humanity is far more resilient and anti-fragile than he gives it credit for. Those who try to replace human creativity with AI will fail, not because of artists like Cave who stubbornly resist the “temptation” to use these tools, but because of those who embrace the new technology with an open mind, and discover that our humanity is not a liability, but our greatest asset—a premise that Cave ironically rejects with his fearmongering about our fundamental replaceability.

Navigating Woke SF, Part 5: Where do things stand now?

So it’s been almost exactly two and a half years since I posted my first “Navigating Woke SF” blog post, where I demonstrated an anti-conservative bias in the responses I was getting to my traditional short story submissions, and predicted a cultural backlash against the woke moral panic of our times. Those predictions are now playing out all around us, from the Bud Light boycott to the last few Disney/Pixar bombs to the unlikely success of movies like The Sound of Freedom, which is still showing in theaters in my area.

To no one’s surprise, the institutions like Disney that have already been captured by the woke intersectional left have been tripling- and quadrupling-down on their woke insanity, as we see in movies like The Marvels and Disney’s live action Snow White. Which has opened up some wonderful opportunities for conservative-minded publishers and creators to outflank them, as we see with the Daily Wire’s competing release of Snow White.

Indeed, the anti-woke backlash in the mainstream culture has gotten so bad that South Park recently lampooned it with an episode where all of their characters were replaced by “diverse women.” I didn’t watch the full episode, but the clips I saw from it were absolutely hilarious—and directly over the target.

So with all of that brewing in the cultural mainstream, where do things stand in our particular little corner of it? Namely, science fiction publishing and the traditional short story markets?

Well… let me tell you a story. It begins earlier this year, when I decided that I wanted to take some of the money I’ve been earning with my indie-published book sales and subscribe to one of the traditional science fiction magazines. For a writer like me, it’s a legitimate business expense, and it seemed like a nice way to support the genre, as well as build my science fiction collection.

I decided to go with Clarkesworld, because even though they are woke, they seemed to be less woke than most of the other major magazines. The particular brand of diversity they like to emphasize is publishing non-US authors, especially Chinese authors, who tend to write stories that are neither woke nor anti-woke, which can be a real breath of fresh air. Seriously, there is some really fascinating science fiction coming out of China these days, which is definitely worth checking out, and Clarkesworld, to their credit, tends to publish a lot of good Chinese authors.

So I subscribed to Clarkesworld magazine and began to receive a physical issue each month, which I added to my currently-reading pile and slowly read through. But I began to notice something disturbing with each issue: namely, that even if the story itself wasn’t particularly woke, there would always be some woke element thrown into it. For example, the story might be a weird western adventure tale, but one of the characters would randomly mention their LGBTQ wife. Or the story would be a far future space opera, and one of the characters would casually drop that they were trans, even though it had nothing to do with the story.

At the same time as all of this was happening, I discovered an interesting podcast where a former Dreamworks animator discusses how he left the company after learning that the Dreamworks executives were explicitly trying to use their movies as a form of social engineering for the woke agenda. The mechanism for this social engineering was what I found particularly interesting: namely, that they would associate the movie’s villain with some specific aspect of culture/religion that they were trying to vilify, and associate the good guys with those aspects of the woke agenda that they were trying to push. In the example given in the podcast, they literally had the villain shout “the family is the basic unit of society!” at the climax of the story.

According to the former Dreamworks animator, this is especially true of sequels for popular franchises and IPs. For example, Wreck-It Ralph is a really fun and well-told story about a “bad guy” from a video game trying to become a hero, and becoming one when he sacrifices himself to save a misfit character from another video game, who turns out to be that video game’s queen. Really charming, really good story. But Wreck-It Ralph 2 throws all of that out the window, turning Ralph into a simp and Vanellope into a liberated girl boss, and crapping on all the traditional Disney princesses at the same time. The message was laid on pretty thick, and the result was a garbage movie.

Which made me wonder about Clarkesworld, because that particular social engineering technique is EXACTLY what I was seeing in almost all of the Clarkesworld stories. The thing is, I couldn’t tell if it was deliberate or unintended. I can totally believe that the Clarkesworld editors would tell their authors “we love your story, but we want you to add just this small woke element to it, and then we’ll publish it.” There are enough desperate authors out there who would probably do exactly that, if it meant receiving an acceptance from a pro-paying market after getting so many disheartening rejection letters.

But personally, I think it’s more likely that the authors are throwing in these elements themselves, without any explicit direction from the editors. That is, the authors are so desperate to be published by these woke traditional magazines that they’re not only self-censoring the stuff that they don’t think the editors will like, but they’re adding woke elements just because they know it will increase their odds of getting accepted. Which to me, is just sad.

Honestly, I wish that the more conspiratorial option were true, and that Clarkesworld has a devious social engineering agenda that they push onto their stories. That would be better than the alternative, which is that the literary science fiction field has been so thoroughly captured by the left that authors are adding woke elements to their stories without getting any feedback, direction, or urging from the editors and publishers, just because they know these stories won’t go anywhere without them.

So how is a conservative (or at this point, even a non-leftist liberal) supposed to navigate the field? At this point, I really don’t think there’s any way to do it except to go indie, or to go with Baen (which is itself independent of the ~~Big 6~~ ~~Big 5~~ ~~Big 4+1~~ Big 3+1, or whatever the New York book publishing establishment is called these days, after the Simon & Schuster sale). There may be some other small publishers that, like Daily Wire, are driving into the smoke of our cultural institutions’ Götterdämmerung, but within the science fiction field, I don’t think any of them are big enough to offer much more than what you’ll get by going indie—except, perhaps, with the opportunity to get in early with the up-and-coming next generation of editors and publishers, who will eventually replace the dinosaurs that currently dominate the field.

But that’s a big gamble that may never pay off, because the science fiction field has been dominated by leftists since at least the mid-60s, to the point where most subgenres of science fiction are now synonymous with woke. After all, if the authors themselves are inadvertently telling stories that use social engineering techniques, not because the editors are making them, but because that’s the only way to get published, the rot runs very, very deep. And even during the “morning in America” moment in the 80s, when science fiction pulled back from the leftist crap to give us classics like Ender’s Game, there was still a thread of wokeism in stuff like the sexuality in the Vorkosigan books, or the environmentalism in Hyperion (which I love, don’t get me wrong… but yeah, Dan Simmons is a bit of a tree-hugger).

The point that I’m trying to make with all of this is that, when it comes to the woke agenda, science fiction is a thoroughly captured field. That’s what this last episode in navigating woke SF says to me. If that ever changes, it will be after most of the traditional markets like Clarkesworld collapse and the major awards like the Hugos and Nebulas go defunct, because until that happens, everyone in this particular field is still going to be in denial about the anti-woke cultural backlash. That’s just how deep the woke goes. So until then, if you’re a non-woke author like me, the only way to navigate the field without compromising your values is to go full indie, at least when it comes to short stories.

What about supporting the arts? At this point, instead of subscribing to a particular publication or magazine, I’ve decided to make a short list of non-woke authors I want to support, and to buy their books as soon as they come out. One of those authors is Andrew Klavan, who writes more in the mystery/thriller genre than science fiction, though his Another Kingdom trilogy is quite good. I’m reading his latest Cameron Winter mystery right now, and it’s just as good. I highly recommend it.

The defining moral conflict of our times

In just ten days, this comedy skit has gotten about 1.2M views on YouTube, and probably a lot more on X. It’s gone viral for a couple of reasons: first, because it makes fun of celebrities, whom most of us Americans now love to hate; and second, because most of us who have watched it feel like we’re in a similar position, thanks to the way social media makes celebrities and narcissists of us all.

I can sympathize with the confusion of most Americans, who feel like the recent escalation of the Israeli-Palestinian conflict came out of nowhere, and don’t really know who’s right. The last big “current thing” was probably the Russo-Ukrainian war, and most of us have since come to the conclusion that there are no good guys in that conflict, only innocent civilians and impoverished taxpayers who’ve been bilked out of billions and billions of dollars while our insanely corrupt politicians vow to fight to the last Ukrainian.

Here’s the thing, though: you shouldn’t have to pick a side to be able to declare, without any misgivings or doubts, that this is evil:

Israeli Official: Hamas Raped ‘Women, Grandmothers, Children’ So Violently ‘They Broke Victims’ Pelvis’

Unlike most Americans, I am not unfamiliar with the Israeli-Palestinian conflict. I studied it for four years in college, interned briefly with a major K-street foreign policy think tank, and traveled both to Israel and the disputed territories of Judea and Samaria. At the time, I was very pro-Palestinian.

My school (Brigham Young University) was actually more conservative and a lot more fair to the Israeli side of the conflict than most universities, but even back in the 2000s the entire American academic establishment had a very anti-semitic bent, and the things I didn’t learn—the lies of omission, especially about the history of anti-semitism in the Arab world—could fill volumes.

The other thing that red-pilled me away from my pro-Palestinian stance was the realization that Islam teaches that it is virtuous to lie to the unbeliever in order to further the cause of Islam. This principle is called “taqiyya,” and when you realize that everything we as kaffirs think we know about Islam has been transmitted to us by someone who was taught to lie to us about Islam, it makes a lot more sense. Not all Arabs are Muslim, and within Islam there are a lot of sects and divisions, but all of them share this principle of taqiyya, and the overwhelming majority of Palestinians are Muslim.

Back in my pro-Palestinian days, there were a number of things that I had to either ignore or chalk up as anomalies in order to maintain my pro-Palestinian views. Things like the insane popularity of Hitler’s Mein Kampf all across the Arab world, perhaps rivaled only by the Qur’an. Things like the fact that generations of Palestinians who have never even set foot in the disputed territories demand the “right of return,” while Arabs displaced by other conflicts, such as the Syrian civil war, have no qualms about picking up and leaving their ancestral homelands. Things like the fact that Hamas, Fatah, Palestinian Islamic Jihad, and other Palestinian terror groups deliberately target civilians, whereas Israel goes to great lengths to avoid civilian casualties. Can you imagine what would happen if the Israelis used their own people as human shields the way that Hamas does? Hans… are we the baddies?

As someone who spent a significant portion of his life studying this conflict, and has since had a 180-degree change of view, the October 7th massacre was extremely clarifying. All those things that I used to chalk up as anomalies now fit into place in a way that makes me wonder how I didn’t see it before. The biggest of these has to do with the anti-semitic origins of Palestinian nationalism in the first place. Before the Balfour Declaration, which started the ball rolling for the formation of a Jewish state on historically Jewish lands, there was no concept of a Palestinian nation. Indeed, until the 20th century, the concept of the Westphalian nation-state was foreign to the Arabs, who instead tended to identify with their local community or tribal affiliation. From the beginning, Palestinian nationalism was created and deliberately cultivated as a means of accomplishing exactly what Hamas did on October 7th: the slaughter and ethnic cleansing of the Jews.

Which is not to say that the people we call “Palestinians” were not themselves violently displaced by the wars in 1948 and 1967. Unlike what some conservative commentators have said in recent weeks, these people were not “squatters,” but legitimate inhabitants of these lands. Indeed, many of them are descendants of the ancient Jewish people who converted to Christianity, and thus remained on the land after the Romans pacified Judea in the first century AD and drove their fellow Jews from their homeland. It’s a very ancient and complex conflict, which is why I can sympathize with Ryan Long’s comedy sketch.

But what’s happened with the Palestinians is the same thing that’s happened with the blacks and BLM, the American Indians and the decolonization movement, gender dysphoria victims and the transgender movement, same-gender attracted people and the LGBTQ+ movement, and women generally and radical feminism. It all follows the same pattern. First, the radical left identifies a minority which they can pretend to champion as an “oppressed class.” Then, once they have established themselves as representing that particular group, they redefine that group’s cause to fit into their grand goal, which is to overthrow Western civilization and establish a Marxist utopia.

Let’s be honest. There are only two ways that the Israeli-Palestinian conflict can end. The first is for every Israeli Jew to meet the same end as the victims of the October 7th massacre, or to be violently and permanently driven from their land. The second is for the vast majority of the Palestinians to be resettled somewhere other than the so-called Palestinian Territories, and for Israel to annex those lands. The October 7th massacre didn’t kill the two-state solution so much as it revealed that it was never a viable solution to begin with. How could it, when Hamas—and by extension, those who support Hamas—view the state of Israel itself as an “occupation” of their lands?

Of course, history never truly has an end, so the default is for the current state of affairs to continue in a metastable state until it is either displaced by an outside force, or ceases to be metastable. From 1973 to the present, the Israeli-Palestinian conflict was metastable, which allowed the myth of the two-state solution to take hold. The so-called “peace process” itself became an industry, and a lot of people built profitable careers by propagating this myth. At the same time, the anti-semitic forces that want to cleanse the Holy Land (and ultimately the world itself) of all Jewish blood also propagated this myth, because so long as the Palestinian people remained in refugee camps instead of being resettled elsewhere, the conflict could continue.

But now, the situation has changed. We are living in the midst of a fourth turning, where conflicts such as this one are no longer metastable, and the old order itself comes crashing down. According to Strauss and Howe, who developed the theory of generational turnings and secular cycles, fourth turnings always start with a lot of chaos and confusion, but somewhere in the middle an event or development happens that brings moral clarity to the conflict, which in turn brings everything into focus.

In the Civil War cycle, this event was the Emancipation Proclamation. Slavery was always a major underlying issue in the conflict, but until Abraham Lincoln clearly and unambiguously identified it as the war’s main cause, the war spiraled from a gentleman’s contest on the fields of Manassas into a bloody, chaotic conflagration engulfing the whole nation, and the Union lost almost every battle. After the Emancipation Proclamation, the Union won almost every battle until the South was firmly defeated and the 13th Amendment made every state a free state.

In the last fourth turning, this event was the Holocaust. World War II started as a series of border disputes between the expansionist Axis powers and their neighbors, but after the conflict went global and it became clear that the Nazis wanted nothing less than the extermination of the Jews (and Roma and Slavs and…), moral clarity was achieved. That’s why the Great Power cycle ended with the establishment of the state of Israel in 1948.

I’ve been thinking about this a lot in the past two weeks, and now I firmly believe that the October 7th massacre was the event that brought moral clarification to our own fourth turning. Therefore, the moral conflict of our times comes down to this: should the Jews (and by extension all “oppressor” classes, including straight white males) be liquidated in the name of “justice,” “equity,” and “decolonization,” or should we reject the Marxist utopia, return to God, and preserve God’s ancient covenant people—the Jews?

The third world war has probably already begun. This is the defining moral conflict of our times. There will be no return to the status quo ante: the Israeli-Hamas war will continue to expand until there is a decisive victory for one side or the other. We are still in the early stages where this particular armed conflict can be contained, but make no mistake: the forces arrayed against Israel, both foreign and domestic, are also arrayed against the West. I hope that the Israel-Hamas war ends before it spirals into a global conflagration, but even if this particular conflict isn’t the volcano, it lies on the same moral fault line.

What should that mean for us, who aren’t directly involved in the Israeli-Palestinian conflict? Should we send over billions of dollars in military aid to Israel, and ultimately put boots on the ground in that conflict? I don’t know about that, but I do know that we need to repent and return to God, both as individuals and as a nation, and that we need to call out evil for what it is, especially what we saw in the October 7th massacre. But we shouldn’t stop there. We should call out the evil behind every element of the anti-semitic Leftist agenda, and not just those parts that have to do with the Jews. This includes (but is by no means limited to) the castration and mutilation of gender-confused children, the ongoing slaughter of the unborn, the naked racism of the so-called “anti-racists,” the LGBTQ+ grooming happening in our schools—basically, every social justice cause that has ever been championed by the people now championing the cause of Palestine and Hamas.

This is our moment of moral clarity. Will we stand against evil, or will we fail to call it out for what it is? The October 7th massacre of Israeli Jews by Hamas terrorists was evil—arguably, more evil than the Holocaust itself. Whatever else you believe, if you can’t come out and say that, you are, indeed, one of the baddies.

Would you read an AI-written novel?

That is the question, more or less, that I posed in the subject line of my last email newsletter. The goal was to be a bit provocative, of course, but I did genuinely want to hear from my subscribers on this topic, and in the author’s note I shared a lot more of my thoughts on the subject of AI-assisted writing.

So far, I’ve gotten about half a dozen responses—a lot less than I was expecting, honestly—but the responses I have gotten have been overwhelmingly negative. As in, “no way in HELL will I ever read an AI-written novel, and if you ever outsource your writing to an AI, I will never buy any of your books again!”

…which is awkward, because I’m currently working on an AI-assisted novel, with plans to write several more.

From this and other experiences, it’s become clear to me that there is a small but extremely vocal segment of the population that has strong and vitriolic opinions about AI. There’s probably a much larger but less vocal segment that thinks AI is terrible (though not terrible enough to shout about it from the rooftops), and another large segment that is AI-curious but doesn’t really have a strong opinion one way or another. At this point, the people like my wife who are proponents of AI are practically the lone voices in the wilderness, at least as far as the culture is concerned.

When it comes to books and reading, I suspect that people skew much, much harder to the “AI is so evil!” side of the spectrum than the “AI is so awesome!” side. This is especially true of science fiction and fantasy, since (1) most SF writers are actually luddites in real life, (2) SFWA is a vitriolic echo chamber of the most luddite of them all, and (3) readers of SF&F tend to skew older, more highly educated, and higher-earning than the general population—meaning that they have more to lose from the AI revolution than they have to gain.

But here’s the thing: as an indie author who has been struggling for the last twelve years to build a successful writing career, and has barely been able to keep it going for most of that time, AI-assisted writing represents either a potential game-changer that can help me achieve the kind of success that has eluded me for years, or else it represents an existential threat that will snatch those dreams of a writing career completely out of my reach. There really is no middle ground—at least, not in the long term.

In the old days, there were six major publishers, dozens of reputable small presses, several regional distribution networks, and thousands of local bookstores, where most readers went to buy their books. To have a writing career, you needed to get picked up by a publisher, who would usually give you five or six books to grow an audience, at which point you were pretty well set up for the future.

Of course, any number of things could happen to torpedo your career, and very few bestselling authors (let alone authors generally) made enough to live comfortably off of their writing alone, but the hardest part of breaking in was breaking out of the slushpile and getting a publishing deal. At that point, you could expect a certain degree of career stability, at least for a few years.

Under this system, it was entirely possible for a genre-specific magazine like Locus to track all of the SF&F books that had been published in the last year. Places like Writer’s Market were able to track all of the publishers, and many readers could—and did—subscribe to all the SF&F short story magazines, and read all the stories. Among writers, there was still a lot of competition, but most of it took place in the slushpile, not after publication.

I’ll be honest: I never actually experienced this system, because it died a couple of decades before my first story was ever published (“Decision LZ1527,” Leading Edge Magazine December 2009). The only things I know about it are what I’ve been able to piece together from Brandon Sanderson’s writing class, Kris Rusch and Dean Wesley Smith’s blogs, several now-defunct podcasts like Adventures in Sci-Fi Publishing, Locus Magazine itself, and various conferences and conventions that I’ve attended over the years, including Worldcon and World Fantasy. I’ve definitely done my homework on the subject, though admittedly, it can be difficult sometimes to separate the myth from reality.

From what I can tell, the old publishing system (which was really more of a 50-year aberration, when you look at how things were set up in the pulp era and before—but I digress) began to fall apart when the big box stores like Borders and Barnes & Noble began to take over from all the mom-and-pop indie bookstores. This led to a distributor collapse, since the big box stores only wanted to deal with one or two national distributors. Publishers responded by downsizing their marketing departments, since now they only had to sell to one or two distributors, which in turn led them to drop a lot of authors with small, regional followings in favor of the big name authors with big, national followings. Pretty soon, most of the major publishers were following a blockbuster model, where if your first book didn’t hit big, they dropped you. Then the global financial crisis happened, a bunch of editors got fired and decided to hang out their shingles as literary agents, and pretty soon the only way to get published was to go through an agent first, then make it through the slushpile, then have a national bestselling first novel… and if you couldn’t do that, tough luck.

That was the state of the industry when I first started indie publishing back in 2011. The rise of Amazon Kindle and the epublishing revolution gave us an alternative to the soul-crushing, dream-killing system that traditional publishing had become. A lot of us jumped on the chance to “go indie,” becoming our own publishers and digitally publishing our own books.

…except then, Amazon launched Kindle Unlimited and gobbled up most of the market share, making it so you basically had to go through Amazon if you wanted to have a career. Which meant that your career was beholden to the whims of the Amazon algorithm, which favored new books over old books and books in KU over books that weren’t. Also, if anything you wrote fell under the Seattle-based Amazon’s definition of “hate speech,” you could fuggedaboutit. And then, Amazon launched Amazon Ads, which turned indie publishing into a pay-to-play game, where you either had to roll over most of your profits into advertising, or see your books languish with a ranking of 100,000 or higher.

By the end of the 2010s, “going indie” was less about being truly independent and more about constantly trying to please the Amazon gods. That meant rapid-releasing, spending a lot on ads, publishing in KU, and writing to trends like reverse harem or werewolf dragon shifter pirates. Those of us who didn’t play that game soon found that we were voices in the wilderness, who were barely able to eke out a meager existence by publishing something new every month.

Now, it is impossible to keep up with everything coming out in your genre. Something like 2-3 million new books are published on Amazon every year, and the bottom third of those never sell a single copy. The competition has moved out of the slushpile and onto the internet, and while lots of great stuff is getting published, the stuff that gets pushed to the top is typically the stuff that aligns with the agendas of the people who control the algorithms—and in areas outside of the publishing industry, this agenda involves things like drag queen story hour, pride paraphernalia for toddlers, medical assistance in dying, mask mandates and vaccine passports… the list goes on and on.

Which brings us to where we stand today, with generative AI poised to revolutionize the publishing industry yet again. Like it or not, AI is going to change everything—we can already see the wave beginning to swell. The only question is whether we, as authors, are going to catch this next wave, or be crushed by it.

It could very well be that this wave is actually a tsunami. That is the pessimistic scenario. If it is, then all of us writers are toast, because the readers of the future will all be amateur prompt engineers who just tell an AI to write what they want to read. A handful of big-name authors will hang on for a generation or two, just on the strength of their brand, and a very small cottage industry will emerge for authentically human-written books, but it will mostly be for hobbyists, like crocheting and perler beads.

If the pessimists are right, then there’s nothing we as writers can do except roll over and die, maybe after vainly shaking our fists at the sky for a little while. That’s what most of the folks over at SFWA are doing right now. But having worked with some of these generative AI tools for several months now, I don’t think the pessimistic scenario is going to play out.

Instead, I think that most readers are going to find that the kind of books they want to read are not the kind of books that they can generate easily themselves. A lot of amateur prompt engineers will have fun with it, just like lots of fans have fun with fanfiction right now. A handful of these prompt engineers will get good enough to generate the kind of books they want to read, and will turn into writers, but that’s not going to be most readers.

Meanwhile, writers will divide into two camps: those who embrace AI-assisted writing, and those who reject it. Except for a few big-name authors who already have a big readership, those who reject AI-assisted writing will find that they cannot write fast enough to keep up with all of the AI-generated books and stories that are going to flood the market—not a flood of crappy books, but a flood of passably fair to genuinely great books, as AI technology continues to get better.

Those authors who do embrace AI-assisted writing will find that the AI tools are surprisingly difficult to master, and require a complete retooling of their writing process in order to use them effectively—but after they do, they will find that these AI tools are incredible force multipliers that allow them to write significantly more, and write significantly better. They will be able to rapid release without burning out, and will thus find much more success in building their readership, since publishing a new book is the best way to market all of your old books.

But since (to my knowledge) no one has yet mastered these AI tools, for the next few months/years, most of the AI-assisted stuff that gets published is going to be pretty bad. The whole world is still at the bottom of the learning curve, and as we climb it, we will begin to see some really great stuff come out from those authors who are putting in the time and effort right now to truly master these AI tools, and to integrate them into their creative process.

This is why I personally am very excited about AI-assisted writing: because in a world where millions of books are published every year, discoverability is my biggest challenge, and the solution to the discoverability problem ultimately comes down to being more prolific. That is why I try to publish at least one thing every month, usually a free short story… but if I could publish a $2.99 novel every month, that would be so much better. By myself, I don’t write fast enough to do that—but with an AI, I probably could. And it’s not like the discoverability problem is going away—in fact, I expect it to become even more challenging, with tens of millions of books getting published each year as AI-assisted writing becomes mainstream.

Ultimately, though, I think that the key to a successful writing career in a post-AI world is going to involve building a community of fans around your books and your writing. Among other things, fannish communities help to humanize and personalize the connection readers feel with their favorite authors, which is probably why so many readers answered “NO!” to my initial question. The concept of a purely AI-written book probably feels just as threatening to those pessimistic authors as it does to those readers who love that human connection they feel with their favorite authors and fan communities.

But the fundamental reason I’m optimistic about this is because I don’t think there is, or ever will be, a book that is written purely with AI. Even if the author is more of a prompt engineer than a writer, there’s still got to be human involvement somewhere in the process. And if that person is also an experienced writer, who has successfully written several novels of their own, they’re going to be able to leverage that experience in a way that a pure prompt engineer can’t.

So it may turn out that the writers who are best positioned to succeed in the coming years are the ones who cut their teeth in the old world, before the AI revolution, because very few writers in the future are going to have the confidence and experience that comes from writing a novel entirely without AI. Thus, all of those writers who already have a few novels under their belt, and who take the time to truly master these AI tools and integrate them into their process, may be in the best position of all. That’s the optimistic scenario, and that’s the one I’m currently betting on, which is why I’m doing everything I can to master these AI tools.

More early thoughts on AI-assisted writing

It’s become something of a cliche that true writers write because they can’t not write, but as with so many other stereotypes and cliches, there’s a kernel of truth in it. I’ve been writing on and off since the 8th grade, and even during periods of my life when I wasn’t able to focus on writing, the writing itch would still come for me, and I would have to sit down and sketch out something, even if I never did anything with it.

Over the years, I’ve gotten pretty good at telling whether I’ve done enough to satisfy that creative urge that drives me to write, and whether that urge has been left unsatisfied. Yesterday, I realized that something felt off because that creative urge had not been satisfied—which is strange, because after only two weeks of working on this AI-assisted novel project, I’m already halfway done with the first draft. Indeed, yesterday I “wrote” (if that’s the right word for it) a little over 4.5k AI-assisted words.

Now, this should go without saying, but the point of writing professionally is not how good the creative process makes you feel, but how well and how quickly you produce a quality finished product. But I do think there’s a correlation between the two: that you are more likely to create a quality finished work the better your creative process satisfies your creative urges.

In the last three months of writing with AI, I’ve had some writing days that were better than almost any I’ve had in years. I’ve also had some very meh days, which is to be expected… but what isn’t so expected is this feeling of being creatively unsatisfied, which I usually don’t feel unless it’s been two or three weeks since I’ve done any writing. Something weird is going on.

How much of this feeling of creative dissatisfaction is due to the fact that I’m outsourcing a significant portion of the creative work to an AI, and how much of it is simply to be expected from trying to master a new and unfamiliar skill, which has kept me from satisfying that urge in the same way as I have in the past? At this point, it’s difficult to say. Probably a little of both.

Having worked on this for a while now, though, I think that the writers (and other creative types) who are going to succeed the most with AI-assisted creative work are the ones who figure out how to integrate the human element of their process with the AI element of their process, such that each one complements and enhances the other. Right now, everyone’s talking about how AI will replace us, but that’s really the wrong way to think about it if you want to learn how to master these tools.

I suspect that the way to master AI-assisted writing is not to try to get the AI to “do the hard stuff,” or replace some aspect of the creative process, but to integrate it within your creative process such that it enhances and magnifies your own, very human efforts. For that reason, I’m changing the way that I count my daily words so that I no longer make a distinction between words that I “write” myself, and words that the AI “writes” or generates, because the AI can’t generate words unless I give it enough to work with. Often, that means that I write a little, then generate a little, then tweak what the AI generates and write a little more. When the process is working well, it’s very difficult to say which parts were purely AI “written,” and which parts were purely human “written.”

But it’s still going to take a while to figure out exactly how to integrate AI into my writing process. As I continue to do that, I’m going to pay close attention to how it satisfies—or fails to satisfy—my creative urge to write, not because that is the end goal, but because I suspect that if my creative urge is not being satisfied, the AI-assisted stuff that I’m producing probably isn’t very good. It may not be very good even if my creative urges are satisfied, but if something about the process is missing, then something about the final product probably is missing as well.

Anyhow, those are some more of my random thoughts as I continue to experiment with AI-assisted writing. I was hoping to finish the rough/AI draft of The Riches of Xulthar before the end of the month, but I’m almost out of AI words for this billing cycle, so I’ll probably move on to the “humanizing” phase for what I’ve already written, which is where I retype the AI-generated stuff in order to pass it through what I like to call “the human filter.” Hopefully that helps to give the story a little more of my personal voice and style, so that it doesn’t read like something anyone could have generated with an AI. But I’m still working out and experimenting with that part of the process, just like all the others.

Some early thoughts on AI-assisted writing

I remember the early days of indie publishing. Back in 2011, when self-publishing was still a dirty word (and Kindle Unlimited wasn’t yet a thing), there were a LOT of opinions about “indie vs. tradpub,” most of them heated, and some of the arguments I witnessed at conventions like the 2011 Worldcon in Reno very nearly came to blows—which is to say nothing of the online discourse.

And yet, if I could go back to those times, I would tell my younger self not to spend so much time as a keyboard warrior on KBoards, because guess how much all that Sturm und Drang ultimately mattered? Not much. Instead, I would tell my younger self to focus on things like learning how to market myself and my books, learning how to build and run a business, and learning how to, for lack of a better word, network effectively, because the longer I stay in this business the more I realize that community is the thing that really drives everything.

(Though I still would have told him to delete his social media. That’s not community—it’s just a string of toxic echo chambers spreading toxic mind viruses that keep us all in the matrix.)

Anyhow, I say all of that by way of preface to my thoughts on AI-assisted writing, because I see many of the same things playing out right now. There are a lot of opinions floating around out there about AI, most of them heated, some of them nearly coming to blows. And yet, I can’t help but get the sense that most of the noise right now isn’t going to matter much after the changes shake out. Therefore, my working assumption at this point is that most of the noise and argument is just a waste of time. The stronger the opinion, the less of a difference it will probably make.

For example, back in 2011, the groupthink consensus over at SFWA was that indie publishing was terrible, indie writers were going to destroy literature, and the publishing world needed gatekeepers in the form of agents and editors. Five years later, they amended their membership requirements to allow indies to join. Fast forward to today, and the groupthink consensus over at SFWA is that AI-assisted writing is tantamount to plagiarism, that AI is going to destroy literature, and that the publishing world needs to do everything it can to preserve the human element of writing. Are you beginning to see a pattern?

For the last two months, I’ve been experimenting with some of these AI writing tools. I’m still in the very early stages of integrating AI into my writing process, but I really do think that in the not too distant future, almost everyone who writes will use some form of generative AI in their process. After all, how many people still write with manual typewriters? A few, but not many.

And think of how much computers and word processors changed how we think about writing, compared to the old days when everything was done by typewriter and everyone knew what a SASE was. For one thing, revising was much more of a pain, since it required retyping everything—no small thing for a long book. Therefore, books and stories were probably much shorter, and writers much better at producing clean first drafts. Also, the revision draft was a genuine second draft—no cycling through yesterday’s work to get yourself into the zone.

And how did the typewriter change how we thought about writing, back when it replaced longhand, or whatever the heck came before? It probably enabled skilled typists to write a lot faster—but that meant a lot less ruminating over the text. Was that why 19th century writing was so much more flowery and verbose: because the writing process involved spending more time with the language, and therefore immersing yourself more fully in it? Was the typewriter one of the major drivers of the pulp era, enabling fast writers like Hubbard, Howard, and Lovecraft to produce vastly more content for vastly more markets and publishers?

I really do think we’re on the cusp of a revolution in how we think about writing, and I suspect that generative AI is going to be at the heart of it. And it may turn out that the writers who cut their teeth in the previous era, when every word had to come from your own mind, will have an outsized advantage in the new, because we will be strong in areas where writers who come of age with generative AI will be weak.

Or it may be that most of us old-timers will cling to the old ways and never learn the new, excusing our willful ignorance in the name of “preserving our humanity.” It may be that we have so much to unlearn in order to make it in this new era that most of us won’t be able to make the transition.

What do you think, ChatGPT?

As an AI language model, I don’t have personal opinions or beliefs, but I can provide some analysis and insights based on the information you’ve shared.

It’s interesting to observe parallels between the early days of indie publishing and the current discussions surrounding AI-assisted writing. When new technologies or methodologies emerge, it’s common to see a range of opinions, often polarized, as people try to understand and adapt to the changes they bring. Over time, these opinions may evolve as the benefits and limitations of the new approaches become clearer.

AI-assisted writing tools are indeed gaining popularity, and it’s likely that they will become more integrated into the writing process for many authors in the future. Just as word processors changed the way we approach writing, AI tools have the potential to reshape the writing landscape, offering new possibilities and efficiencies.

However, it’s important to note that AI is a tool, and its impact on the writing process will depend on how it is utilized. While some fear that AI will lead to a loss of creativity or human touch, others see it as a valuable aid that can enhance productivity and offer new creative avenues. Ultimately, the extent to which writers embrace AI-assisted writing will vary, and different authors will find their own balance between leveraging AI tools and preserving their preferred writing methods.

Experience and skill acquired through traditional writing processes are likely to remain valuable, as they provide a foundation for storytelling and an understanding of narrative structure and language. Writers who have honed their craft in the pre-AI era may indeed have advantages in certain areas. However, it’s also important to acknowledge that new generations of writers growing up with AI tools may develop unique skills and approaches that complement the capabilities of these technologies.

The transition to any new era brings challenges and opportunities. It’s up to individual writers to explore and adapt to the changes that AI-assisted writing offers, determining how best to integrate these tools into their creative processes while staying true to their own artistic vision. The writing community, as you mentioned, plays a crucial role in sharing insights, experiences, and support as this evolution takes place.

In the end, the impact of AI on writing will be shaped by how writers choose to embrace and utilize the technology, and the value of community, collaboration, and individual creativity will continue to be important factors in the literary world.

What aspects of AI-assisted writing should we be focusing more on right now?

We should be focusing on exploring the capabilities and limitations of AI-assisted writing, understanding how it can be integrated into the writing process, and developing ethical standards and guidelines for its use. Additionally, we should be considering the potential impact of AI on the writing industry, such as changes to the job market and the possibility of increased accessibility to writing and publishing for marginalized voices.

Define “woke.”

Woke (WOHK): Adjective

Of or pertaining to the mass formation psychosis currently gripping the United States and most of the developed world. This mass formation psychosis is led by radical leftist ideologues and driven by social media addiction. Due to the collusion between major technology companies and the US government, there is also an element of state-sponsored propaganda and control.

The mass began to form in the late 2000s with the popularization of social media. As these technologies began to replace face-to-face human interactions, it created the preconditions of social isolation and free-floating anxiety, in large part due to the addictive nature of the algorithms, which promoted content most likely to induce outrage and anger in the end user (see CGP Grey, “This Video Will Make You Angry”). Once these preconditions were in place, all that was necessary to create the psychosis was a target or series of events to focus the attention of the mass.

The 2010s were characterized by several of these focusing events, starting in 2014 with the shooting of Michael Brown and the subsequent riots in Ferguson, Missouri, and continuing with numerous mass shootings such as Sandy Hook and Orlando, several landmark Supreme Court decisions on gay rights such as United States v. Windsor and Obergefell v. Hodges, and the rise of such controversial movements as Gamergate and the Sad/Rabid Puppies. The culminating event in the creation of this mass formation psychosis was the election in 2016 of Donald J. Trump as President of the United States.

Following Trump’s election, rogue elements of the bureaucracy, the administrative agencies, and the intelligence community (colloquially referred to as the “deep state”) successfully exploited this mass formation psychosis in an effort to hamstring the Trump administration and ultimately remove him from power. These deep state actors acted in collusion with the Silicon Valley technology companies that ran the social media platforms.

Because of the inherently left-leaning political bias of these Silicon Valley companies, this mass formation psychosis always had a leftist bent, and tended to promote radical leftist ideologues as its leaders. However, in any mass formation, the leaders are often just as caught up in the psychosis as the followers. This soon became manifest in the moral and rational incoherence of its leaders (see “What Is a Woman?”), and in the various internal contradictions of their own respective causes and beliefs. While “wokeism” is inherently political, it is not primarily characterized by a unified political ideology or movement.

The high-water mark of the mass formation psychosis occurred in 2020 during the COVID-19 pandemic, during which it took on all of the defining characteristics of a cult (see “What is the Covid cult?”). The George Floyd riots were the major culminating event, but Trump’s ostensible defeat in the disputed 2020 elections and his subsequent removal from power in the January 6th color revolution removed the central focusing element necessary for the mass formation psychosis. Since then, the deep state and political establishment have attempted several times to find a new focusing element for the psychosis, with such issues as climate change or the Russo-Ukraine war, but thus far these efforts have proven unsuccessful (see “I SUPPORT THE CURRENT THING!”).

At this time (March 2023), it is unclear how this mass formation psychosis will end. If Trump is re-elected in 2024, it may catch a second wind, or it may be replaced by the right-leaning mass formation psychosis characterized by Trumpism and the MAGA movement. It may fizzle out slowly, or it may be defeated by the growing demand for a religious revival in the United States. Alternately, it may prove to be the precursor of a much more dangerous mass formation psychosis, this time driven by AI and the outbreak of World War III. Regardless, the events of the next 12 to 18 months will determine which course our society will take.