Resonating with Charisma and Igniting Enthusiasms
Why I am not worried (but often irritated) about AI writing bots.
Last week I received a text message. It was clearly part of a mass text from… well I have no idea who it was from. I’ll come back to that. What struck me right away was that this text had clearly been written by an AI chatbot:
I will give you $100 if you can answer any of the following questions:
What person or organization sent this?
Why?
What are they wanting me to do?
This text clearly went out to a lot of people, almost certainly thousands. Someone paid to have it go out and, presumably, someone was paid to generate the copy. Worst of all, someone approved the copy! An actual human looked at the text above and said, "That looks good. Send it."
Friend of Matter at Hand Jacquayle Dailey suggested a few weeks ago that I should write about AI. I don’t know that I have much to say about AI in general, other than it seems like a classic example of people pursuing a technology without really understanding what they are doing, why, or what the consequences might be.
I do, on the other hand, have pretty strong opinions about AI chatbots, like the infamous ChatGPT. I’ve written before and given my opinion on pastors using ChatGPT to write sermons or generate email copy. In short: I wish they wouldn’t. Today, though, I wanted to write a bit about that text message above, and why I’m not really worried about AI replacing writers.
I think AI-generated copy is easily recognizable for a few reasons. First, because as the example above shows, AI-generated copy tends to use a lot of words to say absolutely nothing. In that way, chatbots somewhat resemble a college sophomore padding an essay to meet the length requirement.
Second, it usually sounds like what a bad writer thinks a good writer sounds like. Look again at the text above. Words like “resonates,” “charisma,” “passions,” and “ignites” all feel completely out of place. Then those words are jammed into phrases that sound forced and weird. The person responsible for this copy clearly thought differently. Why say “Goose Creek has a lot of heart”1 when you can say that its heart “resonates with charisma”? Why ask people “what interests you” when you can say, “what ignites your enthusiasms”?
Because you’re a human, that’s why. As a human, you know that if anyone ever asked you “what ignites your enthusiasms?” you would assume they were either (1) joking, (2) learning English, or (3) an idiot. But the person who generated this copy can’t tell the difference between good and bad writing. They almost certainly think that good writing means using big or unusual words. They probably also think that complicated sentences are better than simple ones, even if they are harder to understand.
What does the person who actually wrote this copy think? Let’s ask him:
AI chatbots have almost endless databases to pull from. They have access to infinitely more information and can draw from it infinitely faster than you or I can. What they do not have, however, is judgment. An AI chatbot has no idea what good writing is because it doesn’t know what “good” means. One way that I describe AI chatbots is that they are like omniscient five-year-olds. They “know” almost everything there is to know, but that doesn’t mean they’re able to actually do anything worthwhile with it.
As a writer, I hate seeing people use these things even if I understand why they do. I don’t really worry about it, for the same reason a master craftsman doesn’t lose sleep over Walmart having their own line of furniture. There will always be cheap junk, and a lot of people will buy the cheap junk. There will also always be a market for people who do things with excellence.
“But Jack, pretty soon AI chatbots will be able to write better than humans.” I kind of doubt it, but I suppose my response would be to ask what you mean by “better.” Perhaps the copy would need fewer edits; maybe the lines would be a bit neater. But in the end AI can only “generate” and never truly create. I believe actual creation requires a soul. I just don’t think true writing can emerge from a machine that doesn’t even have enthusiasms to ignite.
All due respect to Goose Creek, but even this would be a bit of a stretch.
When it comes to technological savvy I'm a "tiny" bit above average. However, when it comes to AI I haven't a clue, which is why this piqued my interest. It's amazing that we can utilize the knowledge to accomplish much. It's frightening to think of the damage it could do when used for nefarious purposes. A thought that came to my mind regarding its soulless and cold writing, and your analogy of the five-year-old, is that AI writes without having the benefit of experience. The writings and songs that stir the emotions are those that reflect experiences that shape us and move us. The sorrow and joy, the hardship and pain, are things AI has not experienced, and that absence renders its writing lifeless. Just a thought.
Thanks for grabbing my attention, Jack.
You hit the nail on the head about "AI writing bots." The biggest thing with AI, though, is that I feel like terrible AI writing and generated art is making people opposed to the idea of AI in general. There are legitimate uses for AI, but they're only valuable when a human interprets the results.
For instance, I was recently having trouble coming up with ideas for a social media posting schedule. I then decided to use AI. I told it about the organization, defined the type of audience I'm working to reach, and gave it some constraints on the types of posts. I then asked it to develop a 30-day posting plan. It proceeded to spit out a posting schedule that was... decent. If I had blindly gone with its 30-day plan, it would have been terrible. Instead, I took that posting plan, went through each day, and made tweaks to customize it for my audience. A few post ideas it gave I threw in the trash. Others were ideas I hadn't thought of but were perfect for my audience.
The key with the 30-day plan was that I didn't have it actually generate the content. I essentially tasked it with the initial research for me to then build upon. It did the same task as if I had hired an assistant to pull research for me. The results weren't perfect, but they helped me get past the "blinking cursor" part of ideation and instead gave me things to build upon.
So is all AI bad? No, I don't think so. But when you try to make the AI do all of your work for you, then you're going to fail to ignite anyone's enthusiasm.