AI writing is easy…and that’s a problem
You’ve been working with AI a lot lately and have figured out how to use it to help improve your writing. Emails, blog posts, website text, social media, fiction, essays, resumes, white papers, slide decks, report summaries, book reports, homework, sales copy…you name it.
The basic process goes like this:
- Identify what you need to write.
- Ask the LLM AI to write a basic draft for you.
- Go through it afterwards to make sure it’s factual and makes sense.
- Adjust the draft to make it sound more individual.
- Done!!!
And it sorta works. It’s good enough, because that’s what LLM AIs do. They get you from “this is too much of a pain in the ass to do now, let alone regularly” to “good enough.”
There are all kinds of best practices being developed to help you improve this process; the best practices vary depending on what your goals are, but mostly it’s just tips to let the LLM do 80% of the work, then spice up the results with a little bit of your Actual Human Personality ™.
And that advice isn’t wrong.
Going from “I’m not getting my writing tasks done” to “I’m finishing my writing tasks with the help of AI” feels great. It’s easy to do and it’s easy to do consistently.
But…
It is easy, which means everyone can do it, or at least can talk themselves into thinking they can do it.
Lots of people get judgy about AI, and we’re gonna set aside most of those concerns for now: yes, AI has environmental costs, and yes, it can make things so easy that you stop thinking for yourself.
Right now we’re gonna focus on the fairly common criticism that using AI makes you sound sorta like everyone else.
AI is so easy that anyone can use it. Using AI doesn’t give you an advantage…it only gets you to average.
Which means that each and every one of your competitors is using it to help them target the same niches you do.
Or soon will be.
Improving AI writing is straightforward…and that’s a problem, too
I watched the rise and fall of a few small startups that wanted to jump onto the LLM AI bandwagon early on. At the time, I was looking for a marketing job, and I was really interested in what I might be able to do writing marketing text with AI while working at a business that was pro-AI.
I knew enough about the data analysis behind SEO to understand that using LLM AIs to produce more SEO-optimized content, and produce it consistently, would give an advantage over competitors, and I was pretty sure I could make a big difference in making sure the LLMs wrote good marketing content.
The basic process was supposed to go like this:
- Identify what keywords would be easiest to target while also attracting the most buyers for the business.
- Ask the LLM AI to write a basic draft that optimized for those keywords.
- Review the draft to make sure the marketing text was factual and made sense.
- Adjust the draft to make it sound more individual.
- Post it to the blog and/or social media, and…done!!!
My process worked. I could write relatively decent articles. (Trying to get the LLMs to analyze fiction was harder.) Being able to talk about the process of using LLMs from a marketing perspective got me some interest from LLM AI startups developing business tools and services. Their intent was to bring their tools and services to market quickly and get a leg up on other startups trying to do the same thing. This meant that even though no startup was essentially different from the others doing the same things, they were first.
Being first meant they would have the breathing space to develop something that actually provided value that other startups couldn’t—or so each of the startup founders separately assured me.
These talks were interesting but never led to a job offer or, as far as I know, an actually successful, ongoing LLM AI startup.
I’m pretty sure I know why I didn’t get a job offer.
I kept asking, “What is your unique selling proposition?”
(The unique selling proposition, or USP, is what makes your business unique to its market. Not necessarily unique to the whole planet or anything, but unique to the people you’re trying to sell stuff to. What sets you apart? Why should your customers buy from you and not someone else?)
“What’s your USP?” is a basic marketing question; it’s not like I was asking something that any other marketing professional wouldn’t have asked…after they got hired. But it was not a question that any of the startups were prepared to answer, other than to say that they were going to be first.
There was another pattern I noticed.
Each of their services was designed to help potential customers navigate existing problems: getting to the top of search engine results by spamming articles about every possible variation of a topic, automating business intelligence answers instead of manually producing multiple reports across a number of departments and meetings, or solving complex logistical problems in the field, such as moving ripe tomatoes to restaurants on time.
And they were solid ideas in a world where it’s a huge pain in the ass to solve those problems, and an even bigger pain in the ass to automate the solutions to those problems.
But this is not that world anymore, and the fact that small startups without long-term experience in their respective fields could develop their solutions so quickly meant that, well, anyone could come up with those kinds of solutions.
What if your LLM AI startup’s cool idea used to be a great business idea, but now is a great idea that can be handled in-house in most businesses by their existing staff?
Or by an individual with a free or cheap LLM AI account?
Which ideas are good ideas, when the pain in the ass factor of solving a problem goes away?
Why would something that’s easy and straightforward be a problem?
I’m not going to say don’t use LLM AI to write for you!
I’m not that kind of person. Whenever I see a statement like that, I start questioning it.
Don’t use LLM AI to write for you? But what if you’re at the point where you’re not getting the writing done at all? What if you’re at the point where you just need a push to get started? What if you’re tired and the work is due and it doesn’t need to be brilliant or creative, it just needs to be done? What if it’s a type of writing you’re not really familiar with and you do better seeing examples before you start writing on your own?
I can think of a lot of questions that can poke holes in the statement don’t use LLM AI to write for you.
The trick is to also ask questions about the opposite statement, do use LLM AI to write for you.
Do use LLM AI to write for you? Won’t you sound the same as everyone else? Aren’t there potential issues with copyright? What if you write something effective but it gets devalued because people find out you used LLM AI to write it? What if everyone else uses LLM AI to write and nobody values anyone else’s writing because it’s so easy to come up with the same insights and entertainment on your own?
This process is called “questioning your assumptions.”
Most people are pretty good at questioning other people’s assumptions. It turns out questioning other people’s assumptions is a really great way to delude yourself into thinking your assumptions are correct—because if everyone else’s assumptions are wrong, then yours must obviously be right.
We all do it.
It’s really hard to let go of your own assumptions.
For me to admit that I’m wrong about even the smallest thing takes at least three hours. Bigger things take longer.
The truth is…usually a mess.
You know what LLM AIs are good at?
Simplifying concepts. Summarizing text. Following directions logically. Running complex calculations that are biased toward useful or desirable results (that is, telling people what they want to hear or giving them an easy, simple, or obvious answer). Mixing two existing ideas together. Systematic thinking, such as logic or programming.
Any situation where a user describes a situation, then asks a question in the context of that situation.
I mean, those things aren’t bad in and of themselves. But they’re easy for LLM AIs, and now they’re going to be easy for everyone who uses LLM AIs, too.
You know what LLM AIs are bad at?
They’re bad at systematically evaluating what their users tell them.
That is, at questioning assumptions.
AI-proofing your writing is simple in theory…but not easy or straightforward
Here’s my theory: if you want to AI-proof your writing, don’t go for the easy or straightforward answers. Ask questions instead.
Specifically, question your assumptions.
But instead of asking a scattershot of questions (the way I did above), try asking one specific question:
“Why this? Why not something else?”
Let’s look at an example.
Recently, I worked my way through an online programming course for Python; it was convenient, cheap, and in a familiar Duolingo-like format. I could answer a few questions and write a few lines of code every morning, with immediate feedback.
But…
I found myself constantly complaining about the course. The explanations given in the lessons worked well enough for the first few five-minute chunks, but as soon as the class moved into more complex ideas, I’d be yelling at the app because the class was built on the assumption that I already knew the assumptions underlying the Python programming language. (Which were never stated.)
Here’s an example.
I’d get an instruction to use parentheses or brackets…but never an explanation of why some commands necessitated parentheses and others brackets. Whenever a new technique was thrown at me, I wouldn’t understand why you’d use one or the other; I was expected to memorize old techniques and apply them without understanding. No explanations! Not even a “because nobody was thinking that far ahead and mistakes were made, so just live with it, okay?” (This, by the way, is a simplified explanation of how the English language works.)
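For what it’s worth, the rule the course never spelled out fits in a few lines. Roughly: parentheses call functions and group tuples, while square brackets build lists and index into sequences. Here’s a minimal sketch (my own illustration, not from the course):

```python
# Parentheses: function calls and tuple literals.
numbers = (1, 2, 3)      # (...) here is a tuple literal
total = sum(numbers)     # (...) after a name is a function call

# Square brackets: list literals and indexing.
scores = [90, 85, 77]    # [...] on its own builds a list
first = scores[0]        # [...] after a name indexes into it

# Same symbols, different meanings depending on position: exactly the
# kind of underlying convention the course skipped over.
print(total, first)      # → 6 90
```

None of this is hard, but the point stands: without someone stating the convention, you’re left memorizing which bracket goes where.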
I finished the course and got a cute little certificate and still have no idea how to code anything in Python.
I’m sure there’s a book out there that can walk me through the process and give me the explanations I want. But each time I consider picking up a book, I ask myself:
“Why is this the right solution? Why not something else?”
Let’s say someone recommended a particular book or course to me, one that they’d used and liked back when they first started. In fact, it was the book that made them realize that with Python, no book or class can take the place of hands-on experience making actual programs that perform real tasks, rather than exercises in carefully controlled learning spaces.
But I’d still be asking myself, “That seems right…but why not something else?”
Why not a different book, one more recent, for example? What if there’s a more current book on Python programming that’s more highly reviewed? Why not choose that instead?
Most people stop there, if not sooner: it’s too much work to answer the question.
But the people who keep questioning their assumptions tend to end up in interesting places, like, “Are you sure you want to learn Python specifically? Python is the equivalent of English, kludged together from several other programming languages, and while it tries to be adaptable at making different systems work together, it also has no tolerance for error and may not be beginner-friendly.”
If you ask an LLM AI what book or class to recommend to a new programmer who wants to learn how to code in Python, then you’ll get the answer you asked for.
And that’s all.
Questioning assumptions is really only a slightly more adult version of a two-year-old asking why, and it’s just as maddening when applied to your own assumptions as the relentless questions of a toddler.
Most of us have been trained to stop asking.
In the end, you can’t AI-proof your writing if you’re writing on autopilot
Here’s how I wrote this article:
I started out with the observation that yet another LLM AI small business startup was struggling with the same issues as the other LLM AI startups I was familiar with.
That idea got me asking, “Why are these businesses so locked into the idea of being first, to the point where they blow off questions about what their USP really is? What else could they be doing to AI-proof their AI businesses?”
I think those businesses were locked into the idea of being first because being first used to be a viable business strategy when the PITA factor was high enough that it was hard for their customers to engineer their own solutions.
Those businesses were blowing off the question of what their USP really was because they didn’t really have one. They wanted to be unique…but creating solutions to formerly intimidating PITA problems was so easy that the best they could hope for was to be first, so they could dominate the market.
Too bad everyone had the same idea.
What else could they have done to AI-proof their AI-based businesses?
They could have focused on determining which customer problems could never be solved by mediocrity or mere efficiency, taken the time to solve those problems well, then used AI to automate the parts of the solution that didn’t need their assumptions questioned.
For example, the logistics business could have developed reusable packaging for shipping heirloom tomatoes safely, so it was easier to bring tasty tomatoes to market, versus the tasteless, overbred tomatoes being provided by other distributors. Then they could have hired someone who was talented at building relationships with heirloom tomato growers in the area.
And then they could have used LLM AI to look at the logistics of harvest and transport to ensure that the tomatoes were reaching the most profitable markets in the most efficient manner possible.
Similarly, with writing (because that was the original point here), writers can’t write stuff that LLM AIs can do easily and expect to stand out just because someone fact-checked the piece and sprinkled a little human warmth over the finished draft.
That sort of writing, even when it’s 100% human-crafted, comes across as formulaic.
An LLM AI will never be able to do anything more creative than follow a formula, even if it’s not straightforward what that formula is.
LLMs are math. Formulas are what LLM AIs…do.
There will always be a market for fiction and non-fiction of all types that fits the mold, follows the formula, scratches a well-known itch.
Scratch a particular itch and it doesn’t matter how well, or how poorly, something is written. It only matters that the right people see it at the right time. First.
Sometimes formulaic writing succeeds, whether it’s an old type of spicy romance novel repackaged for a new market, materials for a get-rich-quick webinar, or blog articles that help you dominate your SEO keyword niches.
After a while, the people who once relied on being first tend to fade from view.
That trend can only get more pronounced when LLM AI-generated text is thrown into the mix and formulaic writing gets easier, faster, and cheaper to produce at mediocre levels by people who don’t have much experience writing or using AIs.
I dunno. I can’t really be against that sort of thing. Maybe if a writer can be replaced by a computer, they should be. But maybe I’ve written enough press releases and white papers that I’m biased.
Meh.
Incidentally, I’ve rewritten most of this article several times.
Every time I read through it, I asked, “Why did I write this? Why didn’t I write something else?” and played with different ways of saying the same thing…often until I realized that I wasn’t saying what I needed to say at all, and changed directions to say something else entirely.
I’m feeling pretty solid about the article right now. It feels like my answer is both honest and, when you lay it out, not very impressive: a lot of hard work.
So why is this my answer? Why not something else?
Because questioning assumptions is something an AI cannot do and may never be able to do; it’s difficult for humans at the best of times, let alone for a math program that isn’t self-aware. Examining your assumptions until you reach an unexpectedly rich answer is the one pain in the ass task that I can’t see ever going away.
I might be wrong. But I doubt it.
At any rate, I wrote all that, then asked ChatGPT to outline a blog post on how to AI-proof your writing.
Here’s what it came up with:
AI Is Everywhere—Here’s How to Keep Your Writing Relevant
1. Introduction
- Why AI-generated content is everywhere.
- Why writers worry about being replaced.
2. What “AI-Proof” Really Means
- Focus on irreplaceable human value.
- Not about beating algorithms, but about deepening connection.
3. Strategies to AI-Proof Writing
- Use personal experiences and unique perspectives.
- Build emotional resonance and nuanced storytelling.
- Provide well-researched context and expert insights.
- Develop a distinct, authentic voice.
4. Building Reader Relationships
- Encourage interaction and community.
- Prioritize trust and authenticity over trends.
5. Conclusion
- AI is a tool, not competition; humanity is your edge.
Unsurprisingly, the outline sounds reasonably common-sense and predictable. And, also unsurprisingly, it doesn’t actually mean anything once you start digging down under the surface.
How do you deepen connection? If deepening connection is so effective, why is stirring up controversy (that is, increasing disconnection!) so effective in getting search engine traffic? Why not write something controversial instead? What context counts as well-researched? What if you’re developing new ideas and techniques that don’t have a lot of research to back them yet? What constitutes an expert insight? Because I know a fair few experts whose insights are utter crap.
The outline throws around a few buzzwords, that’s all.
When you read the latest article on how to AI-proof your writing (or how to set up an AI-based side hustle that makes you $10,000 per month, or whatever) on Forbes or Tom’s Guide or TechRadar, ask yourself, is any of this surprising?
Does the advice seem easy, or does it make you question your assumptions?
Does it seem like common sense, or does it make you take an uncomfortable look at what you’ve been doing lately?
Does it tell you what you want to hear? Are you nodding along with how much you agree?
Or does the article ask you to hold yourself accountable for writing sloppy, predictable work that’s so mediocre that even AI could write it?
Easy answers can make for great clickbait.
But, like being first, they never last.
If you’re interested in more philosophical musings from me, you can either subscribe to my newsletter for updates, or check out my blog archives, including the philosophical musings category.