Wherein I Find Myself Writing About Writing
Most everyone finds writing to be challenging (especially those who say they enjoy it). This is because writing is an intentional, thoughtful act. This is as it should be, but “AI” has recently exacerbated the misbelief that writing is simply an output. In fact, it’s a creative process that has merit in and of itself.
For some time I’ve been sitting on the idea of writing about my personal rules for the essays I publish. I use this space to work out ideas that get stuck in my head a little too long. I tend to focus more on the human side of software development, and such topics should be evergreen rather than ephemeral or trendy.
I’m kind of breaking that last rule by writing this essay (and it’s not the first time). As much as I’ve been waiting for the “AI” bubble to pop, I just can’t ignore the damage it’s inflicting before that day arrives.
It’s precisely because I interpret the act of writing as human thought made tangible, after considerable effort, that I find “AI” so inappropriate and unqualified for the task. Writing is thinking. As long as machines are incapable of the latter they should never do the former.
Unfit For The Job
When we understand the mechanics of a Large Language Model (LLM) it’s evident they’re not thinking. LLMs can do certain things well enough, but we have to know their quirks and limitations and tailor our use accordingly. Laurie Voss reminds us that LLMs are “good at transforming text into less text,” which is nowhere near the same thing as writing:
Is what you’re doing taking a large amount of text and asking the LLM to convert it into a smaller amount of text? Then it’s probably going to be great at it. If you’re asking it to convert into a roughly equal amount of text it will be so-so. If you’re asking it to create more text than you gave it, forget about it.
There’s no magic here. If I ask an LLM to produce an entire document out of thin air it doesn’t actually do that—it needs excessive prompting and data. The most original writing any LLM has ever produced is still derivative. By definition an LLM will never produce a wholly original idea.
Most of us aren’t inventing cold fusion or faster-than-light travel but I’d like to think at least some of my energy at work is spent on unique or novel ideas. I won’t be getting that from a document my coworker passes to me while mentioning they used “AI” to write it. Laurie Voss again:
LLMs only know what you tell them. Give it a short prompt and ask for long text and you will get endless waffle, drivel, pointless rambling, and hallucinations. There is no way to get an LLM to perform the thought necessary to write something for you. You have to do the thinking. To get an LLM to write something good you have to give it a prompt so long you might as well have just written the thing yourself.
I’m not abdicating this much responsibility (i.e., thinking) to a machine that’s unfit for the job. I likely wouldn’t do so even if the effort were near zero. As it is I have to micro-manage the LLM, validate its output, and also ignore the ethical, environmental, political or economic impacts of it all? Great tool, thank you!
We devalue those jobs traditionally centered around writing—copywriters, translators, journalists, and even developers—only to discover later that these tasks require true skill and humans have to be brought in to fix the machines’ work.
Most people assume LLMs to be more sophisticated than they are (i.e., they turn “text into less text”) because our general perception is that writing—almost all writing—is a chore or an end product. We ignore the complexity—and value—of the process itself and just want a document in our hands as soon as possible.
Side Effects May Include…
The so-called promise of “AI” really capitalizes on this notion and incentivizes us to optimize for the wrong thing. This is affecting us in many ways that have only recently become apparent.
We really don’t have consensus on its utility. The most powerful people in society like “AI” more than the rest of us. A recent study by Dayforce found that 87% of executives are using “AI” regularly while only 27% of employees are. Holy shit! We know that executives are somewhat divorced from reality and don’t really understand the day-to-day of their own businesses. Most leaders aren’t even aware this gap exists at all!
Workslop is on the rise and we hate each other for it. The phrase “workslop” is truly inspired. The more we use “AI” to create passable facsimiles of written work, the more everyone else has to deal with it. Productivity suffers and resentment festers. The irony is that your workslop is consuming everyone else’s time and energy. Even worse, if you do this to your coworkers, they’re left struggling with how to tell you to knock it off. It erodes mutual trust and respect. Workers haven’t been this annoyed with each other since we started reheating fish in the company microwave for lunch.
“AI” is altering how we communicate. When it comes to language, we’re screwed no matter what. We mimic an LLM’s style (accidentally or not) just because it’s part of the zeitgeist now. We avoid the use of certain punctuation or turns of phrase to distinguish ourselves from the machines. Either way, we’ve ceded territory that belonged to us. I, for one, will have my precious em dashes and Oxford commas pried from my cold, dead hands.
It’s fucking lazy. I’m sorry, it just is. I can’t tiptoe around this point. There’s the good kind of lazy and there’s just plain ol’ lazy. Know the difference.
“AI” makes us feel inadequate. When “AI” appears in our software toolbars and panels there’s a heavy insinuation that I don’t know how to do my fucking job. Nowadays I’m not even granted the dignity of a blank canvas. A new document comes pre-loaded with “AI” calls to action (e.g., “Generate document”, “Help me write…”, sparkles everywhere). The iconoclastic designer Mike Monteiro examines this (and more) with zinger after zinger in his talk “How to draw an orange”…
Every single human being has an intrinsic capacity to create—write, draw, sing, dance, glue things to other things—from a very young age. Sadly, a great many people “grow up” and lose this over time. It never goes away entirely, however, and we can reclaim it. It’s just hard-earned.
Choosing Friction Can Be Cool, Man
I don’t always know where I stand, down to the last detail, on appropriate “AI” use but I agree with this sentiment from Frank Chimero:
Where do I stand in relation to the machine—above it, beside it, under it? Each position carries a different kind of power dynamic. To be above is to steer, beside is to collaborate, below is to serve.
Well, if we want to be “above it” that doesn’t come for free. Any degree of creative autonomy (not just writing) comes at some cost but that’s the point. We have to work those muscles continually. We have to do some hard work. A colleague of mine, Jenny, believes we should be choosing friction.
If you can push a button and get a screenplay or a symphony or a painting at the cost of a nominal subscription fee that does not begin to cover the true expense of this technology to the world, if you did not have to at least subconsciously face your mortality and decide that the pursuit of this piece of art is what you want to spend your finite time on, if your desire to speak is not strong enough to overcome the friction of learning how to speak, is it something that needed to be said?
Bravo! Writing is a deliberate decision that reflects and refines our thinking. There’s no optimization for that. Even that work email you’re composing presumably needs to be sent, and therefore friction is a prerequisite. But the process itself has value because proper space was reserved to just think.
“AI” can really muddy the waters. An LLM can’t think and therefore it’s not a tool that’ll encourage us to think for ourselves, at least not by default. Still, the grifters will pretend otherwise. In the meantime we’re left wringing our hands and having multiple existential crises about it. Cultural norms and beliefs are forming around this, slowly, and not all of them are favorable. We enjoy calling it “slop” for a reason! This feels to me like a very good opportunity to make “AI” utterly gross and uncool.
Do Not Fuck With Miyazaki
We can take our cue from the great director and animator Hayao Miyazaki. In March 2025, OpenAI boasted about its new image generation capabilities by encouraging people to turn their selfies into anime characters according to Miyazaki’s famous style. Everybody did it, had a chuckle, and it was offensive.
Miyazaki’s animation is done by hand. His films require hundreds of thousands of drawings each. He wears a goddamn apron to work! The results are magical and this is precisely what makes him one of our greatest storytellers. Of course, “AI” devoured his entire body of work and now you can skip over all that fluff. Output over process, right? This is the kind of shortcut that Miyazaki has spent his entire career resisting on principle.
Some years ago a few young upstarts showed Miyazaki an AI model that could automate movement for computer generated figures. There’s a deadly pause before he responds to the demo and absolutely murders them with his words. You can watch for yourself—here’s the infamous video.
“Well, we would like to build a machine that can draw pictures like humans do.” I’m sorry, what was that? Did they really say that in front of the king himself?!? Get the fuck outta here.
What Only Humans Can Do
Another director, Bong Joon Ho, recently offered a very direct opinion that, um, straddles the line:
My official answer is, AI is good because it’s the very beginning of the human race finally seriously thinking about what only humans can do. But my personal answer is, I’m going to organize a military squad, and their mission is to destroy AI.
I’m not sure about the military squad but I agree that the silver lining could be a greater awareness of our human potential. I’m hopeful but we have to outlast the CEOs, grifters, boosters, and cheapskates to get there.
Contrary to what my chosen field and university degree might imply, I very much prefer working with people over machines. In my experience the most challenging part of creativity in group settings is plain ol’ communication, and much of that is written. Let’s avoid shortcuts in our writing that will do more harm than good.
As for my personal writing on this site you should consider this essay, and many more to come, as my act of protest against the “AI” hype. I will happily labor over each word, sentence, and paragraph. Is it hard work? Yep. Is it expedient? Nope. Is it worthwhile? Maybe don’t ask anybody on Hacker News or Reddit. But I’m not writing for them, am I? I’m writing for me and my very human mind.