A Blog Post By a Real, Live Writer

I can’t believe I have to say this, but let me be abundantly clear: any writing with my name on it will never include work produced by AI. Ever. Ever ever.

A headline caught my eye this morning: “WordPress has a new AI tool that will write blog posts for you.”

WordPress is hardly the first company to jump on the AI bandwagon. The Notion app (which I use daily for project management, and which I’ve loved) rolled out content-generating AI months ago. When you toured the tool, it suggested trying it out for writing emails, writing blog posts, and even writing monthly reports.

I have a lot of suspicions around the quality and security of such a tool. I’m one of many with the same intuition. As someone who regularly works with government entities as well as private companies, I would urgently advise that you check with your IT department (and perhaps Security or Legal) before even considering using AI tools in your work. But there’s a concern at the center of everything I’ve seen about the explosion in AI development, and it’s this:

What problem is AI solving?

I know as well as anyone else that the quality of AI output will improve, likely exponentially. Right now it’s comical to see the uncannily slightly-off photorealistic images, or the laughably simple prose or straight-up incorrect facts spewed by text generators, but that will all change. AI will learn how to mimic an open-mouthed smile without stretching the jaws of its subjects in hideous python grins. It will better ape the turns of phrase of skilled copywriters. It will likely soon be able to generate a rote email update that wouldn’t warrant a second thought.

But why?

If an email can be adequately generated by a machine, then why are we reading that email?

If a movie could be entirely fabricated by a computer, who is watching it?

The problem is the evolution of media into “content.” Companies putting out larger amounts of public relations material got more attention than those that didn’t, and Content became King. Articles, press releases, essays, blog posts, newsletters, ads, all fall under Content. TikTok videos, Twitter posts, Instagram stories, all fall under Content. More is better, and Content is for consumption. To get the biggest audience possible you have to feed the beast. Give them something, anything. Generate that Content! Come on… generate! Turn that fuel into Content Ingots!

If the producers of media are just “generating content,” then why wouldn’t you want to find a way to make a free robot do it instantly?

The problem that AI is solving for companies and organizations is one that the companies and organizations created for themselves.

On the subject of internal business content: if a robot can generate a monthly project report, then why is that report necessary?

Let’s break that down. You work as a project manager at a company that produces ingots, which are purchased by clients who use them in custom work. You manage a team of ingot engineers who produce blue ingots for specific clients. As part of that role, you write a monthly project report that you submit to your project executive. Together the two of you present the report in a monthly conference call to c-suite leadership. The report includes: a general overview of the project (the necessary background/context for any reader to be able to pick it up and understand the document’s purpose); details on the work performed during that month; details on successes and challenges; notes on staff changes; information about meeting, exceeding, or falling behind on the project schedule; and plans for future work (which will usually be worked into the next month’s project report). You get the go-ahead from your IT team to use certain approved AI tools, and decide to prompt one to generate text for each of these report subsections. The text comes out in under a minute in clean, simple English. A task that used to take you days to put together (not whole days, but gathering the data and synthesizing it into written words takes multiple sessions over several days) is done in an hour of generating, finessing, and submitting to your PX for approval.

Again I ask: why?

If a robot-created report sufficiently fulfills a requirement, my argument is that the report has either been inadequately used, or is unnecessary to the work.

Robots only generate content. AI is a misnomer for the tools bandied about right now; they’re only efficient reconfigurations of existing content. Content created by people. The tools are not truly intelligent. They’re not creating anything. There is an entire discussion to be had on the legality of scraping content from real human creators and who owns the resulting product that I won’t get into here. The point I want to get at is that the AI tools some people are raving over right now simply generate rearrangements of the text that is fed to them. It’s an amount of text (or images, for AI image generators) that no human in a hundred lifetimes could consume, so they have a vast library to draw from. It is key to understand that AI only generates content based on content that was created by a human, and has now been copied and bastardized in a suspect way. The content that AI puts out is neither inspired nor driven by human desire. Even a monthly project report on blue ingots, when authored by a human, will contain the core of human creativity in selecting what information to include to spark the interest of its audience.

The c-suite executive reading the report can’t go back to AI and ask for clarity on a point made in the report. They can’t argue over statements made. If the report generated by the AI is sufficient as-is, then it’s just data. It’s just that mushy “Content.” It’s facts to check, and it probably never needed to be written by a human in the first place.

If, on the other hand, the reason you read the report is to get the project manager’s analysis of their progress and the future of the project, then the AI as it currently stands cannot generate a sufficient product. Because it’s not the project manager, whose thoughts, creativity, and experience you are trying to solicit.

To go back to the hypothetical AI-generated blog post: who is the audience that is believed to be reading these posts? I would imagine that the companies and organizations believe the audience needs to understand some facts. Maybe there is a new purple ingot that the company wants to advertise; they might want a variety of blog post styles, including a press release–style announcement, a complete breakdown of new features, and maybe even a white paper explaining in technical terms how purple ingots will be influential in the larger ship-building industry. Maybe a news organization wants to keep its readers updated regularly on a multi-use development in their city over the space of several years, with quarterly descriptions of the work posted to their blog. All of those purposes are, to my analysis, the generation of facts in text form. Perhaps an AI could create a sufficiently readable blog post for these purposes. Perhaps.

But what about a blog post from an author you like to follow (ahem, ahem)? What about an influencer, whose family life you have enjoyed reading about for many years? What about a politician communicating with their local constituency? What about—stay with me here—ethical and humane journalistic analysis of current events? What about companies that use their marketing as part of their vital dialogue with their customer base?

AI is not meeting those needs. Those needs come from human interest, and that’s not solvable by AI.

Companies and organizations that want slush content are getting it from AI, and that’s what they’re giving you, the audience. Is that what you want for yourself?

I’m a writer by profession. I don’t say all of this out of fear for my job—I say this out of complete confidence in my work and its vitality. I say this in complete confidence in the work of so many other writers out there.

A final note on the idea of AI generating fictional content: I am not amused by the people claiming to have written dozens of AI books, I’m not impressed, and I’m not concerned. Five minutes spent checking into such claims produces nothing more than the fact that ChatGPT can help you flood book-selling websites with really terrible “books” that no one will read. The question arises: what if the AI got kind of good, though? What if it could truly mimic the writing style of your favorite author? Hey Erin, isn’t The Secret History one of your favorite books? Aren’t you always saying you wish you could find other books just like it, but there’s never anything really just like it? Well, what if ChatGPT finally figured out how to mimic the writing of Donna Tartt, and could put out in a day five other books purporting to be just like The Secret History?

I wouldn’t want it. Real talk: it can’t do that now, and it won’t be able to do that in the future. It can’t truly mimic the writing of a real person because that’s not how writing works. Writing is an act of creation driven by human ingenuity. This post has turned out a lot more—I don’t know if spiritual is the right word—perhaps more hopeful in its belief in humanity and our importance to one another than I had expected when I set out to write a disclaimer. The more I wrote, however, the more I felt that it was necessary to get my statement out here. It was not hard to put together because it’s so simple at its core: AI isn’t solving any problems for me.
