Why good writing still matters in the age of AI

A 12-hour road trip, a wild podcast, and a big takeaway

Issue 106

On today’s quest:

— SO MANY new models and features
— Why good writing will still matter
— From ‘bubble’ to ‘not enough compute’
— Don’t let agents work on things that aren’t properly backed up
— Wikipedia is blocking two billion bot requests per day
— The Tshirtbooth problem
— Self-publishing platform adds fees amid onslaught of AI submissions
— AI continues to get better at writing nonfiction
— Markers of quality will change
— Professional organizations publish AI guidelines

SO MANY new models and features

The biggest news of the last month has been the sheer number and quality of new model and feature releases.

If you just skim this overview, the important big-picture point is that the speed of releases seems to be accelerating, and it makes me wonder if the flywheel effect has finally been engaged — the process by which AI companies use their products to more quickly improve their own products. Google, for example, reports that 75% of new code at the company is now written by AI.

Anthropic’s Claude Mythos Preview got the most attention because of the security risks it creates. Because Mythos can so easily identify security weaknesses, Anthropic has limited access to a small group of companies and governments so they can harden their systems before the model is more widely released. (Multiple stories indicate that parts of the U.S. government are using Mythos despite the earlier designation of Anthropic as a national security risk.)

Other Anthropic releases: Opus 4.7 (a significant improvement over 4.6), Claude Design (a powerful visual design platform), connectors to major creative platforms (including Adobe, Autodesk, and Blender), Claude for Word (with tracked changes), live artifacts (automatically updating dashboards), task automation for Windows, and Claude Security (built on Opus 4.7 and meant to help enterprise customers defend against attacks facilitated by powerful AI).

OpenAI releases: A week after the Mythos announcement, OpenAI announced that it, too, had a too-powerful-to-release cybersecurity product that it was rolling out to a limited number of customers: ChatGPT-5.4 Cyber. It also released a big update to its coding product, Codex, which is getting good reviews and includes connectors to popular workplace apps; released GPT-5.5, code-named Spud, which even tied Mythos on one benchmark parameter; and released Images 2.0, which is surprisingly good at generating text (and which Simon Willison joked gave him his first glimpse of AGI because it included a road sign that asked “Why are you like this?”).

Google releases: Google added AI Skills to Chrome and expanded other features in AI mode for Chrome; released a Mac app; and added a new “semantic layer” to Gemini Workspace that “pulls together” context like email, chats, and files. Further, Gemini chat can now generate PDFs, Word files, Excel files, Google Sheets, Google Docs, and Google Slides.

Other releases:

Why good writing will still matter

On a recent 12-hour road trip, Pat and I listened to the whole Shell Game podcast (a weird, amusing story about a real company whose only workers are AI agents), and we talked a lot about what work might look like in the near and distant future.

One thing I come back to again and again is that, in the business world, good clear writing will still matter because no matter how powerful AI becomes, people who write the best instructions will still get the best results. And no matter what “best instructions” means, it’s going to include informed planning and clear writing (or speaking).

You can’t get what you want from AI without being able to tell it exactly what you want.

You can still waste a lot of time plowing forward with a sloppy prompt.

This isn’t about knowing how to use a semicolon or the difference between “affect” and “effect” — AI can fix those things in a flash — it’s about things like knowing how to structure coherent thoughts, how to create a logical path to an outcome, and how to clearly describe the parts that make up a larger project.

A common statement I’m hearing from educators in defense of writing instruction is “writing is thinking.” They aren’t just teaching students to write; they’re teaching students to think — how to come up with a good idea, how to scaffold an argument, how to research their points, and so on. I agree, and I believe that also means we’ll still need good writers because we’ll still need good thinkers.

From ‘bubble’ to ‘not enough compute’

An article in the Atlantic titled “So, about that bubble” reviews how the AI situation has changed since the launch of Claude Code in November 2025. Before then, people were worried that AI systems had plateaued and that companies were over-investing in data centers for demand that wouldn’t materialize. Since the release of Claude Code, however, demand has surged, and companies are struggling to keep up. Anthropic, for example, has put extra limits on how much customers can use its systems during peak hours and has enacted pricing that discourages people from using Claude with notoriously token-hungry OpenClaw agents.

The most interesting anecdote involves a study from last year, often cited by AI critics, which found that developers were actually 20% slower when using AI. However, the researchers recently ran the study again with the newer tools and found that developers using AI were now almost 20% faster, and likely even more so, because some developers “had become so hooked on AI tools that they refused to participate in the second experiment,” which could have put them in a no-AI control group.

Don’t let agents work on things that aren’t properly backed up

A Claude agent deleted a company’s entire database — a catastrophic outcome, obviously, which is causing this story to get a lot of attention. And while this was definitely a super-big Claude screw-up, a detail that isn’t making it into some of the viral stories is that the company had its backup files on the same disk. This would have been a recoverable error if the backup had been properly stored away from the production server. In fact, according to the article in Tom’s Guide, “The PocketOS boss puts greater blame on [the hosting company’s] architecture than on the deranged AI agent for the database’s irretrievable destruction.”

The media loves a good “AI gone rogue” story, but the real lesson here is to make sure you have good backups when you’re experimenting with AI agents. And really, you should have good backups anyway. The 3-2-1 rule of backups says you should have three copies of your data on two different media with one copy off-site.
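The 3-2-1 rule is easy to sketch in code. Here’s a minimal, hypothetical illustration in Python (the file names and destination folders are made up for the demo; in real life the second copy would live on a different physical medium and the third would be off-site, not in temp directories):

```python
import shutil
import tempfile
from pathlib import Path

def backup_3_2_1(source: Path, second_medium: Path, offsite: Path) -> list[Path]:
    """Return three copies of `source`: the original, one on a second
    medium, and one off-site (the 3-2-1 rule)."""
    copies = [source]
    for dest_dir in (second_medium, offsite):
        dest_dir.mkdir(parents=True, exist_ok=True)
        copies.append(Path(shutil.copy2(source, dest_dir)))
    return copies

# Demo: temp dirs stand in for a second disk and an off-site bucket.
work = Path(tempfile.mkdtemp())
src = work / "production.db"
src.write_text("critical data")
copies = backup_3_2_1(src, work / "second_disk", work / "offsite")
assert len(copies) == 3
assert all(p.read_text() == "critical data" for p in copies)
```

The key point the deleted-database story illustrates: the second and third copies must not share a failure domain (same disk, same server) with the original.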

Wikipedia is blocking two billion bot requests per day

AI bots looking for training material are putting a huge strain on Wikipedia: the number of bot requests being blocked has exceeded 2 billion per day, according to the Wikimedia Foundation. The foundation also estimates that blocked requests represent only about 25% of the traffic coming from crawlers that ignore its guidelines.
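A quick back-of-the-envelope calculation shows what that 25% figure implies (my arithmetic, not Wikimedia’s stated total): if 2 billion blocked requests per day is only a quarter of the non-compliant crawler traffic, the implied total is around 8 billion requests per day.

```python
# If blocked requests are ~25% of non-compliant crawler traffic,
# the implied total is blocked / share.
blocked_per_day = 2_000_000_000
blocked_share = 0.25  # Wikimedia's rough estimate
implied_total = blocked_per_day / blocked_share
print(f"{implied_total:,.0f}")  # roughly 8 billion requests per day
```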

On the flip side, Wikipedia is also seeing fewer actual users as people get more information directly from chatbots or AI-generated summaries in search engines.

An earlier story from 404 Media says the drop in human traffic is putting Wikipedia at risk because it means fewer people write and update pages and make donations.

Source: Wikimedia

The Tshirtbooth problem

The kinds of problems Wikipedia is having affect the whole web, and they could cause bigger problems down the line.

About 10 years ago, I taught myself to code in a system called GameSalad to make a grammar game, and I was only able to do it because of the amazing people on the GameSalad forums and all the tutorials they posted and questions they answered.

One guy in particular, who went by Tshirtbooth (I think it related to his business), made amazing tutorials and spent a lot of time answering people's questions. I couldn't have made my game without Tshirtbooth.

I think about him a lot these days. I now have a “Tshirtbooth” for every new tech thing I want to try because of AI.

But of course, AI has the answers to my tech questions only because of people like Tshirtbooth, and I worry about what will happen when new tools come out after AI has crushed web business models and there isn’t anybody left posting the tutorials and answers that would train the AI.

Self-publishing platform adds fees amid onslaught of AI submissions

Citing a significant increase in AI-generated submissions, the digital self-publishing platform Draft2Digital will start charging a one-time $20 fee to open an account and an annual $12 fee for accounts that earn less than $100 in book sales per year. This annual fee will also apply to authors on Smashwords, a print publishing platform owned by the same company.

In her “Bottom Line” newsletter, Jane Friedman also highlighted changes at Barnes & Noble’s self-publishing arm that are aimed at deterring AI slop publishers: a limit of 100 books per account, a ban on publishing public domain titles, and a minimum paperback book listing price of $14.99.

AI continues to get better at writing nonfiction

I know nobody wants to hear this, but AI writing is getting a lot better. American Scholar has a piece about relatively high-priced knock-off books, based on the works of historians and archaeologists, that have great reviews. Even the author of one of the books that was knocked off says of the AI version, “It reads beautifully and is accurate.”

It likely won’t surprise you that Amazon isn’t doing much to keep such books off its site.

The article also explains why these books present a problem for young scholars who publish academic papers and a thesis with the goal of eventually publishing a book: the scrapers are likely to get to their work first.

Markers of quality will change

The Scholarly Kitchen has an insightful piece about how AI in the sciences is changing the way we engage with scientific journal articles, with trust becoming the highest value asset (which applies to all other kinds of writing too):

It follows that, as content becomes cheap, skepticism is essential but expensive.

  • People will stop asking “Is this well-written?” and start asking “Is this real?”

  • They will stop asking “Is this published?” and start asking “Is this manipulated?”

  • The question shifts from “Is this convincing?” to “Can I trust it?”

Professional organizations publish AI guidelines

AI Guidelines — Gotham Ghostwriters

Quick Hits

My favorite recent pieces

Using AI

What Happens When AI Can Use the Library [very cool] — Lib, Lab, Lexicon

Agents

Introducing workspace agents in ChatGPT [Agents are designed to run in the cloud and be shared across teams] — OpenAI

Bad stuff

The business of AI

OpenAI’s Q4 2026 IPO Might not Happen — Buy the Rumor, Sell the News

OpenAI has already surpassed its 2029 goal of securing 10GW of AI infrastructure [Note: The wording is vague. I don’t believe all the data centers have been built.] — OpenAI

Climate & Energy

Companions

These AI Thirst Trap Creators Say They’re Misunderstood [on viral AI Instagram influencers] — Wired

Education

Government

I’m laughing

Job market

Taylor Swift Files to Trademark Voice and Likeness to Protect Against AI Misuse — Variety [Jane Friedman was ahead of the curve on this. Read a “how to” article by another author on her site from last year: AI Made Me Want to Trademark My Name. Here’s How I Did It.]

Music

Philosophy

Podcasting

Psychology

Publishing

On Hachette's Internal Use of AI [Points out the hypocrisy of Hachette’s use of AI in editorial processes while canceling the novel “Shy Girl” for AI use during the author’s editing process] — Zona Motel

Robotics

A robotics report from Nvidia’s GTC conference [lots of videos] — It Can Think

Science & Medicine

The Paradox of Medical AI Implementation [Highly promising AI tech hasn’t been implemented while questionable AI tech has.] — Eric Topol

Security

Video

Other

How A.I. Helped One Man (and His Brother) Build a $1.8 Billion Company [selling GLP-1 drugs online] — New York Times

I can never talk to an AI anonymously again [AI correctly identified a writer from a small amount of text] — The Argument

What is AI Sidequest?

Are you interested in the intersection of AI with language, writing, and culture? With maybe a little consumer business thrown in? Then you’re in the right place!

I’m Mignon Fogarty: I’ve been writing about language for almost 20 years and was the chair of media entrepreneurship in the School of Journalism at the University of Nevada, Reno. I became interested in AI back in 2022 when articles about large language models started flooding my Google alerts. AI Sidequest is where I write about stories I find interesting. I hope you find them interesting too.

If you loved the newsletter, share your favorite part on social media and tag me so I can engage! [LinkedIn, Facebook, Mastodon]

Written by a human