Branch your way out of AI hallucinations (and keep your work)

Plus, the seahorse emoji breaks some AIs

Issue 83

On today’s quest:

— Tip: compare documents
— New feature: branched chats
— Anthropic copyright settlement: first details
— Translators are some of the hardest hit by AI
— Weird AI: the seahorse emoji
— Your questions: why do people comment “AI”?

Tip: compare documents

I often work in Google Docs, and the Compare Documents feature there has never worked well for me because it gets bogged down in small formatting and spacing changes that swamp out the changes I actually want to see.

This morning, I had two versions of a document and had lost track of which was more recent, but I knew I had made small edits to the newer one, so I compared them in ChatGPT. It quickly wrote out all the differences, which immediately made it clear which version I had edited last.

The prompt was simple:

Compare these two documents and tell me the differences. Ignore font and typeface differences.

If you want a comparison document with tracked changes, though, ChatGPT doesn’t seem to be able to produce one. The best I could get was a document that simulated tracked changes by highlighting added text in green and deleted text in red.
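If you’d rather check locally and can export both versions as plain text, Python’s built-in difflib module will list the differences for you. Here’s a minimal sketch (the file names are hypothetical):

    import difflib

    # Read both exported versions as plain text (file names are hypothetical).
    with open("draft_v1.txt") as f1, open("draft_v2.txt") as f2:
        old_lines = f1.readlines()
        new_lines = f2.readlines()

    # unified_diff marks removed lines with "-" and added lines with "+",
    # which makes it easy to spot which version carries the extra edits.
    for line in difflib.unified_diff(
        old_lines, new_lines, fromfile="draft_v1.txt", tofile="draft_v2.txt"
    ):
        print(line, end="")

Because it compares raw text, font and typeface differences disappear entirely, which is exactly what Google Docs’ Compare Documents wouldn’t do for me.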

New feature: branched chats

ChatGPT just launched a feature that lets you branch a chat. It’s useful if you want to test two different ways of approaching a problem, or if a question goes off on a tangent but you also want to continue the main chat later. Starting a branched chat keeps the side topic from muddying up the context of the original chat.

Branching can also help you save a chat that has generated a hallucination. Once you get bad info, it’s nearly impossible to get rid of it — it’s going to stay in the chat context and can taint everything going forward — so you always want to start a new chat. But that can be painful if you’ve already had a lot of exchanges. By branching at a point before the hallucination, you can keep all your past work.
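To see why branching rescues a tainted chat, it helps to know that a chat is essentially a running list of messages that gets resent to the model on every turn. Here’s a conceptual sketch in Python (my simplification, not ChatGPT’s actual implementation):

    # Conceptual sketch only, not ChatGPT's actual implementation.
    # The whole message list is resent on every turn, which is why
    # a hallucination keeps tainting later answers.
    chat = [
        {"role": "user", "content": "Help me outline my article."},
        {"role": "assistant", "content": "Here's an outline..."},
        {"role": "user", "content": "What sources support section 2?"},
        {"role": "assistant", "content": "A 2019 study by..."},  # hallucinated source
    ]

    # Branching copies everything *before* the bad turn into a new chat,
    # so you keep your earlier work without the hallucination in context.
    branch = chat[:3]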

I’ve also seen people theorize that branching could help users realize that chatbots aren’t all-seeing oracles or sources of absolute truth, since branching lets you see that the same question can get different answers.

Access branching by hovering over the chat, clicking the three-dot menu, and selecting “Branch in a new chat.”

You see troubleshooting text in the background of my screenshot above because branching isn’t working in my main Chrome window for some reason I haven’t figured out yet. But it works in Safari and in a Chrome Incognito window, so this seems like a “me” problem and not a feature problem.

Anthropic copyright settlement: first details

We now know that the settlement in the authors’ lawsuit against Anthropic for using pirated copyrighted books to train its AI models is $1.5 billion and covers about 500,000 books — equating to $3,000 per book before lawyers’ fees are deducted.

I’ve seen authors say it doesn’t seem like enough, but it’s actually much higher than I was expecting. I haven’t been involved in many class action lawsuits, but on the occasions I’ve been offered a settlement, it was always a token amount that didn’t even seem worth the trouble of responding. I’ll definitely file the paperwork for $3,000 per book!

The award is not for training models; it is for having pirated the books. Anthropic says the pirated books were not used to train any of its public models, and the settlement doesn’t give Anthropic the right to use material from the pirated books going forward. Some other court rulings (but not all) have found that training on copyrighted books is fair use if the books were legally acquired.

Are your books included? 

How long will it take to get paid? It could go quickly, or it could take years. An article at Words & Money outlines the possible delays and barriers.

Future implications: “We expect that the settlement will lead to more licensing that gives authors both compensation and control over the use of their work by AI companies,” said Authors Guild CEO Mary Rasenberger.

Translators are some of the hardest hit by AI

A recent article by Blood in the Machine reiterates what Heddwen Newton told me in a recent interview: good work is largely drying up for translators because of AI.

Some of the translators profiled in the article believe the huge losses stem less from improvements in translation technology than from a new permission structure: media attention around AI has made it more acceptable for companies to reduce rates or eliminate jobs.

Weird AI: the seahorse emoji

If you ask ChatGPT-5 for the nonexistent seahorse emoji, it can’t produce an answer and sometimes seems to spiral in frustration. I’ve seen examples where it starts swearing.

Claude 4.1 also stumbled a bit — it showed me two different horse emoji — but then correctly said there is no seahorse emoji. Gemini 2.5 Flash, on the other hand, knew from the get-go that the seahorse emoji does not exist. +1 for Gemini.

Unfortunately, I haven’t seen anyone credibly explain why the seahorse emoji flummoxes some LLMs.

Your questions: why do people comment “AI”?

A reader named Cheri asked a question about why people feel the need to comment “AI” on stories and images on social media.

I have noticed this too. These days, when I see something especially amazing on Instagram, I often find myself wondering whether it’s real, and I go to the comments to see what people are saying. It makes me sad that this is now my reaction to seeing wonderful things. And also, why would I trust internet comments?! But when multiple people identify the real thing, I generally trust them and feel happy. Yay, it was a real amazing thing that maybe I could see someday!

As for why people do it, some may comment “AI” in the hope of educating others that what they’re seeing isn’t real, but I believe most are commenting “AI” to be derisive. It’s a dismissive way of saying they don’t like AI images.

That would fit with an experience Cheri recounted when she saw someone comment “AI” on a story about Þrídrangar Lighthouse in the Vestmannaeyjar Islands, Iceland, which she says “admittedly looks like it couldn't exist, but does.” She shared with them that it is real, and the “AI” poster said he was “sticking to his theory” — which is the comment of someone who is more interested in sharing a vibe than knowing the truth.

Quick Hits

Using AI

Use cases

Philosophy

Climate

Image AI

Bad stuff

Medicine

Model updates

Education

China

The business of AI

Cringe

Weird AI

Other

The most popular story from the last newsletter

What is AI Sidequest?

Are you interested in the intersection of AI with language, writing, and culture? With maybe a little consumer business thrown in? Then you’re in the right place!

I’m Mignon Fogarty: I’ve been writing about language for almost 20 years and was the chair of media entrepreneurship in the School of Journalism at the University of Nevada, Reno. I became interested in AI back in 2022 when articles about large language models started flooding my Google alerts. AI Sidequest is where I write about stories I find interesting. I hope you find them interesting too.

If you loved the newsletter, share your favorite part on social media and tag me so I can engage! [LinkedIn | Facebook | Mastodon]

Written by a human (except the examples below, obviously)