AI Sidequest: How-To Tips and News
Branch your way out of AI hallucinations (and keep your work)
Plus, the seahorse emoji breaks some AIs
Issue 83
On today’s quest:
— Tip: compare documents
— New feature: branched chats
— Anthropic copyright settlement: first details
— Translators are some of the hardest hit by AI
— Weird AI: the seahorse emoji
— Your questions: why do people comment “AI”?
Tip: compare documents
I often work in Google Docs, and the Compare Documents feature there has never worked well for me because it gets bogged down in small formatting and spacing changes that swamp out the changes I actually want to see.
This morning, I had two versions of a document and had lost track of which file was the most recent. I knew I had made small edits to the newer one, so I compared them in ChatGPT. It quickly listed all the differences, which immediately made it clear which version I had edited last.
The prompt was simple:
Compare these two documents and tell me the differences. Ignore font and typeface differences.
If you want a comparison document with tracked changes, though, ChatGPT doesn’t seem to be able to produce one. The best I could get was a document that simulated tracked changes by highlighting added text in green and deleted text in red.
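If you’d rather not paste your documents into a chatbot (or you want a deterministic, repeatable comparison), you can diff plain-text versions locally. Here’s a minimal sketch using Python’s standard `difflib` module; the version labels and sample text are just placeholders:

```python
import difflib


def compare_docs(old_text: str, new_text: str) -> str:
    """Return a unified diff of two document versions.

    Lines starting with '-' were removed; lines starting with '+' were added.
    """
    diff = difflib.unified_diff(
        old_text.splitlines(),
        new_text.splitlines(),
        fromfile="version_a",
        tofile="version_b",
        lineterm="",
    )
    return "\n".join(diff)


a = "The quick brown fox.\nJumps over the dog."
b = "The quick brown fox.\nJumps over the lazy dog."
print(compare_docs(a, b))
```

Because this works on plain text, it sidesteps the font-and-spacing noise entirely, though it won’t give you Word-style tracked changes either.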
New feature: branched chats
ChatGPT just launched a feature that lets you branch a chat. This is useful if you want to test two different ways of approaching a problem, or if you have a question that goes off on a tangent but you also want to continue the main chat later. Starting a new branched chat keeps the branched topic from muddying up the context of the original chat.
Branching can also help you save a chat that has generated a hallucination. Once you get bad info, it’s nearly impossible to get rid of it — it’s going to stay in the chat context and can taint everything going forward — so you always want to start a new chat. But that can be painful if you’ve already had a lot of exchanges. By branching at a point before the hallucination, you can keep all your past work.
I’ve also seen people theorize that branching could help users realize that chatbots aren’t all-seeing oracles or sources of absolute truth since you can use this feature to see that you can get different answers to the same question.
Access branching by hovering over the chat, clicking the three-dot menu, and selecting “Branch in a new chat.”

You see troubleshooting text in the background of my screenshot above because branching isn’t working in my main Chrome window for some reason I haven’t figured out yet. But it works in Safari and in a Chrome Incognito window, so this seems like a “me” problem and not a feature problem.
Anthropic copyright settlement: first details
We now know that the settlement against Anthropic for using pirated copyrighted books to train its AI models is $1.5 billion and covers about 500,000 books — equating to $3,000 per book before lawyers’ fees are deducted.
I’ve seen authors say it doesn’t seem like enough, but it’s actually much higher than I was expecting. I haven’t been involved in many class action lawsuits, but on the occasions I’ve been offered a settlement, it was always a token amount that didn’t even seem worth the trouble of responding. I’ll definitely file the paperwork for $3,000 per book!
The award is not for training models; it is for having pirated books. Anthropic says the pirated books were not used to train any of its public models, and the settlement doesn’t give Anthropic the right to use the material from the pirated books going forward. Some other court rulings (but not all) have found that training on copyrighted books is fair use if the books were legally acquired.
Are your books included?
Search the big LibGen database at The Atlantic’s site. But don’t assume your book isn’t eligible if it isn’t there; LibGen isn’t the only database involved, and in the future, the settlement administrator will create a more complete database.
Search the copyright database at the U.S. Copyright Office website. Formal copyright registration is a requirement.
Sign up to get information: The law firm handling the case has a website where you can sign up to get information in the future.
How long will it take to get paid? It could go quickly, or it could take years. An article at Words & Money outlines the possible delays and barriers.
More lawsuits: On Friday, authors filed a new class-action suit against Apple for using copyrighted books in AI training.
Future implications: “We expect that the settlement will lead to more licensing that gives authors both compensation and control over the use of their work by AI companies,” said Authors Guild CEO Mary Rasenberger.
Translators are some of the hardest hit by AI
A recent article by Blood in the Machine reiterates what Heddwen Newton told me in a recent interview: good work is largely drying up for translators because of AI.
Some of the translators profiled in the article believe the huge losses come not so much from improvements in translation technology as from a new permission structure: media attention around AI has made it more acceptable for companies to reduce rates or eliminate jobs.
Weird AI: the seahorse emoji
If you ask ChatGPT-5 about the nonexistent seahorse emoji, it can’t find the answer and sometimes seems to spiral in frustration. I’ve seen examples where it starts swearing.
Claude 4.1 also stumbles a bit — it showed me two different horse emoji — but then correctly said there is no seahorse emoji. Gemini 2.5 Flash, on the other hand, knows from the get-go that the seahorse emoji does not exist. +1 for Gemini.
Unfortunately, I haven’t seen anyone credibly explain why the seahorse emoji flummoxes some LLMs.
Your questions: why do people comment “AI”?
A reader named Cheri asked a question about why people feel the need to comment “AI” on stories and images on social media.
I have noticed this too. These days, when I see something especially amazing on Instagram, I often find myself wondering whether it’s real, and I go to the comments to see what people are saying. It makes me sad that this is now my reaction to seeing wonderful things. And also, why would I trust internet comments?! But when multiple people identify what the real thing is, I generally trust them and feel happy. Yay, it was a real amazing thing that maybe I could see someday!
As for why people do it, it’s possible some people comment “AI” in the hope of educating people that what they are seeing isn’t real, but I believe most people are commenting “AI” to be derisive. It’s a way of dismissively saying they don’t like AI images.
That would fit with an experience Cheri recounted when she saw someone comment “AI” on a story about Þrídrangar Lighthouse in the Vestmannaeyjar Islands, Iceland, which she says “admittedly looks like it couldn't exist, but does.” She shared with them that it is real, and the “AI” poster said he was “sticking to his theory” — which is the comment of someone who is more interested in sharing a vibe than knowing the truth.
Quick Hits
Using AI
A custom GPT by Dartmouth professor Brendan Nyhan that’s meant to mimic his feedback on political science, communication, and information science manuscripts — Bluesky/ChatGPT
LLMs vs. geolocation: GPT‑5 performs worse than other AI models — Bellingcat
A Librarian’s Playbook for Agentic AI — New York Law Institute (NYLI)
How GenAI Tools are Helping Journalists Monitor Public Meetings (This looks very useful.) — Generative AI Newsroom
Philosophy
Ethan Mollick posits that we’re entering an age of mass intelligence. The free availability of reasoning models and improving energy efficiency are making powerful AI available to many more people than ever before. — One Useful Thing
If AI lifts off, will living standards follow? — Financial Times (paywall)
Climate
Jevons paradox is sometimes good: it’s widely misunderstood and might be good news for AI and the climate — The Weird Turn Pro
Do data centers only seem bad for the climate because we can see them? Most of your climate impacts are invisible — The Weird Turn Pro
Image AI
50 impressive examples of how to use Nano Banana (video). (I’m usually not that interested in image AI, but I ended up watching the entire 25-minute video.) — Matt Wolfe
Bad stuff
Agentic browser security: indirect prompt injection in Perplexity Comet — Brave
Agentic browser security — Simon Willison’s Blog
Discussion about agentic browser security on Hacker News — Hacker News
OpenAI looks to online advertising deal – AI‑driven ads will be hard for consumers to spot. (I had this experience trying to figure out what was an ad and what wasn’t after Perplexity launched ads months ago.) — The Conversation
Education
Bluebooks: The only real solution to the AI cheating crisis — The New York Times (paywall)
I got an AI to impersonate me and teach me my own course — here’s what I learned about the future of education — The Conversation
The AI Pedagogy Project: an extensive collection of education resources — metaLAB (at) Harvard
Harvard’s Generative AI Policy Is Inequitable — The Harvard Crimson
Perplexity to provide Comet AI browser free to all students and some PayPal and Venmo customers. (See the item under “Bad stuff” above for cautions about security risks from Comet.)
Cringe
Therapists are secretly using ChatGPT. Clients are triggered. (Lots of cringe-worthy anecdotes here. Wow.) — MIT Technology Review
Other
This website lets you blind test GPT‑5 vs GPT‑4o, and the results may surprise you — VentureBeat
Nearly 90% of videogame developers use AI agents, Google study shows — Reuters
Meta locks down AI chatbots for teen users — Mashable
Writing is the worst use of LLMs — The Interference
LLM traffic converts about the same as organic search, challenging claims of higher-quality clicks — Search Engine Land
What is AI Sidequest?
Are you interested in the intersection of AI with language, writing, and culture? With maybe a little consumer business thrown in? Then you’re in the right place!
I’m Mignon Fogarty: I’ve been writing about language for almost 20 years and was the chair of media entrepreneurship in the School of Journalism at the University of Nevada, Reno. I became interested in AI back in 2022 when articles about large language models started flooding my Google alerts. AI Sidequest is where I write about stories I find interesting. I hope you find them interesting too.
If you loved the newsletter, share your favorite part on social media and tag me so I can engage! [LinkedIn — Facebook — Mastodon]
Written by a human (except the examples below, obviously)