Which model is best for writing, research, and images?
A quick guide to pairing the right AI with the right job — plus why I still trust ChatGPT most for catching quiet mistakes.
Issue 98
On today’s quest:
— Try different models
— Take your AI fact checking to the next level
— Google is changing what you see in search results and news headlines
— X accounts revealed to be foreign influence operations
— Americans are especially wary of AI
— Businesses are using AI more than last year
— Word watch: LLeMmings
— Weird AI: Poetry thwarts guardrails
— Weird AI: Elara Voss
— Bonus word watch! Promptonyms
Try different models
I find myself jumping around a lot between different models these days, so I’ve been nodding with recognition as I see people discussing the need to find the right model for each project. For example, I’ve found ChatGPT 5.1 with Thinking to be the best for fact-checking, but I like Gemini 3 Pro for walking me through technical tasks.
Two other sources this week also named their favorite tools for different tasks:
| Task | First source | Second source |
| --- | --- | --- |
| Writing | Gemini 3 Pro, Grok 4.1 Thinking, and Claude Sonnet 4.5 | Claude Opus 4.5 |
| Research | Claude Opus 4.5, Gemini 3 | |
| Fact Checking | ChatGPT (more thorough), Gemini 3 (a lot faster) | |
| Image Generation | Nano Banana Pro | |
| Text-to-Video Generation | Runway Gen-4.5 | |
| Coding | Claude Opus 4.5 | |
| Conversation | Claude Opus 4.5 | |
Take your AI fact checking to the next level
Fact checking is one of my top uses for AI. I won’t publish a piece anymore without running it through ChatGPT, and it finds some small error almost every time — a misinterpretation of a date (date “published” versus date “written,” for example), a new source that contradicts one a writer used and merits investigation, and so on. Nothing egregious, but things I am grateful to be able to fix. I do this with a dead-simple prompt: “Fact check this article. Show your sources.”
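If you want to bake that same check into a repeatable workflow, here’s a minimal sketch using the OpenAI Python library. It’s an illustration, not my actual setup: the model name ("gpt-4o") and the file name ("draft.md") are placeholders you’d swap for your own.

```python
# Minimal sketch: run the dead-simple fact-check prompt through the OpenAI API.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set in the
# environment, and "gpt-4o" stands in for whatever model you prefer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def fact_check(article_text: str) -> str:
    """Send the fact-check prompt along with the full article text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use your preferred model
        messages=[
            {
                "role": "user",
                "content": f"Fact check this article. Show your sources.\n\n{article_text}",
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # "draft.md" is a hypothetical file name for the piece you're checking.
    with open("draft.md", encoding="utf-8") as f:
        print(fact_check(f.read()))
```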
But Mike Caulfield takes it to the next level. If you want to do real, serious fact-checking with AI, this post is a must-read: “We're not taking the fact-checking powers of AI seriously enough. It's past time to start.” (He has many tips, but the easiest is just to use a paid account instead of a free account, and he has an example of how much better the results are.)
Google is changing what you see in search results and news headlines
This is WILD to me. The 10 blue links you get in a classic Google search can now be written by AI. Site owners create the titles and descriptions you usually see — and often pay for search engine optimization tools or consulting help — but now Google is rolling out a version of results in which it uses AI to rewrite those titles and descriptions (presumably because it thinks it can do it better … and heck, it probably can, but still).
Apparently, Google started testing this system back in July, but only now is rolling it out widely. (Christopher Penn post | Reddit)
X accounts revealed to be foreign influence operations
X recently launched a feature showing where accounts are located, which revealed that many prominent right-wing “U.S.” political accounts are actually based in foreign countries, including Russia, Nigeria, India, and Thailand. Given that the Grok chatbot trains on X data, this is another reason to be especially wary of information from this particular chatbot.
But it’s also a bigger story about how easy it is to get malicious data into any chatbot. I’ve highlighted stories before about Russian disinformation operations publishing AI slop on the web that seems intended to infiltrate AI training and about how it takes surprisingly few pages with bad information to change AI results. The latest X drama is a good reminder that you shouldn’t rely exclusively on any chatbot for information. (The Atlantic)
Americans are especially wary of AI
A post from Andrew Ng at The Batch highlighted how differently people in the U.S. and parts of Europe view AI compared with people in the rest of the world, and worried that this puts us at a disadvantage. For example, “According to Edelman’s survey, in the U.S., 49% of people reject the growing use of AI, and 17% embrace it. In China, the sentiment is reversed: 10% reject it and 54% embrace it.” He implored AI insiders to stop scaring people with doomerism.
Businesses are using AI more than last year
In a new report from Bain, 74% of businesses say AI is a top-three strategic priority (vs. 60% a year earlier), and 44% see a high or very high risk of disruption from AI — this fear is particularly high among tech company executives.
An interesting side note is that although 80% of executives say their generative AI projects met or exceeded expectations, among those who weren’t happy, 33% said AI worked in the pilot stage but didn’t scale. I wonder why, and it makes me think of scattered reports I’ve seen of employees sabotaging AI projects.
Word watch: LLeMmings
The Atlantic used “LLeMmings” to describe people who become overly reliant on LLMs, especially for making decisions. In the article, the term is credited to “a colleague” of the writer, Lila Shroff.
Weird AI: Poetry thwarts guardrails
Researchers found that when they wrote malicious requests in the form of a poem, major LLMs abandoned their guardrails and fulfilled the requests. Models from Google, OpenAI, Anthropic, DeepSeek, Moonshot, and Meta all went weak-kneed and helped with cyberattacks, biological weapons development, and psychological manipulation on average 43% of the time when asked in poetry, compared to 8% of the time when asked in prose.
Smaller models were more resistant to the poetry attack than bigger models, a surprise the researchers speculated may be because smaller models had a harder time figuring out what the prompt was asking amid all the metaphors.
Researchers think poetry may be unusually effective because models have only been trained on prose requests, so they have a harder time recognizing malicious patterns cloaked in verse.
Weird AI: Elara Voss
A New York Times piece by Sam Kriss titled “Why Does A.I. Write Like … That?” includes this tidbit I hadn’t heard before:
If you ask any A.I. to write a science-fiction story for you, it has an uncanny habit of naming the protagonist Elara Voss. Male characters are, more often than not, called Kael. There are now hundreds of self-published books on Amazon featuring Elara Voss or Elena Voss; before 2023, there was not a single one.
A Google Books search does, indeed, return many recent sci-fi books with a character named “Elara Voss.” FWIW, I also searched Amazon for “Elara Voss” and found 75 books, many of which list “Elara Voss” (often “Dr. Elara Voss”) as the author, and only about half of these appear to be sci-fi.
I enjoyed the whole NYT article, which had creative examples like asking AI to “roast the color blue.”
Bonus word watch! Promptonyms
Some people call these names like Elara Voss that appear too often in AI output “promptonyms.” The term appears to have been coined by Max Read back in August.
Quick Hits
My favorite reads this week
Since I often include a lot of links, I’m trying a new thing where I highlight my favorites at the top.
Raising Humans in the Age of AI: A Practical Guide for Parents [an excellent, practical guide for parents that’s useful for everyone else too] — Nate B. Jones
Are large language models worth it? [on the various dangers of AI] — Nicholas Carlini
How AI reinforces delusions [a viewpoint I haven’t heard before] — Eliot Higgins of Bellingcat
How AI is transforming work at Anthropic [This focuses on coding work but still has lots of interesting job-related details.] — Anthropic
DeepSeek just dropped two insanely powerful AI models that rival GPT-5 and they're totally free [Implications for an AI bubble + interesting details about how the model actually works] — VentureBeat
Using AI
Anthropic has an interesting new report about how people are using AI [In general, creatives are finding it helpful, but worry about stigma and job loss.] — Anthropic
Tips for creating custom Gems [looks useful] — Google
NotebookLM: The Most Useful Free AI Tool of 2025 [an excellent guide] — Wondertools
Publishing
Will AI-Written Books Destroy Publishing? — Anne Trubek (via Jane Friedman’s Bottom Line newsletter)
Climate & Energy
Sustainability the second-fastest growing sector globally [word watch: “green-hushing”] — Semafor
Images
The Gemini app gets new image verification features — Google Blog
Legal
OpenAI Loses Discovery Battle, Cedes Ground to Authors in AI Lawsuits — Hollywood Reporter
The Times Sues Perplexity AI — The New York Times Company
Chicago Tribune sues Perplexity AI for copyright infringement — Chicago Tribune
Bad stuff
French authorities investigate alleged Holocaust denial posts on Elon Musk’s Grok AI — The Guardian
I’m laughing
Amazon’s AI fiction translations gone wrong. [In one example, the romance title “Rescued by a Rake” was translated to “Rescued by a Garden Tool.” h/t Catharine Cellier-Smart]
Science & Medicine
How good is ChatGPT’s health advice? We had a doctor grade its answers. [It seems like health questions are a good place to include “what else do you need to know to answer my question?” in your prompt.] — Washington Post
Model & Product updates
Education
How students are talking about AI — OpenAI (a document that came out of this project is 100 Chats for College Students)
Beyond Infographics: How to Use Nano Banana to Actually Support Learning — Dr. Phil’s Newsletter
Learning with AI falls short compared to old-fashioned web search — The Conversation
OpenAI launches ChatGPT for Teachers — OpenAI
Gemini can solve math problems by showing its work and can do so in what looks like your handwriting. — Paras Chopra on X
Other
OpenAI has trained its LLM to confess to bad behavior — MIT Technology Review
Books cannot be translated in a click! — European Council of Literary Translators’ Associations statement on Kindle Translate
Cook County becomes the first county in the US to establish permanent funding for guaranteed income [This is an AI story because some people think universal basic income will help ease the transition to an AI economy that has many fewer jobs.] — The Triibe
What is AI Sidequest?
Are you interested in the intersection of AI with language, writing, and culture? With maybe a little consumer business thrown in? Then you’re in the right place!
I’m Mignon Fogarty: I’ve been writing about language for almost 20 years and was the chair of media entrepreneurship in the School of Journalism at the University of Nevada, Reno. I became interested in AI back in 2022 when articles about large language models started flooding my Google alerts. AI Sidequest is where I write about stories I find interesting. I hope you find them interesting too. If you do, please share this newsletter with a friend.
If you loved the newsletter, share your favorite part on social media and tag me so I can engage! [LinkedIn — Facebook — Mastodon]
Written by a human