New AI features, centrist nudges, and the 'blandification' of writing
Video-to-spreadsheet tricks, the grow-vs-cut debate, and why no one is immune to AI influence
Issue 105
On today’s quest:
— So. Many. New. Features.
— Tip: Use video to get data
— LLMs nudge people to the center
— Using AI doesn’t just change how people write; it changes the answers they give
— Autocomplete can influence your opinions
— Three types of AI delusion
— The grow versus cut argument
— How good are AI image and video detectors?
— An Anthropic security researcher is freaking out about the future
So. Many. New. Features.
New features big and small are coming out every day now. It can be hard to keep up, but these are the features I found especially important or useful this week. The first two make Claude more like the buzzy OpenClaw agent:
→ Claude Dispatch lets you interact with Claude Cowork and Claude Code from your phone. You can start a chat at your desk and keep it going while you’re out and about.
→ Claude Computer Use lets Claude Cowork and Claude Code operate your computer (Pro and Max users only, Mac only for now). It can do anything you can do sitting in front of your computer, which makes it both powerful and risky. I have it set up on an old computer that has its own identity and is completely isolated from everything except what I want it to work on.
Anthropic has launched a bunch of other new features too, many of them for Claude Code, and has become so popular that it's having trouble keeping up with demand. The company is currently throttling use from 5 a.m. to 11 a.m. Pacific time, so you'll hit limits faster during those hours.
→ Canva Magic Layers takes images and separates the elements into editable layers. This new feature let me quickly change the text on an old image for which I no longer had the original file.
Tip: Use video to get data
Problem #1: The Fitbit app is shutting down, and I wanted to export the 12 years of weight data I had stored in it.
The solution: I recorded the screen as I scrolled through the data on my phone, uploaded the video to Gemini, and asked it to turn the numbers into a Google Sheet.
Because I haven't given Gemini access to my Google Drive, it couldn't create the Sheet itself; instead, it gave me the data as text, which I pasted into a Sheet.
The whole project, which I'd been putting off for months, took about five minutes.
Problem #2: Wild with power after the Fitbit project, I decided to catalog the contents of my chest freezer.
The solution: I pulled everything out, quickly took a video of all the labels, and had Gemini make a spreadsheet. Now I can easily check whether I have frozen mango pieces hiding in there somewhere. (I do not.)
Note: ChatGPT failed at this task. In my experience, Gemini is much better at handling audio and video. I haven’t tried it with Claude.
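If you'd rather script this kind of extraction than use the Gemini app (which is all I did), something like the sketch below should work with Google's google-generativeai Python SDK. The model name, file names, and prompt here are placeholder assumptions, not anything from my project:

```python
# Minimal sketch: turn a screen recording into CSV with the Gemini API.
# Assumes the google-generativeai SDK; the model name and file paths are
# placeholders, not from the project described above.
import time

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the video; the File API processes video asynchronously.
video = genai.upload_file(path="weight_scroll.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([
    video,
    "List every date and weight visible in this screen recording as CSV "
    "with the columns date,weight. Output only the CSV.",
])

# Paste response.text into a spreadsheet, or save it directly:
with open("weights.csv", "w") as f:
    f.write(response.text)
```

The same pattern would work for the freezer video; only the prompt changes.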
LLMs nudge people to the center
While social media is polarising, evidence suggests AI may nudge people towards the centre. www.ft.com/content/3880...
— Stefan Schubert (@stefanschubert.bsky.social), March 28, 2026, on Bluesky
Social media rewards attention, so it drives people to the extremes. Platforms make money from your attention, so they promote whatever grabs it: rage bait and weird, outlandish claims. Plus, they have mostly been treated as neutral platforms, facing no penalty for the misinformation their users post.
A new study described in the Financial Times, however, finds that LLMs behave differently, nudging people toward more centrist views. The likely reason: AI companies want paying business customers who come to them for utility, so they try to deliver accurate information.
The small study found that Grok nudges users toward the center-right and all the other major LLMs nudge users toward the center-left; in both cases, the nudge is closer to the center than the interactions that dominate social media.
It will be interesting to see if this trend holds for AI platforms that end up relying on advertising for income, which would switch the incentives back to attention.
Using AI doesn’t just change how people write; it changes the answers they give
This seems related to the story above: A new study found that LLMs don’t just change how people write; they change what people write:
“People who heavily relied on LLMs produced essays that answered the happiness question with a neutral response 69% more often than participants who did not use AI or only used AI for light edits. The study participants who used AI less often or avoided AI entirely submitted essays that were much more passionate, either positively or negatively, about the link between money and happiness.”
In the study, the heavy AI users also produced writing that was more formal and less personal. The researchers described the changes as the “blandification” of writing.
Autocomplete can influence your opinions
AI can influence opinions even at a small scale: through autocomplete.
In a recent study, people filling out a survey were shown biased autocomplete suggestions, such as answers favoring or opposing the death penalty. They didn't realize they were being swayed, but their answers shifted toward the suggestions even when they didn't accept them.
One of the researchers, Mor Naaman of Cornell, told Scientific American, “We told people before, and after, to be careful, that the AI is going to be (or was) biased, and nothing helped. Their attitudes about the issues still shifted.”
As a side note, this is why it's dangerous to expose yourself to any kind of propaganda: no one is immune.
Three types of AI delusion
At the extreme end, LLMs can cause people to have delusions.
In an article in The Guardian, the founder of a support group for people whose lives have been derailed by AI psychosis cataloged the three main delusions they see:
— The users have "created the first conscious AI."
— The users have "stumbled upon a major breakthrough in their field of work or interest and are going to make millions."
— The users believe "they are speaking directly to God."
The grow versus cut argument
In a recent newsletter, Nate B. Jones encouraged companies to resist layoffs when they see how much of their existing work AI can do:
“When execution cost drops by an order of magnitude, the correct response isn’t to do the same work with fewer people. It’s to do dramatically more work, to pursue every opportunity that was previously too expensive, too niche, too speculative to attempt. The companies that get this will be unrecognizable in three years. The companies that don’t will have beautiful margins and no growth, and their competitors will eat them alive.”
FWIW, this is what I see in my own work. With AI taking over some of the “drudgiest” administrative parts of my job, I’m doing work I didn’t have the time or resources to do before.
(Nate’s quote is from a paid post that seems inaccessible now without a subscription, but free subscribers get partial posts by email.)
How good are AI image and video detectors?
The New York Times ran more than 1,000 tests across 12 tools designed to detect whether images, video, and audio are AI-generated. Most of the tools could spot "quickie" AI images and AI-generated audio, but they struggled to accurately identify AI fingerprints in more complex images, images without people, and video.
An Anthropic security researcher is freaking out about the future
At a security conference last week, Nicholas Carlini from Anthropic showed Claude easily identifying critical bugs in established software, bugs nobody had found before, and then developing exploits to take advantage of them. Before, only a handful of people in the world could do this kind of work; now, almost anyone with the best currently available LLMs can do it.
Like all the other huge advances I've been telling you about since November, this is something that couldn't have been done just a few months ago. In the 20-minute video, Carlini seems extremely worried and essentially begs his audience of security professionals for immediate help.
Quick Hits
My favorite pieces this week
Seeking a Sounding Board? Beware the Eager-to-Please Chatbot. [Extremely disturbing and important findings.] — New York Times
Inside the Anthropic Mythos leak and the growing distance between the people building AI and the rest of us — Hybrid Horizons
Why are people adopting AI to write? — TechnoLlama
Google Has a Secret Reference Desk. Here's How to Use It. [Not about AI, but useful] — Card Catalog for Life
Using AI
How to use Claude Cowork Projects — Anthropic
How NotebookLM's New 'Cinematic Video' Tool Works — Lifehacker
Agent Mode in PowerPoint — The Signal
Case Study: Autonomous Content Creation for Katieroberts.com — Trust Insights
Agents
I Mapped Where Every AI Agent Actually Sits. Most People Pick Wrong. — Nate B. Jones, YouTube
To Scale AI Agents Successfully, Think of Them Like Team Members — Harvard Business Review
Bad stuff
How companies are using AI to pay workers as little as possible — Popular Information
The People Getting Falsely Accused of Using AI to Write [False accusations seem to especially affect people who are neurodivergent or who speak English as a second language.] — New York magazine
Rogue AI is already here — Fortune
The business of AI
OpenAI “indefinitely” shelves plans for erotic ChatGPT [The idea turned off some employees and investors, and dropping it is likely part of a broader move to prioritize coders and business users, a move that also included shuttering the Sora video app.] — Ars Technica and CNBC
Walmart: ChatGPT checkout converted 3x worse than website — Search Engine Land
We Are Not in a Bubble — Stratechery
Climate & energy
Data centers' heat exhaust is not raising the land temperature around where they're built — Andy Masley
Google's new TurboQuant algorithm speeds up AI memory 8x, cutting costs by 50% or more [The Superhuman newsletter says, “If scalable, the tech breakthrough could drastically reduce the cost and compute needs of running LLMs.”] — Venture Beat
Education
The AI cheating panic is loud. The way students actually use ChatGPT is much quieter. — ChatGPT for Education
College students are writing with AI – but a pilot study finds they’re not simply letting it write for them — The Conversation
Philosophy
I Saw Something New in San Francisco [On how people are interacting with AI] — New York Times
Publishing
AI and Publishing: FAQ for Writers — Jane Friedman
An AI Upheaval Is Coming for Media. This Journalist Is Already All In. — Wall Street Journal
Robotics
A robot peeling an apple — Bluesky
Science & medicine
AI software for smart glasses wins £1m prize for technology to help people with dementia — The Guardian
Other
Google unleashes Gemini AI agents on the dark web — The Register
How AI-generated content performs in Google Search: A 16-month experiment — Search Engine Land
AI assistants now equal 56% of global search engine volume: Study — Search Engine Land
Wikipedia Bans AI-Generated Content [The article does not say how AI-generated content will be detected.] — 404 Media
What the ‘Shy Girl’ Mess Says About the Future of Fiction — New York Times
Will AI be the end of 'proper' translations? — Barbara Serra
Linux kernel czar says AI bug reports aren't slop anymore — The Register
Endgame for the Open Web — Anil Dash
The Autonomous Battlefield — Foreign Affairs
What is AI Sidequest?
Are you interested in the intersection of AI with language, writing, and culture? With maybe a little consumer business thrown in? Then you’re in the right place!
I’m Mignon Fogarty: I’ve been writing about language for almost 20 years and was the chair of media entrepreneurship in the School of Journalism at the University of Nevada, Reno. I became interested in AI back in 2022 when articles about large language models started flooding my Google alerts. AI Sidequest is where I write about stories I find interesting. I hope you find them interesting too.
If you loved the newsletter, share your favorite part on social media and tag me so I can engage! [LinkedIn — Facebook — Mastodon]
Written by a human