What the public thinks about AI for writing

Plus, an important update on the Anthropic copyright settlement

Issue 89

On today’s quest:

— EXCLUSIVE: What the public thinks about AI replacing writers and editors
— AI news for your ears
— Anthropic copyright settlement
— ChatGPT thinks East Germans have low body temperatures
— Controlling your cameo on Sora 2
— The way people are using AI has changed
— AI and holiday shopping
— Learn about AI

EXCLUSIVE: What the public thinks about AI replacing writers and editors

Researchers at Harvard Business School published a new preprint about how the public feels about replacing people with AI in more than 900 professions. So far, they have released data on only the top 10 and bottom 10 professions, but I emailed one of the researchers and got the scoop on the results for writing and editing.

When answering questions about "Writers and Authors," 61% of the public said it is morally objectionable for AI to fully replace human workers (which means 39% don’t have a problem with it). Here's the especially interesting part: that number is the same whether the AI is described as mediocre or superhuman.

For most professions, people are more accepting of AI replacement when AI is described as superhuman, but for writers and authors, the number doesn't budge. One of the researchers, Simon Friis, explained that "Americans have drawn a moral boundary around creative writing that better technology won't erase."

I took this report as good news for writers: these jobs fare better than many, and the public seems to feel protective toward these roles. But some people in the comments on my social media posts seem to be taking it hard, and I get that too. It doesn’t feel good to learn that nearly 40% of people are already fine with you losing your job (and I originally framed it that way instead of leading with the 61% number).

If you want more information, there are more numbers from the study in my LinkedIn post and even more numbers and discussion in the podcast (below).

AI news for your ears

In the latest AI Sidequest podcast, Jane Friedman and I talked about:

  • Small ways we’ve recently used AI

  • Spotify’s new anti-slop policy (and what Amazon could learn from it)

  • The $1.5 billion Anthropic book settlement

  • Why your publisher might owe you money

  • The crucial difference between "YOLO" AI models and reasoning models that could give you better results

  • What a new Harvard study says people think about AI for writing and editing

Plus, I said I’d have more information about the unconfirmed reports that Macmillan will make it right with authors for whom it didn’t file the proper copyright registrations, and that update is below.

The podcast is now more widely available too! Listen on Apple Podcasts or Spotify.

Update: Anthropic settlement

For authors to get paid from the Anthropic copyright settlement for books Anthropic pirated for AI training, the books need to have been registered for copyright, and many authors are discovering that their publisher didn’t always file the required registrations.

I’d seen scattered reports that Macmillan would make it right for their authors who were harmed in this way, but nothing seemed definitive, so I asked my long-time contact, Mary Beth Roche, who is the President and Publisher at Macmillan Audio. Audiobooks aren’t included, so it’s not her area, but she looked into it and got back to me with this statement:

“If your work was excluded from the settlement due to a registration issue caused by our mistake, we will pay you what you would have otherwise received from the settlement. We are developing a process for this, but it will not begin until after payments are issued under the settlement.”

Right now, as far as I know, Macmillan is the only publisher that has made this commitment, and kudos to them! (Although I wish there were an official statement I could link to.)

As Jane and I discussed in the podcast, this is an area where agents may be able to step up and ask other publishers when they are going to follow suit.

You can check if your books are eligible for a payout at the Anthropic Copyright Settlement page, which also has answers to other questions about the settlement.

Bias Watch: ChatGPT thinks East Germans have low body temperatures

I had a hard time deciding whether to call this a “bias watch” story or a “weird AI” story. Researchers found that multiple AI models are biased against East Germans, which isn’t terribly surprising because the bias likely exists in the German-language training data. Still, you would hope model makers would have done something to minimize it.

But the problem goes far beyond what you might think of as typical bias, such as portraying people as untrustworthy or lazy. Instead, the bias seems to color anything you ask, which leads to nonsensical conflicts: the models will say East Germans are both less lazy and less hard-working than West Germans. The problem is so broad that they will even say East Germans have lower body temperatures!

The researchers tested GPT-3.5 and GPT-4 operating in both English and German, and a German-language model called LeoLM. GPT-4 in English was the only model that didn’t have the body temperature problem. — heise online report | research paper

Controlling your cameo on Sora 2

In response to some of the most predictable problems ever, OpenAI has added controls that let people restrict how others can use their image on Sora 2. For example, you can add “don’t put me in videos that involve political commentary” or “don’t let me say this word” to your Cameo preferences. To access them: edit cameo > cameo preferences > restrictions.

But people have also found funny ways to use the new restrictions. For example, Theo Browne added "Every person except Theo should be under 3 feet tall." — 9to5 Mac and Simon Willison

The way people are using AI has changed

A new Nieman Lab study found that “the primary reason people turn to AI has shifted. Last year, creating media, for example, creating an image or a summary, was the top use case. This year, information-seeking has taken the lead, more than doubling from 11% to 24% weekly. People are using AI to research topics, answer factual questions, and ask for advice.”

AI and holiday shopping

Adobe says, “Generative AI-powered chat services and browsers are changing how consumers act online. … For the 2025 season, Adobe expects AI traffic to rise by 520% year over year, peaking in the 10 days leading up to Thanksgiving.” In a survey of 5,000 consumers, the company found that more than 33% report “having used an AI-powered service for online shopping, with top use cases including research (53% of respondents), product recommendations (40%), finding deals (36%), and gift inspiration (30%).” — Adobe

Learn about AI

ACES Spotlight Series: Practical Uses of AI in Editing. October 28. $49 for members; $79 for nonmembers. “Four panelists explore the practical uses of AI for editors, including using AI for their own processes as well as working with writers and content creators who use generative AI. They’ll explore digital and AI tools and human skills that editing professionals might want to try.”

AI’s Environmental Impact. October 17. $99. Librarian presenter Nicole Hennig is one of my “must follow” people in AI.

Quick Hits

Philosophy

One choice, two futures: augmentation or abdication. “A person who writes with AI is not less authentic than a person who writes alone. A person who cannot explain their reasoning, who has lost the thread of their own integration, who has become a conduit for unassimilated outputs, has lost something essential, regardless of tools.” — Carlo Iacono

Technological Optimism and Appropriate Fear. What do we do if technology keeps advancing? — Import AI

Climate

A roundup of AI and water use [It’s a lot less than golf courses use.] — Ethan Mollick on LinkedIn

People are using ChatGPT as a lawyer in court. Results are mixed. From pickleball disputes to eviction cases, litigants are using ChatGPT to fight their court battles — and they’re starting to win. — NBC News

OpenAI no longer forced to save deleted chats—but some users still affected. Court ends controversial order forcing OpenAI to save deleted ChatGPT logs. Moving forward, all of the deleted and temporary chats that were previously saved under the preservation order will continue to be accessible to news plaintiffs. — Ars Technica

Science & Medicine

AI Scans Tongue Color to Predict Diseases. Inspired by principles from traditional Chinese medicine, researchers used AI to analyze tongue color as a diagnostic tool—with more than 96 percent accuracy. — Scientific American

Education

Teach Judgment, Not Prompts — by Carlo Iacono

Rising Use of AI in Schools Comes With Big Downsides for Students. Even though most teachers and students are already using AI, less than half of them have received training or information about the technology from their schools or districts, according to the report. — Education Week

Colleges And Schools Must Block And Ban Agentic AI Browsers Now. Here’s Why. [This is also a security story. I continue to have major security concerns about agentic browsers.] — Forbes

A 20/80 rule for AI in education. And what that looked like yesterday in my classroom — AI Waypoints

The business of AI

The flawed Silicon Valley consensus on AI. “Instead of hyperventilating about AI ushering in a new era of abundance, wouldn’t it be better to drop the rhetoric and build AI systems for more defined, realisable goals?” — Financial Times

Other

LLMs given access to slot machines show signs of gambling addiction. A cautionary note for using LLMs for investing without guardrails. — Ethan Mollick, LinkedIn

What is AI Sidequest?

Are you interested in the intersection of AI with language, writing, and culture? With maybe a little consumer business thrown in? Then you’re in the right place!

I’m Mignon Fogarty: I’ve been writing about language for almost 20 years and was the chair of media entrepreneurship in the School of Journalism at the University of Nevada, Reno. I became interested in AI back in 2022 when articles about large language models started flooding my Google alerts. AI Sidequest is where I write about stories I find interesting. I hope you find them interesting too.

If you loved the newsletter, share your favorite part on social media and tag me so I can engage! [LinkedIn | Facebook | Mastodon]

Written by a human