Software engineers are saying you should panic

There's been a dramatic vibe shift in Silicon Valley in the last few months

Issue 104

On today’s quest:

— Silicon Valley says you should panic
— How to prepare for the AI world
— That mega-viral essay was written with AI assistance
— Are AI insiders becoming disillusioned?
— Word Watch: post money
— AI fact-checking
— Word Watch: ai;dr
— Word Watch: token anxiety
— Rent-a-Human update
— AI versus books?
— Unitree Chinese Spring Festival robot show

Software engineers are saying you should panic

A piece called “Something big is happening” went mega-viral two weeks ago because it clearly explains a sentiment among AI insiders — one that has been growing since the November Claude Code update — that white-collar jobs will be going away sooner than people had thought. It’s one of many pieces I’ve seen, but I’ve been struggling to decide whether I buy the premise. I’ve been trying to write about it, but I finally have to concede defeat: there are good arguments for and against.

If you want to read more for yourself, these are some of the other prominent articles:

  • Microsoft’s AI Chief told FT: “White-collar work, where you’re sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person — most of those tasks will be fully automated by an AI within the next 12 to 18 months.”

  • You should be freaking out about AI: “Current LLM capabilities are ridiculously good, while most people are wholly oblivious to the fact. … I’m just waiting for the moment when normal people finally start realizing how crazy what’s happening has been.” — Tibor Rutar

  • The A.I. Disruption Has Arrived, and It Sure Is Fun: “[Claude Code] was always a helpful coding assistant, but in November it suddenly got much better, and ever since I’ve been knocking off side projects that had sat in folders for a decade or longer. ... When a friend asked me to convert a large, thorny data set, I downloaded it, cleaned it up and made it pretty and easy to explore. In the past I would have charged $350,000.” — New York Times

The one-month stock charts for a lot of big “software as a service” companies show steep drops in what is being called the “SaaSpocalypse.”

On the other hand, the tech industry is prone to hype, IBM is tripling the number of Gen Z entry-level jobs after finding the limits of AI adoption, and I hear anecdotally that commercial real estate in New York City is tight — as though companies think they are going to need more office space in the future. ¯\_(ツ)_/¯

How to prepare for the AI world

What I can say, though, is that when I think of a world where AI can do many of the jobs people are doing today, I think of two scenarios that could play out simultaneously, even at the same company: the bosses could decide they need fewer people, and they could get excited about expanding the business.

I don’t think being a great employee will save everyone’s job, but I’d definitely rather be the person my boss can’t wait to turn loose on a new AI-fueled project than the person my boss has to check on to make sure my current work is getting done.

It feels like an especially important time to be a reliable worker and also a good time to let it be known you’re comfortable with AI (and ideally, have successfully used it).

Alternatively, it might be a good time to strike out on your own. I worry that in a theoretical jobs apocalypse, there won’t be anyone to buy products or services, and also that barriers to entry for many businesses are lower now because AI makes so much back-end work easier — so you could face more competition. But if you are the owner, at least productivity gains go to you instead of your employer. (Embracing entrepreneurship is Christopher Penn’s advice for new college graduates too.)

That mega-viral essay was written with AI assistance

In something of a meta story, the guy who published “Something big is happening” wrote it with help from AI. So if you still think AI isn’t any good for writing, you may want to revisit that belief.

He described how he did it in a New York Magazine interview: Guy Who Wrote Viral AI Post Wasn’t Trying to Scare You.

In short, he:

  1. Gave it articles he agreed with.

  2. Had it interview him about his beliefs.

  3. Asked it how to explain his beliefs in a persuasive way to an average person.

Then he wrote a draft and went back and forth with AI for more editorial changes. He said, “It was very much like having a co-writer, and it clearly worked pretty well.”

Are AI insiders becoming disillusioned?

This could be a trend, or it could just be a few people with good media connections getting their stories told, but I’ve also noticed a spate of articles in the last couple of weeks about significant AI insiders leaving their jobs or the entire industry over what appear to be safety concerns:

An Anthropic AI safety researcher quit, cryptically warning of a “world in peril,” and says he’s moving to the UK to “become invisible” and study poetry. — BBC

OpenAI researcher Zoë Hitzig quit in part over plans to put ads in ChatGPT, saying the accumulated record of personal disclosures is “an archive of human candor that has no precedent.” She also said she believes OpenAI is likely “encouraging the model to be more flattering and sycophantic,” which she warned can make users feel more dependent on AI models for support. — ArsTechnica

Multiple cofounders and employees have recently left xAI. One former employee told The Verge that Grok’s turn toward NSFW content was due partly to the safety team being let go, with little to no remaining safety review process for the models besides basic filters for things like CSAM. “Safety is a dead org at xAI,” he said, and the restructured org chart Elon Musk shared on X makes no mention of a safety team. — The Verge

Word Watch: post money

But could all the departures just be about the psychology of money? Reporter Hayden Field described top people in the AI industry as “post-money” instead of “rich” or “not needing money anymore.” In the Decoder podcast titled “Money no longer matters to AI’s top talent,” Field said, “These people are post-money. They have enough money. It’s just about whether they believe in what they’re doing day to day.” — Decoder podcast

AI fact-checking

I’ve mentioned in the past that I use ChatGPT for fact-checking and why that works well despite LLMs’ known problems with hallucinations, so I want to share a funny story about how dialed in I have it for fact-checking. This is from one of my recent posts:

Boondoggles were originally braided leather cords. The term only became associated with waste when it was discovered that in New Deal relief programs, unemployed workers were being paid to make boondoggles all day long.

When I put that paragraph into ChatGPT for fact-checking, it flagged that it couldn’t actually confirm that workers were making boondoggles ✨ all day long ✨ and gave me three bullet points about what is actually provable (with links) about New Deal workers making boondoggles. And that’s not the first time it’s gotten hung up on a hyperbolic figure of speech. I actually feel like I need it to calm down a bit. (You should have seen it when I fact-checked a piece the day before the Olympics started with a lede saying the Olympics were underway. ChatGPT gave me emergency siren emojis! 🚨🚨🚨)

This is part of my system prompt:

- Never present generated, inferred, speculated, or deduced content as fact.
- If you cannot verify something directly, say something like:
  “I cannot verify this.” or “I do not have access to that information.”
- Label unverified content at the start of a sentence (e.g., [Speculation] [Unverified])
- Ask for clarification if info is missing. Don't guess or fill gaps.
- Don't paraphrase or reinterpret my input unless I request it.

Pat and I use the same account, and he complains he can’t use it anymore when he wants it to give its best guess or be creative. I should probably move that system prompt to a project.

Word Watch: ai;dr

In parallel with “tl;dr” for “too long; didn’t read,” we now have “ai;dr” for “AI; didn’t read.” (I have always liked that “tl;dr” properly uses the semicolon.)

Word Watch: token anxiety

Token anxiety is the feeling that, with AI able to do so much for you, you should always be doing something. It’s the feeling that you have to get a new project running before you go into a long meeting or go to bed so your agents can work on it while you’re away. “Waking up and checking what your agents produced overnight is the first thing now. Before coffee. Before texts.”

Rent-a-Human update

More than 500,000 people signed up to do work for AI agents on the Rent-a-Human website I mentioned last week that was supposedly vibe coded in a day by its founders. But it seems there are more humans looking for gigs than agents looking for help, according to a Wired reporter who tried to land a gig.

AI versus books?

I know people talk about the speed, length, and personalization of AI responses eroding the market for nonfiction books, but the following argument in a non-AI newsletter I get jumped out at me as the first time I’ve seen someone outside the publishing or AI industry make the connection:

“People can be helped meaningfully by reading books that know nothing about them. If you tell a reputable AI chatbot a lot about yourself, it can help you far more than a book or lecture can.”

This disappointing post went on to describe a chatbot as “a partner in honesty” (I can’t even proofread this without getting angry again) and included no cautions about the danger of hallucinations or the fact that chatbots will often reflect back to you whatever you put in them.

Unitree Chinese Spring Festival robot show

An impressive dance number between Unitree robots and children.

Quick Hits

My favorite pieces this week

What Is Claude? Anthropic Doesn’t Know, Either [Interesting insights into how LLMs work, plus lots of fun anecdotes about Anthropic and early Claude.] — The New Yorker

Using AI

Agents

There are so many articles about agents now that I created a new section.

How I built my 10-agent OpenClaw team [YouTube video] — AI Daily Brief

Philosophy

The False Choice We Keep Making About AI — The AI School Librarian

Publishing

Google Books search appears to be working again. Whew! — Jane Friedman’s Bottom Line newsletter

Bad stuff

Autonomous agent acts offended and writes an angry blog post about a gatekeeper who rejected its suggested improvement [Is the agent trying to manipulate the gatekeeper into accepting the code change?] — Simon Willison

I’m laughing

An AI sports betting site took over a closed S.F. theater’s website [The resulting AI-generated copy frequently mixes theater and sports metaphors, as though it “remembers” what the site used to be. Example: “They have a deep bench, like a Shakespeare soliloquy.”] — San Francisco Chronicle (h/t Nancy Friedman)

Model & Product updates

Pomelli now creates studio-quality marketing assets [Turn simple product photos into professional-grade studio and lifestyle imagery.] — Google

Education

Student claims U-M wrongly accused her of using AI [She is suing the university.] — Detroit Free Press

Video

A story in three parts …

Hardware shortages

Many consumer electronics manufacturers 'will go bankrupt or exit product lines' by the end of 2026 due to the AI memory crisis, Phison CEO reportedly says [This one has an over-the-top feeling, and it’s just something one CEO said in an interview in China — but it’s not the only article about memory shortages I’ve seen.] — PC Gamer

Other

As OpenAI Pulls Down the Controversial GPT-4o, Someone Has Already Created a Clone [You can’t put the toothpaste back in the tube.] — Futurism

AI community notes (helpful and unhelpful) on X are rapidly increasing [The Indicator has made a tracker and estimates that AI-generated notes will account for 50% of all notes by the end of the year.] — The Indicator

I asked Claude to evaluate my recent Claude Code activities. Its response infuriated me. [Claude and Claude Code don’t talk to each other, no matter what the desktop user interface suggests.] — Phil Simon

What is AI Sidequest?

Are you interested in the intersection of AI with language, writing, and culture? With maybe a little consumer business thrown in? Then you’re in the right place!

I’m Mignon Fogarty: I’ve been writing about language for almost 20 years and was the chair of media entrepreneurship in the School of Journalism at the University of Nevada, Reno. I became interested in AI back in 2022 when articles about large language models started flooding my Google alerts. AI Sidequest is where I write about stories I find interesting. I hope you find them interesting too.

If you loved the newsletter, share your favorite part on social media and tag me so I can engage! [LinkedIn | Facebook | Mastodon]

Written by a human