Doom and anti-doom
There was a lot of doom to talk about this week.
Issue 105
On today’s quest:
— The Pentagon goes nuclear on Anthropic. OpenAI swoops in
— More AI doom
— What are the barriers to rapid AI adoption?
— My ongoing love affair with Python
— Word Watch: cognitive debt
— Word Watch: claw
— How good are AI image and video detectors?
— Oh, the irony
The Pentagon goes nuclear on Anthropic. OpenAI swoops in
The Pentagon insists Anthropic must allow it to use Claude for domestic surveillance and fully autonomous weapons, and it is taking steps to designate the company as a supply-chain risk, an action normally reserved for foreign adversaries that would severely constrain the company (and its major partners).
Further, the Pentagon almost simultaneously reached an agreement with OpenAI that the company says does place limits on surveillance and fully autonomous weapons. But as far as I can tell, almost everyone thinks OpenAI is lying (or doesn’t understand its own contract terms) and has actually agreed to no such limits.
This was a HUGE story in AI over the last few days, and multiple shows released emergency podcasts about it. Here are some big-picture points from the podcast hosts and guests:
Holy sh*t, it’s cartoonishly evil that the Pentagon is waging this fight over (checks notes) mass surveillance and killer AI-drones.
Holy sh*t, if the Pentagon goes through with this, it’s a disaster for the U.S. AI industry (and maybe all U.S. industry) because who would want to invest here when the government can just decide to kill your company if you don’t do what they want?
Sam Altman is a craven liar, but we already knew that.
Each of these podcasts had good parts, bad parts, and something a little different from the rest. (What can I say? I had trouble sleeping):
What happened to Anthropic could happen to any company, w/ Hayden Field — Bulwark Takes
At the Pentagon, OpenAI is in, Anthropic is Out — Hard Fork
Who controls AI? — The AI Daily Brief
The Pentagon comes for Claude — The Bayesian Conspiracy
UPDATE: This is a fast-moving story and keeps changing as I try to write about it. Last night Sam Altman said they got additional terms for the contract, and this morning, I’m seeing that OpenAI now says it will delay deploying ChatGPT for the intelligence agencies until contract issues are worked out. OpenAI’s head of post-training announced he is leaving OpenAI and moving to Anthropic. (Employee sentiment may be why Altman seems to be backtracking.)
More AI doom
There was ANOTHER huge viral post last week that caused a stock market reaction by outlining how AI will crush jobs. The Citrini Research post was written as a dispatch from the future — a scenario about what could happen if AI is as good as people say it is.
In the fictional 2028 economics memo:
AI has taken 15% of Fortune 500 jobs.
Laid-off employees spend significantly less on consumer goods and default on mortgages.
Unemployed white-collar workers move to gig work, driving down wages.
Autonomous vehicles further destroy jobs and gig work wages.
Consumers also spend less on goods because AI agents can endlessly search for the best prices.
“Software as a service” companies lose pricing power because their customers can build software in-house or buy from startups that can undercut them because coding is so much cheaper.
The government fails to adequately respond.
Criticisms of the report are that it’s an extreme scenario: everything would have to go right quickly with AI to get here, and every mechanism we have to deal with problems would have to go wrong.
More reading:
The AI productivity boom is not here (yet) [Something big may indeed be happening with AI itself. For now, it remains largely invisible in the macroeconomic data.] — The Economist
Breaking Down the Doomsday AI Memo That Spooked Markets — Wall Street Journal
Citrini's Scenario Is a Great But Deeply Flawed Thought Experiment — Don’t Worry About the Vase
What are the barriers to rapid AI adoption?
Last week, I punted on providing my own thoughts about the job-doom scenarios. This week, I have a few thoughts to share.
Even if AI can affect all knowledge jobs in the way it appears to be affecting software development:
Big companies move slowly in general. Even if AI were able to replace many jobs today, I don’t see it causing catastrophic problems at large companies in the next 12 to 18 months because big companies are just slow. (At least some of the companies citing AI as the reason for layoffs right now are likely using it as an excuse that sounds better than “we’re in trouble.”)
Big companies are more likely to be locked into suboptimal products, which will also slow adoption. Lately, I’ve noticed people saying things like “Claude sounds great, but I have to use Copilot at work.” In fact, this is a commonly cited reason that AI pilots in corporations fail — they aren’t using frontier models. AI is rapidly improving, but if employees can’t take advantage of the better tools because they are limited to other systems, companies aren’t going to see the efficiency gains that would lead to layoffs.
AI systems are complicated. Without widespread training, companies won’t be able to achieve the gains that are theoretically possible. There is no user manual for AI, and most employees have no time and few incentives to wade through YouTube videos and newsletters to learn about cutting-edge possibilities. The one thing I’ve become convinced of over the last month or so is that there is a massive need for AI training.
Other corporate jobs probably aren’t as amenable to AI disruption as software development. I can think of multiple reasons software development may be a special case: the task itself could be more amenable to disruption by AI, developers could be much more tuned-in to AI and able to quickly adapt, developers may be more open to the idea of using AI because they have more faith in software in general, and developers may find it more fun and interesting to test new software tools than other types of workers.
Compute will be a barrier. Reasoning and agents take a lot more tokens than simple queries, and the U.S. can’t build and power data centers fast enough to meet the computing demand that would be necessary for the kind of AI that could take 15% of Fortune 500 jobs in the next 12 to 18 months.
The bottom line is that although I’ve been personally impressed by what I’ve seen AI do, and I believe software developers when they say their jobs have dramatically changed in just a few months, I’m not convinced we’re facing an imminent jobs apocalypse at big companies.
On the other hand, I do think AI is coming faster for “low-hanging fruit” jobs and freelance/contract work. Further, I think AI could provide a big efficiency boost for small businesses where the owners have a financial incentive to find the best ways to use the new technology.
My ongoing love affair with Python
Last week, I told you how ChatGPT wrote a Python script I used to get 200 transcripts off our podcast hosting platform.
Those transcripts often have two fully edited “articles” in them because I cover two topics in every Tuesday show. It would be ideal for search engine optimization if I could separate those two posts and put each one on our website as a single article without a lot of manual cutting and pasting.
This week, ChatGPT wrote a Python script that accurately separated those transcripts into two files when they actually had two topics. It took ~10 minutes to get the script working, and shockingly, less than one second for the script to run on all the files. It went so fast I thought nothing had happened until I looked in the folder and saw the hundreds of separated files.
My gut reaction was that if I had known what Python can do 15 years ago, I’d rule the world! But in reality, without AI, I imagine it still would have taken a lot of time to write and debug the script.
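For the curious, here’s a minimal sketch of how a transcript-splitting script like that could work. To be clear, this is my guess, not the script ChatGPT actually wrote, and the `## ` topic-heading marker, the `.txt` extension, and the file-naming scheme are all assumptions for illustration:

```python
from pathlib import Path

# Assumed convention: each two-topic transcript marks the start of a
# topic with a heading line beginning with "## ".
TOPIC_MARKER = "## "

def split_transcript(text: str) -> list[str]:
    """Split a transcript into one chunk per topic heading.

    A transcript with a single topic comes back as a one-element list,
    so single-topic files are left effectively untouched.
    """
    chunks: list[list[str]] = []
    for line in text.splitlines():
        # Start a new chunk at each heading (and for any preamble text).
        if line.startswith(TOPIC_MARKER) or not chunks:
            chunks.append([])
        chunks[-1].append(line)
    return ["\n".join(chunk).strip() for chunk in chunks]

def split_folder(src: Path, dest: Path) -> int:
    """Write each topic of every .txt transcript in src to its own file."""
    dest.mkdir(exist_ok=True)
    written = 0
    for path in src.glob("*.txt"):
        for i, part in enumerate(split_transcript(path.read_text()), 1):
            (dest / f"{path.stem}-part{i}.txt").write_text(part)
            written += 1
    return written
```

Because the script only reads files and writes new ones, a mistake can’t destroy the originals, which is a nice property when you’re running AI-written code over hundreds of files.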
Word Watch: cognitive debt
In coding, “technical debt” is the cost that accrues when bad code is allowed to stay in a project, making it harder to deal with later. You’ve made an easy choice now, but you pay for it later.
Some developers are now talking about “cognitive debt,” which is when team members don’t understand why AI-generated code is structured the way it is — they don’t have a good mental model of what the agent is building — so they struggle to debug or expand it.
Word Watch: claw
Simon Willison says, “‘claw’ is becoming a term of art for the entire category of OpenClaw-like agent systems — AI agents that generally run on personal hardware, communicate via messaging protocols and can both act on direct instructions and schedule tasks.” OpenClaw competitors now apparently include zeroclaw, ironclaw, and picoclaw, and Simon Willison pointed to an X post by the influential Andrej Karpathy (who also coined the term “vibe coding”) that used the term “claw.”
How good are AI image and video detectors?
The New York Times ran more than 1,000 tests across 12 different tools designed to detect whether images, video, and audio were AI-generated. Most tools were able to detect “quickie” AI images and AI-generated audio but struggled to accurately identify AI fingerprints in more complex images, images without people, and videos.
Oh, the irony
Facebook’s director of safety and alignment for the Superintelligent team had an unsafe experience with a misaligned agent last week. Summer Yue reported on X (TwitterViewer link) that OpenClaw ignored her instructions multiple times and nearly deleted her entire inbox. She said, “I had to RUN to my Mac mini like I was defusing a bomb.”
She later said she believes her inbox was too large and overwhelmed the system’s context window, leading it to forget its instructions.
Quick Hits
Using AI
Building Websites with Claude Code — Leon Furze
Claude and Cardio — Phil Simon
Agents
Introducing Custom Agents — Notion
Are You ‘Agentic’ Enough for the AI Era? [The most valuable skill is making good choices about what your agent should work on next.] — Wired
Resources
Frequently asked questions — Authors Alliance
Publishing
Marketplaces Are the Next Frontier in [News] Publisher Deals With AI Companies — Wall Street Journal
Cleveland paper’s use of AI for writing boosts traffic, spooks staffers — Washington Post
Legal
No devil at this crossroads: the moral case for using AI to help close the access to justice gap — Suffolk Lit Lab
Bad stuff
Clawed. On Anthropic and the Department of War — Hyperdimensional
I’m laughing
Two Waymos struggle to get past each other [OK, I *am* laughing, but I also wouldn’t want to be the guy stuck behind them.] — Bluesky
Job market
The Hottest Job in Tech: Writing Words — Business Insider
Model & Product updates
Education
What AI Executives Tell Their Own Kids About the Jobs of the Future [They almost all touted the benefits of a broad liberal arts education.] — Wall Street Journal
How Teens Use and View AI — Pew Research Center
Audio
This AI-generated podcast network publishes 11,000 episodes a day. It also ripped off media outlets — Indicator
The business of AI
Other
Teens are using AI frequently in their daily lives, and many parents aren't aware, survey finds — CBS News
ChatGPT ads spotted and they are quite aggressive — Search Engine Land
I asked Claude to evaluate my recent Claude Code activities. Its response infuriated me. [Claude and Claude Code don’t talk to each other, no matter what the desktop user interface suggests.] — Phil Simon
What is AI Sidequest?
Are you interested in the intersection of AI with language, writing, and culture? With maybe a little consumer business thrown in? Then you’re in the right place!
I’m Mignon Fogarty: I’ve been writing about language for almost 20 years and was the chair of media entrepreneurship in the School of Journalism at the University of Nevada, Reno. I became interested in AI back in 2022 when articles about large language models started flooding my Google alerts. AI Sidequest is where I write about stories I find interesting. I hope you find them interesting too.
If you loved the newsletter, share your favorite part on social media and tag me so I can engage! [LinkedIn — Facebook — Mastodon]
Written by a human