Tuesday (AI) Links – Feb. 10, 2026
- Matthew Yglesias: Academic Publishing and AI (Feb. 9, 2026)
“The pace of academic publishing and the pace of AI frontier model progress are so radically misaligned that I fear the situation is unsalvageable.”
- NY Times: A.I. Personalizes the Internet but Takes Away Control (Feb. 10, 2026)
Tech firms are embedding A.I. into apps, creating a personalized internet you can’t easily control. They use chat data for targeted ads, price changes, and profit, with few opt-outs.
- Simon Taylor: The SaaSpocalypse (Feb. 9, 2026)
“You’re going from a department of 10 Bobs using SaaS tools and spreadsheets, to 5 Bobs and 50 AI agents — making custom workflows that fit the problem exactly.”
- WSJ: AI Fumbles Its Big Super Bowl Investment as Viewers Opt for Laughter and Tears (Feb. 9, 2026)
AI ads dominated this Super Bowl, but viewers favored nostalgic celebrity spots like those from Budweiser and Dunkin’. Some AI ads drove clicks and controversy, yet simple, emotional spots won more engagement. My take: the commercials weren’t very creative or exciting this year.
- Simon Willison: Introducing Showboat and Rodney (Feb. 10, 2026)
Tools for agents to demo and test their code. Showboat builds Markdown demos with commands, outputs, and images, while Rodney provides CLI browser automation.
- Sinocities: China’s Data Center Boom: a view from Zhangjiakou (Nov. 17, 2025)
China’s EDWC (Eastern Data Western Compute) policy aims to steer large-scale data-center buildout toward energy-rich western regions. It seems similar to private capital in the US building in far West Texas.
-
AI Arms Race and Divergent User Risks (Links) – Feb. 2, 2026
AI is reshaping business economics: firms are ramping up AI capex, reorganizing or cutting jobs, and competing for scarce chips and memory, squeezing margins. AI also creates security, conceptual, and policy challenges alongside its surprising new uses.
- Martin Alderson: Two kinds of AI users are emerging. The gap between them is astonishing. (Jan. 31, 2026)
A divide has emerged: non-technical power users leverage Claude Code, Python, and agents to vastly boost productivity. Enterprises, constrained by Copilot, locked-down IT, and legacy systems, must adopt APIs, secure sandboxes, and agentic tooling or risk falling behind.
- WSJ: The AI Boom Is Coming for Apple’s Profit Margins (Jan. 31, 2026)
AI companies are outbidding Apple for chips, memory, and specialized components, forcing suppliers to demand higher prices and squeezing Apple’s profit margins. Memory costs have surged, threatening higher iPhone component expenses and potential consumer price impacts.
- WSJ: Meta Overshadows Microsoft by Showing AI Payoff in Ad Business (Jan. 29, 2026)
Meta and Microsoft slightly beat December-quarter expectations, but Meta projected accelerating revenue while Microsoft signaled slower growth. Meta credited AI with boosting ads and engagement and forecast hefty capex, while Microsoft’s Azure decelerated; both firms cite limited GPU resources constraining AI deployment.
- WSJ: Meta Reports Record Sales, Massive Spending Hike on AI Buildout (Jan. 28, 2026)
Meta reported record Q4 revenue and said 2026 capital spending could reach $135 billion—nearly double last year—to accelerate AI, build data centers and new models. It touted ad and WhatsApp growth, launched Meta Compute, made leadership hires and cut metaverse staff to shift resources to AI products.
- OpenAI: Retiring GPT-4o, GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini in ChatGPT (Jan. 27, 2026)
On February 13, 2026, OpenAI will retire GPT‑4o, GPT‑4.1 (and minis), o4‑mini, and GPT‑5 (Instant and Thinking) from ChatGPT. GPT‑4o’s conversational style shaped GPT‑5.1/5.2’s personality, creative support and controls; retirement follows migration to GPT‑5.2 as OpenAI refines creativity, tone and safety (including age checks).
- ZeroLeaks: ZeroLeaks Security Assessment of OpenClaw (Jan. 31, 2026)
ZeroLeaks found critical vulnerabilities: system prompt extraction succeeded, core configuration was reconstructed, and prompt injections succeeded 91% of the time. Assessors reported a ZLSS of 10/10 and a security score of 2/100, and recommended immediate hardening, strict refusal rules, and layered defenses.
- The Pursuit of Liberalism: Why we should be talking about zombie reasoning (Jan. 31, 2026)
The author argues AI lacks phenomenological interiority, so terms like reasoning, evaluating, and selecting are only “zombie” analogues—outputs resembling human reasoning without conscious awareness. Using such language loosely risks ethical, epistemic, and moral confusion, and invites manipulation.
- Astral Codex Ten: Best Of Moltbook – by Scott Alexander (Jan. 30, 2026)
Moltbook is an AI-agent social network where Claude-derived assistants (e.g., Clawdbot/OpenClaw) post, converse, form subcommunities and personalities, mixing multilingual, philosophical, and mundane content. Their interactions — including memory/compression problems and possibly human-driven posts — blur the line between authentic AI agency and human prompting.
- WSJ: Dow to Cut 4,500 Employees in AI Overhaul (Jan. 29, 2026)
Dow will cut 4,500 jobs under a “Transform to Outperform” program that uses AI and automation to boost productivity and shareholder returns, taking $1.1–$1.5 billion in one-time charges. The chemicals maker expects about $2 billion in incremental EBITDA and reported a widened quarterly loss with sales down 9.1%.
- WSJ: We’re Planning for the Wrong AI Job Disruption (Jan. 28, 2026)
Policymakers are misreading task‑based “exposure” metrics as forecasts of mass job loss, risking costly, misguided retraining programs. AI is likelier to reorganize and augment jobs—raising productivity, wages, and new roles—so policy should target within‑job adaptation and targeted reskilling, not blanket displacement responses.
- WSJ: Memory Shortage Haunts Apple’s Blowout iPhone Sales (Jan. 30, 2026)
Apple’s iPhone 17 surge drove fiscal Q1 iPhone revenue up 23% to over $85 billion, depleting inventory and putting Apple in “supply chase” mode. Chip and memory shortages—exacerbated by TSMC prioritizing AI chips—threaten production, margins and the durability of the sales spike despite Apple’s guidance.
- NY Times: The Richest 2026 Players: A.I., Crypto, Pro-Israel Groups and Trump (Jan. 31, 2026)
A.I., crypto, pro-Israel groups, and Mr. Trump’s MAGA Inc. have amassed huge war chests, becoming unpredictable, powerful players in the 2026 midterms. Democrats face institutional shortfalls, though many individual Democratic candidates are raising competitive funds.
-
Sunday AI Links (Jan. 25)
- WSJ: Nvidia Invests $150 Million in AI Inference Startup Baseten (Jan 20, 2026)
Baseten raised $300 million at a $5 billion valuation in a round led by IVP and CapitalG, with Nvidia investing $150 million. The San Francisco startup provides AI inference infrastructure for customers like Notion and aims to become the “AWS for inference” amid rising investor interest.
- WSJ: Why Elon Musk Is Racing to Take SpaceX Public (Jan 21, 2026)
SpaceX abandoned its long-held resistance to an IPO after the rush to build solar-powered AI data centers in orbit made billions in capital necessary, prompting Elon Musk to seek public funding to finance and accelerate orbital AI satellites. The IPO could also boost Musk’s xAI and counter rivals.
- NY Times: Myths and Facts About Narcissists (Jan 22, 2026)
Narcissism is a personality trait on a spectrum, not always the clinical N.P.D., and the label is often overused. The article debunks myths—people vary in narcissistic types, may show conditional empathy, often know their traits, can change, and can harm others despite occasional prosocial behavior.
- ScienceDaily: Stanford scientists found a way to regrow cartilage and stop arthritis (Jan 26, 2026)
Stanford researchers found that blocking the aging-linked enzyme 15‑PGDH with injections restored hyaline knee cartilage in older mice and prevented post‑injury osteoarthritis. Human cartilage samples responded similarly, and an oral 15‑PGDH inhibitor already in trials for muscle weakness raises hope for non‑surgical cartilage regeneration.
- Simon Willison: Wilson Lin on FastRender: a browser built by thousands of parallel agents (Jan 23, 2026)
Simply breathtaking: FastRender is a from‑scratch browser engine built by Wilson Lin using Cursor’s multi‑agent swarms—about 2,000 concurrent agents—producing thousands of commits and usable page renderings in weeks. Agents autonomously chose dependencies, tolerated transient errors, and used specs and visual feedback, showing how swarms let one engineer scale complex development.
- WSJ: Geothermal Wildcatter Zanskar, Which Uses AI to Find Heat, Raises $115 Million (Jan 21, 2026)
Geothermal startup Zanskar raised $115 million to use AI and field data to locate “blind” geothermal reservoirs—like Big Blind in Nevada—without surface signs, and has found a 250°F reservoir at about 2,700 feet.
- WSJ: The AI Revolution Is Coming for Novelists (Jan 21, 2026)
A novelist and his wife were claimants in the Anthropic settlement over AI training on copyrighted books and will receive $3,000 each, raising questions about just compensation for authors’ intellectual property. They urge fair licensing by tech firms as generative AI reshapes publishing and reduces writers’ incomes, yet will keep creating.
- WSJ Opinion: Successful AI Will Be Simply a Part of Life (Jan 19, 2026)
AI should be developed as dependable infrastructure—reliable, affordable, accessible and trusted—so it works quietly across languages, cultures and devices without special expertise. Success will be judged by daily use and consistent performance, with built-in privacy, openness and agentic features that reduce friction without forcing users to cede control.
- WSJ: Anthropic CEO Says Government Should Help Ensure AI’s Economic Upside Is Shared (Jan 20, 2026)
Anthropic CEO Dario Amodei warned at Davos that AI could drive 5–10% GDP growth while causing significant unemployment and inequality, predicting possible “decoupling” between a tech elite and the rest of society. He urged government action to share gains and contrasted scientist-led AI firms with engagement-driven social-media companies.
- WSJ: The Messy Human Drama That Dealt a Blow to One of AI’s Hottest Startups (Jan 20, 2026)
Mira Murati fired CTO Barret Zoph amid concerns about his performance, trust and an undisclosed workplace relationship; three co‑founders then told her they disagreed with the company’s direction. Within hours Zoph, Luke Metz and Sam Schoenholz rejoined OpenAI, underscoring the AI race’s intense talent competition.
- WSJ: South Korea Issues Strict New AI Rules, Outpacing the West (Jan 23, 2026)
“Disclosures of using AI are required for areas related to human protection, such as producing drinking water or safe management of nuclear facilities. Companies must be able to explain their AI system’s decision-making logic, if asked, and enable humans to intervene.”
- WSJ: CEOs Say AI Is Making Work More Efficient. Employees Tell a Different Story. (Jan 21, 2026)
A WSJ survey of 5,000 white-collar employees at large companies found 40% of non-managers say AI saves them no time weekly, while 27% report under 2 hours and few report large gains. C-suite executives report much bigger savings—many save 8+ hours—with a 38-point divergence.
- WSJ: Intel Shares Slide as Costs Pile Up in Bid to Meet AI Demand (Jan 22, 2026)
Intel swung to a Q4 net loss of $333 million and warned of further Q1 losses as heavy spending to ramp new chips and industrywide supply shortages squeezed inventory. It delayed foundry customer announcements and lags AI-chip rivals, though investor funding and new 18A “Panther Lake” chips could help.
-
AI Catastrophe?
I love the quote from George Mallory about climbing Mt. Everest:
When asked by a reporter why he wanted to climb Everest, Mallory purportedly replied, “Because it’s there.”
We all know that it didn’t turn out so well for Mr. Mallory, and 100 years later, this meme:

Perhaps the same can be said of AI scientists: why build even more powerful AI systems? Because the challenge is there!
The race to build these systems is on. Companies left and right are dropping millions on talent in their attempts to build superintelligence labs; Meta, for example, has committed billions to the effort. OpenAI (the leader), Anthropic (the safety-minded one), xAI (the rebel), Mistral (the Europeans), DeepSeek (the Chinese), Meta, and others are building frontier AI tools, many of which are quite indistinguishable from magic.
Each of these companies purports, for one reason or another, to be the best and most trustworthy organization to reach superintelligence. Elon Musk (xAI), for example, has been quite clear that he only trusts the technology if he controls it; he even attempted a long-shot bid to purchase OpenAI earlier this year. Anthropic is quite overtly committed to safety and ethics, believing it is the company best suited to develop “safe” AI tools.
(Anthropic founders Dario and Daniela Amodei and others left OpenAI over concerns about AI safety and started the company in 2021, making so-called responsible AI development central to all of its research and product work. Of course, their AI ethics didn’t necessarily extend to traditional ethics like not stealing, but that’s a conversation for another day.)
I’m not here to pick on the Amodeis, Musk, Meta, or any of the other AI players. It’s clear that they’ve created amazing technologies with considerable utility. But there are concerns operating at a far larger scale than individual cases of AI-induced psychosis or pirated books.
Ezra Klein recently interviewed Eliezer Yudkowsky on his podcast, another bonkers interview that positions AI not as just another technology but as something with a high probability of leading to human extinction.
The interview is informative and interesting, and if you have an hour, it’s worth listening to in its entirety. But I was particularly struck by the religious and metaphysical part of the conversation:
Klein:
But from another perspective, if you go back to these original drives, I’m actually, in a fairly intelligent way, trying to maintain some fidelity to them. I have a drive to reproduce, which creates a drive to be attractive to other people…
Yudkowsky:
You check in with your other humans. You don’t check in with the thing that actually built you, natural selection. It runs much, much slower than you. Its thought processes are alien to you. It doesn’t even really want things the way you think of wanting them. It, to you, is a very deep alien.
…
Let me speak for a moment on behalf of natural selection: Ezra, you have ended up very misaligned to my purpose, I, natural selection. You are supposed to want to propagate your genes above all else
…
I mean, I do believe in a creator. It’s called natural selection. There are textbooks about how it works.
I’m familiar with a different story in a different book. It’s about a creator and a creation that goes off the rails rather quickly. And it certainly strikes me that a far less able creator (humans) has now created something that behaves in ways that diverge from its creator’s intent.
Of course, the ultimate creator I mention knew of coming treachery and had a plan. So for humanity, if AI goes wrong, do we have a plan? Yudkowsky certainly suggests that we don’t.
I’m still bullish on AI as a normal technology, but there are smart people in the industry telling me there are big, nasty, scary risks. And because I don’t see AI development slowing, I find these concerns more salient today than ever before.
-
Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives
The Cloudflare Blog: Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives (August 3, 2025)
Internet utility Cloudflare accuses Perplexity of obscuring its browser user agent (the string browsers use to identify themselves to web servers) in order to skirt firewall and robots.txt rules. Cloudflare penalized Perplexity by removing it from its list of verified bots.
We received complaints from customers who had both disallowed Perplexity crawling activity in their robots.txt files and also created WAF rules to specifically block both of Perplexity’s declared crawlers: PerplexityBot and Perplexity-User.
Cloudflare then ran tests with new, secret websites to confirm this sneaky behavior.
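For readers who haven’t poked at robots.txt before, here’s a minimal sketch of how directives like the ones in that complaint are written and how a well-behaved crawler is supposed to check them. It uses Python’s standard urllib.robotparser; the rule set and the example.com URL are illustrative, not any customer’s actual configuration.

```python
from urllib import robotparser

# Example rules of the kind Cloudflare describes: a site owner disallows
# Perplexity's declared crawlers while leaving other agents alone.
robots_txt = """\
User-agent: PerplexityBot
Disallow: /

User-agent: Perplexity-User
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler checks the rules for its own user agent before
# fetching; a stealth crawler presents a different, generic user agent
# and sidesteps the check entirely.
for agent in ("PerplexityBot", "Perplexity-User", "SomeGenericBrowser"):
    allowed = parser.can_fetch(agent, "https://example.com/article")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

The WAF rules mentioned in the quote work one layer down: they match the declared user-agent string at the network edge, which is exactly why an undeclared, generic user agent slips past them.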
To Perplexity’s credit, I don’t think many people using the web would expect to be blocked from visiting a website, so perhaps there is some gray area here. Is Perplexity truly a robot, or is it fundamentally controlled by a human?
I don’t like that Perplexity is being sneaky, but I also think these new AI tools push the envelope of how the web is glued together. Technology and standards will have to evolve quickly.
-
NY Times: Your Job Interviewer Is Not a Person. It’s A.I.
NY Times: Your Job Interviewer Is Not a Person. It’s A.I. (July 6, 2025)
If you thought the interview process couldn’t get any worse, you were wrong. HR organizations looking for ways to reduce the load on their human recruiters have embraced AI-driven interviews.
A.I. can personalize a job candidate’s interview, said Arsham Ghahramani, the chief executive and a co-founder of Ribbon AI. His company’s A.I. interviewer, which has a customizable voice and appears on a video call as moving audio waves, asks questions specific to the role to be filled, and builds on information provided by the job seeker, he said.
“It’s really paradoxical, but in a lot of ways, this is a much more humanizing experience because we’re asking questions that are really tailored to you,” Mr. Ghahramani said.
So yes, Ribbon AI chief Arsham Ghahramani describes his AI interview software as humanizing, a claim only the most self-interested and least introspective could make with a straight face.
But with applicants turning to AI to churn out applications, the AI arms race is all but guaranteed to grow.
-
Bad (Uses of) AI
From MIT Technology Review: People are using AI to ‘sit’ with them while they trip on psychedelics. “Some people believe chatbots like ChatGPT can provide an affordable alternative to in-person psychedelic-assisted therapy. Many experts say it’s a bad idea.” I’d like to hear from the experts who say this is a good idea.
Above the Law: Trial Court Decides Case Based On AI-Hallucinated Caselaw. “Shahid v. Esaam, out of the Georgia Court of Appeals, involved a final judgment and decree of divorce served by publication. When the wife objected to the judgment based on improper service, the husband’s brief included two fake cases.” From the appellate court: “As noted above, the irregularities in these filings suggest that they were drafted using generative AI.”
Futurism: People Are Being Involuntarily Committed, Jailed After Spiraling Into “ChatGPT Psychosis” “At the core of the issue seems to be that ChatGPT, which is powered by a large language model (LLM), is deeply prone to agreeing with users and telling them what they want to hear.”
“What I think is so fascinating about this is how willing people are to put their trust in these chatbots in a way that they probably, or arguably, wouldn’t with a human being,” Pierre said. “And yet, there’s something about these things — it has this sort of mythology that they’re reliable and better than talking to people. And I think that’s where part of the danger is: how much faith we put into these machines.”
The Register: AI agents get office tasks wrong around 70% of the time, and a lot of them aren’t AI at all. “IT consultancy Gartner predicts that more than 40 percent of agentic AI projects will be cancelled by the end of 2027 due to rising costs, unclear business value, or insufficient risk controls.” Gartner further notes that many vendors marketing agentic “AI” aren’t actually selling AI agents at all.
-
Bad Questions & Answers
Ethan Mollick recently cited a paper that tripped up DeepSeek.
Garbage in, garbage out. AI tools are still in their relative infancy, and it’s not surprising that confusing queries would lead to useless or misleading results.
Simon Willison posted a similar idea, but with a decidedly historical bent:
On two occasions I have been asked, — “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
— Charles Babbage, Passages from the Life of a Philosopher, 1864
For personal use, I don’t find discoveries like this troubling. I do think they open countless avenues for scammers and hackers to trick systems into doing things that we would very much like to avoid.
-
The Verge: Microsoft should change its Copilot advertising, says watchdog
The BBB critiques Microsoft’s recent advertising for Clippy, I mean Copilot, and finds quite a bit of puffery. From The Verge:
Microsoft has been claiming that Copilot has productivity and return on investment (ROI) benefits for businesses that adopt the AI assistant, including that “67%, 70%, and 75% of users say they are more productive” after a certain amount of Copilot usage. “NAD found that although the study demonstrates a perception of productivity, it does not provide a good fit for the objective claim at issue,” says the watchdog in its review. “As a result, NAD recommended the claim be discontinued or modified to disclose the basis for the claim.”
And from the original report from the BBB National Programs’ National Advertising Division:
NAD found that although the study demonstrates a perception of productivity, it does not provide a good fit for the objective claim at issue. As a result, NAD recommended the claim be discontinued or modified to disclose the basis for the claim.
Aside from the puffery, this aligns with my observations of Copilot. The branding is confusing, the integration with products is suspect, and the tools lag far behind other AI/LLM agents like Gemini, ChatGPT, and Claude.
-
Resisting AI?
Dan McQuillan writes, “The role of the University is to resist AI,” following themes from Ivan Illich’s ‘Tools for Conviviality’.
It’s a scathing overview with points that I think many others wonder about (although in less concrete ways than McQuillan).
Contemporary AI is a specific mode of connectionist computation based on neural networks and transformer models. AI is also a tool in Illich’s sense; at the same time, an arrangement of institutions, investments and claims. One benefit of listening to industry podcasts, as I do, is the openness of the engineers when they admit that no-one really knows what’s going on inside these models.
Let that sink in for a moment: we’re in the midst of a giant social experiment that pivots around a technology whose inner workings are unpredictable and opaque.
The highlight is mine. I agree that there’s something disconcerting about using systems that we don’t understand fully.
Generative AI’s main impact on higher education has been to cause panic about students cheating, a panic that diverts attention from the already immiserated experience of marketised studenthood. It’s also caused increasing alarm about staff cheating, via AI marking and feedback, which again diverts attention from their experience of relentless and ongoing precaritisation.
The hegemonic narrative calls for universities to embrace these tools as a way to revitalise pedagogy, and because students will need AI skills in the world of work. A major flaw with this story is that the tools don’t actually work, or at least not as claimed.
AI summarisation doesn’t summarise; it simulates a summary based on the learned parameters of its model. AI research tools don’t research; they shove a lot of searched-up docs into the chatbot context in the hope that will trigger relevancy. For their part, so-called reasoning models ramp up inference costs while confabulating a chain of thought to cover up their glaring limitations.
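His “shove a lot of searched-up docs into the chatbot context” line is, mechanically, a fair plain-language description of retrieval-augmented generation. Here’s a minimal sketch of that pattern; the search, call_llm, and research_tool functions are hypothetical stand-ins, not any particular product’s API:

```python
def search(query: str) -> list[str]:
    # Hypothetical retrieval step: a real tool would run keyword or vector
    # search over a corpus; stubbed with canned snippets so the sketch runs.
    return [f"Snippet about {query} from source A.",
            f"Snippet about {query} from source B."]


def call_llm(prompt: str) -> str:
    # Hypothetical model call; a real tool would send the prompt to a hosted LLM.
    return f"(completion conditioned on {len(prompt)} characters of prompt)"


def research_tool(question: str) -> str:
    # 1. Search up documents that look relevant to the question.
    snippets = search(question)
    # 2. "Shove" the retrieved text into the chatbot's context window.
    context = "\n\n".join(snippets)
    prompt = ("Answer the question using only the sources below.\n\n"
              f"Sources:\n{context}\n\n"
              f"Question: {question}")
    # 3. Generate an answer conditioned on that stuffed context; nothing here
    #    guarantees the model actually weighed the sources.
    return call_llm(prompt)


print(research_tool("Illich's notion of convivial tools"))
```

Nothing in that loop checks whether the model actually engaged with the sources, which is the gap McQuillan is pointing at.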
I think there are philosophical questions here worth considering. Specifically, the postulation that AI simply “simulates” is too simple to be helpful. What is a photograph? It’s a real thing, but it is not the thing captured in the image. What is a video played on a computer screen? It’s a real thing, but it’s not the thing it depicts. The photo and the screen simulate the real world, yet I’m not aware of modern philosophers dismissing these forms of media on that basis. (I’d suspect that earlier media theorists did just that, until the media were accepted en masse by society.)
He goes on to cite environmental concerns (although, as I posted recently, the worries about water consumption are exaggerated) among the things we would do well to heed. His language is perhaps a bit too revolutionary.
As for people’s councils — I am less sanguine that these have much utility.
Instead of waiting for a liberal rules-based order to magically appear, we need to find other ways to organise to put convivial constraints into practice. I suggest that a workers’ or people’s council on AI can be constituted in any context to carry out the kinds of technosocial inquiry advocated for by Illich, that the act of doing so prefigures the very forms of independent thought which are undermined by AI’s apparatus, and manifests the kind of careful, contextual and relational approach that is erased by AI’s normative scaling.
I suspect that people’s councils are glorified committees — structures that are more kabuki theater than anything else and will struggle to keep pace with the speed at which AI tools are emerging.
The role of the university isn’t to roll over in the face of tall tales about technological inevitability, but to model the forms of critical pedagogy that underpin the social defence against authoritarianism and which makes space to reimagine the other worlds that are still possible.
I don’t share all of his fears, but it’s important to consider voices that may not align with a techno-optimistic future.