Tuesday (AI) Links – Feb. 10, 2026

- Matthew Yglesias: Academic Publishing and AI (Feb. 9, 2026)
“The pace of academic publishing and the pace of AI frontier model progress are so radically misaligned that I fear the situation is unsalvageable.”
- NY Times: A.I. Personalizes the Internet but Takes Away Control (Feb. 10, 2026)
Tech firms are embedding A.I. into apps, creating a personalized internet you can’t easily control. They use chat data for targeted ads, price changes, and profit, with few opt-outs.
- Simon Taylor: The SaaSpocalypse (Feb. 9, 2026)
“You’re going from a department of 10 Bobs using SaaS tools and spreadsheets, to 5 Bobs and 50 AI agents — making custom workflows that fit the problem exactly.”
- WSJ: AI Fumbles Its Big Super Bowl Investment as Viewers Opt for Laughter and Tears (Feb. 9, 2026)
AI ads dominated this Super Bowl, but viewers favored nostalgic celebrity spots from brands like Budweiser and Dunkin’. Some AI ads drove clicks and controversy, yet simple, emotional spots won more engagement. My take: the commercials weren’t very creative or exciting this year.
- Simon Willison: Introducing Showboat and Rodney (Feb. 10, 2026)
Tools for agents to demo and test their code. Showboat builds Markdown demos with commands, outputs, and images, while Rodney provides CLI browser automation.
- Sinocities: China’s Data Center Boom: a view from Zhangjiakou (Nov. 17, 2025)
China’s EDWC (Eastern Data Western Compute) policy aims to steer large-scale data-center buildout toward energy-rich western regions. Seems similar to private capital in the US building in far West Texas.

Sunday AI Links (Jan. 25)
- WSJ: Nvidia Invests $150 Million in AI Inference Startup Baseten (Jan 20, 2026)
Baseten raised $300 million at a $5 billion valuation in a round led by IVP and CapitalG, with Nvidia investing $150 million. The San Francisco startup provides AI inference infrastructure for customers like Notion and aims to become the “AWS for inference” amid rising investor interest.
- WSJ: Why Elon Musk Is Racing to Take SpaceX Public (Jan 21, 2026)
SpaceX abandoned its long-held resistance to an IPO after the rush to build solar-powered AI data centers in orbit made billions in capital necessary, prompting Elon Musk to seek public funding to finance and accelerate orbital AI satellites. The IPO could also boost Musk’s xAI and counter rivals.
- NY Times: Myths and Facts About Narcissists (Jan 22, 2026)
Narcissism is a personality trait on a spectrum, not always clinical N.P.D., and the label is often overused. The article debunks myths: people vary in narcissistic type, may show conditional empathy, often know their traits, can change, and can harm others despite occasional prosocial behavior.
- ScienceDaily: Stanford scientists found a way to regrow cartilage and stop arthritis (Jan 26, 2026)
Stanford researchers found that blocking the aging-linked enzyme 15‑PGDH with injections restored hyaline knee cartilage in older mice and prevented post‑injury osteoarthritis. Human cartilage samples responded similarly, and an oral 15‑PGDH inhibitor already in trials for muscle weakness raises hope for non‑surgical cartilage regeneration.
- Simon Willison: Wilson Lin on FastRender: a browser built by thousands of parallel agents (Jan 23, 2026)
Simply breathtaking: FastRender is a from‑scratch browser engine built by Wilson Lin using Cursor’s multi‑agent swarms (about 2,000 concurrent agents), producing thousands of commits and usable page renderings in weeks. Agents autonomously chose dependencies, tolerated transient errors, and used specs and visual feedback, showing how swarms let one engineer scale complex development.
- WSJ: Geothermal Wildcatter Zanskar, Which Uses AI to Find Heat, Raises $115 Million (Jan 21, 2026)
Geothermal startup Zanskar raised $115 million to use AI and field data to locate “blind” geothermal reservoirs (like Big Blind in Nevada) that show no surface signs, and has found a 250°F reservoir at about 2,700 feet.
- WSJ: The AI Revolution Is Coming for Novelists (Jan 21, 2026)
A novelist and his wife were claimants in the Anthropic settlement over AI training on copyrighted books and will receive $3,000 each, raising questions about just compensation for authors’ intellectual property. They urge fair licensing by tech firms as generative AI reshapes publishing and reduces writers’ incomes, yet will keep creating.
- WSJ Opinion: Successful AI Will Be Simply a Part of Life (Jan 19, 2026)
AI should be developed as dependable infrastructure—reliable, affordable, accessible, and trusted—so it works quietly across languages, cultures, and devices without special expertise. Success will be judged by daily use and consistent performance, with built-in privacy, openness, and agentic features that reduce friction without forcing users to cede control.
- WSJ: Anthropic CEO Says Government Should Help Ensure AI’s Economic Upside Is Shared (Jan 20, 2026)
Anthropic CEO Dario Amodei warned at Davos that AI could drive 5–10% GDP growth while causing significant unemployment and inequality, predicting a possible “decoupling” between a tech elite and the rest of society. He urged government action to share the gains and contrasted scientist-led AI firms with engagement-driven social-media companies.
- WSJ: The Messy Human Drama That Dealt a Blow to One of AI’s Hottest Startups (Jan 20, 2026)
Mira Murati fired CTO Barret Zoph amid concerns about his performance, trust, and an undisclosed workplace relationship; three co‑founders then told her they disagreed with the company’s direction. Within hours Zoph, Luke Metz, and Sam Schoenholz rejoined OpenAI, underscoring the AI race’s intense talent competition.
- WSJ: South Korea Issues Strict New AI Rules, Outpacing the West (Jan 23, 2026)
“Disclosures of using AI are required for areas related to human protection, such as producing drinking water or safe management of nuclear facilities. Companies must be able to explain their AI system’s decision-making logic, if asked, and enable humans to intervene.”
- WSJ: CEOs Say AI Is Making Work More Efficient. Employees Tell a Different Story. (Jan 21, 2026)
A WSJ survey of 5,000 white-collar employees at large companies found 40% of non-managers say AI saves them no time weekly, while 27% report under 2 hours and few report large gains. C-suite executives report much bigger savings, with many saving 8+ hours a week, a 38-percentage-point divergence from non-managers.
- WSJ: Intel Shares Slide as Costs Pile Up in Bid to Meet AI Demand (Jan 22, 2026)
Intel swung to a Q4 net loss of $333 million and warned of further Q1 losses as heavy spending to ramp new chips and industrywide supply shortages squeezed inventory. It delayed foundry customer announcements and lags AI-chip rivals, though investor funding and new 18A “Panther Lake” chips could help.
- WSJ: How a Bold Plan to Ban State AI Laws Fell Apart—and Divided Trumpworld
As I noted last week, Congressional efforts to block state AI laws via the Big Beautiful Bill lost support, and the provision was ultimately stripped from the Senate bill by a 99-1 vote.
- TechCrunch: Congress might block state AI laws for five years
Senators Ted Cruz and Marsha Blackburn included a measure limiting (most) state oversight of AI for the next five years as part of the “Big Beautiful Bill” currently in the works. Pushback from critics (and the Senate Parliamentarian) has forced revisions narrowing the measure’s scope and duration.
However, over the weekend, Cruz and Sen. Marsha Blackburn (R-TN), who has also criticized the bill, agreed to shorten the pause on state-based AI regulation to five years. The new language also attempts to exempt laws addressing child sexual abuse materials, children’s online safety, and an individual’s rights to their name, likeness, voice, and image. However, the amendment says the laws must not place an “undue or disproportionate burden” on AI systems — legal experts are unsure how this would impact state AI laws.
The measure is supported by some in the tech industry, including OpenAI CEO Sam Altman, while Anthropic’s leadership is opposed.
I’m sympathetic to the aims of this measure, as a patchwork of 50 state laws regulating AI would make it harder to innovate in this space. But I’m also aware of real-life harms (as a recent NY Times story profiled), so I’d be much more sanguine about preemption if we had federal-level regulation instead, a prospect that seems very unlikely given the current political makeup of Congress.