Theme 1: Rapid AI product and enterprise competition: bigger-context models and AI coding tools are reshaping markets and provoking legal, contractual, and vendor fights.
Theme 2: Real-world harms and governance gaps: outages, misinformation, misidentifications, and infrastructure costs all point to the need for transparency, safety, accountability, and clearer regulation.
- Claude: 1M context is now generally available for Opus 4.6 and Sonnet 4.6 (Mar. 13, 2026)
  Claude Opus 4.6 and Sonnet 4.6 now offer a full 1M-token context window at standard pricing, with no long-context premium. They support up to 600 images or pages, are available in Claude Code for Max, Team, and Enterprise plans, and preserve long conversations.
- Heinan Cabouly: Amazon Forced Engineers to Use AI Coding Tools. Then It Lost 6.3 Million Orders. (Mar. 12, 2026)
  Amazon required 80% of its engineers to use its AI tool Kiro, tracking adoption as an OKR. Within three months, AI-related incidents deleted production data, caused two outages that lost 6.3 million orders, and forced a 90-day safety reset.
- Mike Ramos: Please Do Not A/B Test My Workflow (Mar. 12, 2026)
  Anthropic’s A/B tests on Claude Code altered plan-mode behavior, degrading paid users’ workflows without an opt-out or clear disclosure.
- WIRED: Inside OpenAI’s Race to Catch Up to Claude Code (Mar. 11, 2026)
  Despite early work on Codex and backing from Microsoft, OpenAI fell behind as Anthropic’s Claude Code captured market share by focusing on real-world code and enterprise traction.
- NY Times: Cascade of A.I. Fakes About War With Iran Causes Chaos Online (Mar. 14, 2026)
  AI-generated images and videos of the Iran war, more than 110 staged posts, have flooded social media with fake explosions, ruined streets, and nonexistent troops.
- the Guardian: Tennessee grandmother jailed after AI facial recognition error links her to fraud (Mar. 12, 2026)
  A Tennessee grandmother was misidentified by facial-recognition AI in a North Dakota fraud case, arrested, and jailed for nearly six months before bank records proved her alibi.
- WSJ: Amazon’s Win Against Perplexity Kicks AI Shopping Wars Into High Gear (Mar. 11, 2026)
  A judge barred Perplexity’s AI from using Amazon’s password-protected pages to shop for users. Retailers worry that bots bypass ads, cut ad revenue, and siphon customer data.
- Marginal REVOLUTION: The moralization of artificial intelligence (Mar. 13, 2026)
  “Analyzing 69,890 news headlines from 2018 to 2024, we found that AI was moralized at levels comparable to GMOs and vaccines, technologies whose moral opposition has been studied for decades. It ranked above both. The sharpest spike came within weeks of ChatGPT’s launch in late 2022.”
- WSJ: The Pentagon Dealmaker Who Has Become Anthropic’s Nemesis (Mar. 12, 2026)
  Emil Michael, the Pentagon’s point person in the Anthropic dispute, led hard-line negotiations that collapsed, leaving Anthropic suing, other customers wary, and the Defense Department scrambling to replace an AI it is still using in combat.
- NY Times: Microsoft Takes a Stand Against the Trump Administration in Anthropic Fight (Mar. 11, 2026)
  Microsoft backed Anthropic’s suit against the Pentagon over a supply-chain risk label after talks on surveillance and autonomous weapons failed.
- WSJ: The Electric Grid Needs Huge Upgrades. No One Knows Who Will Pay for Them. (Mar. 12, 2026)
  Utilities plan tens of billions of dollars in transmission expansion for AI data centers, sparking fights over who pays. Tech firms, ratepayers, and regions contest the cost allocation, and regulators warn that consumers may still face higher bills.
- NY Times Opinion: Social Media Isn’t Just Speech. It’s Also a Defective, Hazardous Product. (Mar. 14, 2026)
  Social media behaves like a defective, hazardous product, using algorithms, infinite scroll, and unpredictable rewards to drive compulsive use. Rising teen depression, self-harm, and suicide have spurred lawsuits seeking public-health regulation and legal accountability.
- The Register: Nanny state vs. Linux: show us your ID, kid (Mar. 13, 2026)
  New laws push operating-system vendors to collect users’ ages, threatening open-source distributions, privacy, and young people’s access to Linux. Some projects refuse, others propose local age flags, and critics call the rules ineffective, invasive, and harmful.