The AI industry is consolidating around core products, inference-focused computing, and large models and infrastructure (Mistral Small 4, Nvidia-backed data centers, developer tools). Meanwhile, algorithmic worker surveillance, contentious data-center projects, and waning billionaire philanthropy are provoking backlash from workers, communities, and the public.
- WSJ: ChatGPT Maker OpenAI to Cut Back on Side Projects in Push to ‘Nail’ Core Business (Mar. 16, 2026)
  OpenAI plans to refocus on coding and business users, cutting or deprioritizing side projects to boost productivity and unify research and product teams.
- Simon Willison: Introducing Mistral Small 4 (Mar. 16, 2026)
  Mistral released Mistral Small 4, an Apache-2.0-licensed 119B-parameter Mixture-of-Experts model (6B active parameters) that unifies reasoning, multimodal, and coding capabilities, supports reasoning_effort modes, and weighs in at 242 GB on Hugging Face.
- Mistral: Introducing Mistral Small 4 (Mar. 16, 2026)
  Mistral Small 4 is a 119B-parameter, multimodal Mixture-of-Experts model that unifies chat, coding, and deep reasoning, with a 256k context window and configurable reasoning effort.
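A minimal sketch of how a configurable reasoning-effort setting might be passed to an OpenAI-compatible chat-completions endpoint. The field name `reasoning_effort`, its mode names, and the model identifier are assumptions for illustration, not confirmed against Mistral's API reference:

```python
import json

def build_request(prompt: str, effort: str = "medium") -> str:
    """Build a chat-completions request body with a reasoning-effort knob."""
    # Assumed mode names; check the provider's docs for the real values.
    assert effort in {"low", "medium", "high"}
    payload = {
        "model": "mistral-small-4",        # assumed model identifier
        "reasoning_effort": effort,        # assumed parameter name
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

body = build_request("Summarize this diff.", effort="high")
```

The point is only that "reasoning effort" is a per-request dial rather than a separate model variant: the same weights serve quick chat and deeper reasoning depending on the setting.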
- Cory Doctorow: The future of Amazon coders is the present of Amazon warehouse workers (Mar. 20, 2021)
  The future of white-collar surveillance is already present in blue-collar workplaces: Amazon uses algorithmic management to surveil and discipline its delivery drivers and warehouse staff.
- Chrome for Developers: Let your Coding Agent debug your browser session with Chrome DevTools MCP (Dec. 11, 2025)
  Chrome DevTools MCP can now auto-connect coding agents to active Chrome sessions, letting agents reuse signed-in sessions and inspect selected DevTools panels.
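For context, MCP servers like this one are typically registered in a coding agent's client configuration. The snippet below follows the common `mcpServers` config shape and the `chrome-devtools-mcp` npm package name; treat the exact keys as assumptions and check your client's documentation:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```

Once registered, the agent can call the server's tools to drive and inspect the running Chrome session instead of launching a fresh, signed-out browser.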
- WSJ: What Is Inference? Explaining the Massive New Shift in AI Computing (Mar. 16, 2026)
  “You can think of AI as a restaurant. The model is the chef. After it undergoes a period of intensive training, learning hundreds (or billions) of recipes and techniques, it is ready to begin taking orders. Inference is the day-to-day operation of the restaurant. Diners place their orders (often in the form of a query to a chatbot) and the chef prepares their meals (the chatbot’s response).” Spending is shifting from training to inference as companies focus on real-time AI responses, driving demand for inference-specific chips that prioritize memory, bandwidth, and low latency.
- WSJ: Nvidia-Backed AI Startup to Spend Billions on Korea Data Center to Combat China (Mar. 16, 2026)
  Nvidia-backed Reflection AI is building a 250-megawatt data center with Shinsegae in South Korea to run open-source, Korean-customized models, backed by billions in funding and powered by Nvidia chips.
- NY Times: ‘Nobody Owns Us’: How Plans for a Google Data Center Divided an Oklahoma Town (Mar. 14, 2026)
  Plans for an 827-acre Google data center near Sand Springs, Oklahoma, rezoned from farmland, provoked outrage over secrecy, water, and power use. The Rock Volunteer Fire Department refused a $250,000 donation, and residents sued to block the project.
- NY Times: The Billionaire Backlash Against a Philanthropic Dream (Mar. 15, 2026)
  The Giving Pledge, once trendy among billionaires, has stalled as signers dwindle and its reputation frays. Critics call it performative, donors shift to politics or private profit, and the pledge lacks enforcement, tracking, or rapid giving.
