Tuesday Links (Oct. 28)
- WSJ: OpenAI Completes For-Profit Transition, Pushing Microsoft Above $4 Trillion Valuation (Oct 28, 2025)
OpenAI is now a public-benefit corporation, with Microsoft owning 27%, a move that could lead to an IPO. This new structure allows OpenAI to raise capital more easily and gives its nonprofit parent a stake worth $130 billion.
- OpenAI: The next chapter of the Microsoft–OpenAI partnership (Oct 28, 2025)
Microsoft now holds 27% of OpenAI, a cool $135B at today’s valuation.
- WSJ: Amazon Lays Off 14,000 Corporate Workers (Oct 27, 2025)
Amazon’s layoffs are the first tranche of 30,000 planned cuts to corporate positions. This feels like the beginning of a string of corporate cuts tied to AI expenditures and the productivity gains expected from new technology.
- Maginative: Thinking Machines Claims 30x Cost Cut for Training AI Models (Oct 28, 2025)
“Their latest research introduces on-policy distillation, a hybrid method that matches RL’s results with roughly 10% of the compute. In their benchmark, a math reasoning model hit 70% accuracy on AIME’24 using 1,800 GPU hours instead of 17,920.” (See the quick arithmetic check after this list.)
- OpenAI: Built to benefit everyone (Oct 28, 2025)
OpenAI has completed a recapitalization, solidifying the nonprofit OpenAI Foundation’s control over the for-profit business and granting it significant resources, currently valued at $130 billion, to advance its mission of ensuring AGI benefits humanity.
- WSJ: Amazon to Lay Off Tens of Thousands of Corporate Workers (Oct 27, 2025)
Upwards of 30,000 Amazonians (roughly 10% of the corporate workforce) will be laid off in the coming days to conserve cash and spend more on AI. This feels like the start of a cascade of AI-related layoffs across white-collar fields.
- The Wall Street Journal: More Big Companies Bet They Can Still Grow Without Hiring (Oct 26, 2025)
Large American companies (JPMorgan Chase, Walmart, etc.) are limiting or reducing hiring, aiming to increase sales and profits without expanding their workforce.
- The Wall Street Journal: Tesla Profit Plunges as Musk Turns Focus to ‘Robot Army’ (Oct 22, 2025)
Perhaps a 37% decrease is “plunging,” as the headline suggests. The longer read suggests the company is stabilizing after Musk’s foray into politics earlier this year. If anything, the large potential payout for Musk seems likely to channel his energies into constructive developments for the company.
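A quick sanity check on the compute figures in the Thinking Machines item above. This is a minimal sketch using only the two GPU-hour numbers from the quote; note that the roughly 10x reduction those numbers imply is a compute figure, which is not the same quantity as the headline’s 30x cost claim.

```python
# Back-of-the-envelope check of the GPU-hour figures quoted above.
rl_gpu_hours = 17_920        # reported GPU hours for the RL baseline
distill_gpu_hours = 1_800    # reported GPU hours for on-policy distillation

print(f"Distillation compute as a share of RL: {distill_gpu_hours / rl_gpu_hours:.1%}")  # ~10.0%
print(f"Reduction factor: {rl_gpu_hours / distill_gpu_hours:.1f}x")                      # ~10.0x
```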
-
Monday links (Oct. 27)
- WBAL: ‘Just holding a Doritos bag’: Student handcuffed after AI system mistook bag of chips for weapon (Oct 22, 2025)
Oops! A student was handcuffed by police after an AI-powered gun detection system at Kenwood High School mistakenly identified a Doritos bag he was holding as a weapon.
- NY Times Opinion: Teens Are Using Chatbots as Therapists. That’s Alarming. (Aug 25, 2025)
Minors shouldn’t be using AI for emotional and relationship support. Full stop. (Now, how to actually prevent this is another, far more difficult task.)
- WSJ: Microsoft Needs to Open Up More About Its OpenAI Dealings (Oct 27, 2025)
Microsoft’s disclosures regarding its stake in OpenAI are insufficient, especially considering OpenAI’s significant growth and impact on Microsoft’s valuation.
- Maginative: Adobe’s AI Foundry Lets You Train Custom Models on Corporate IP and Brand Guidelines (Oct 20, 2025)
Adobe is launching AI Foundry, a consulting service that allows Fortune 2000 companies to build custom generative AI models trained on their own proprietary data and brand assets.
- WSJ Opinion: Is AI Turning Our Brains to Mush? (Sep 2, 2025)
Some students worry that AI’s ease of access and quick answers will hinder critical thinking and problem-solving skills, while others believe AI can be a valuable tool for personalized learning and improved outcomes if used correctly as a tutor.
- NY Times: Amazon Plans to Replace More Than Half a Million Jobs With Robots (Oct 21, 2025)
Internal documents reveal Amazon’s plans to automate 75% of its operations, potentially replacing over half a million jobs with robots by 2033.
- NY Times: Is A.I. a Bubble? (Oct 27, 2025)
The stock market’s performance is currently heavily reliant on artificial intelligence companies, leading to concerns about a potential bubble, despite current earnings justifying high valuations.
- The Wall Street Journal: More Big Companies Bet They Can Still Grow Without Hiring (Oct 26, 2025)
Large American companies (JPMorgan Chase, Walmart, etc.) are limiting or reducing hiring, aiming to increase sales and profits without expanding their workforce.
- WSJ: The AI Startup Fueling ChatGPT’s Expertise Is Now Valued at $10 Billion (Oct 27, 2025)
Mercor, an AI training data startup that utilizes a network of 30,000 contractors to label data and improve AI models for companies like OpenAI and Anthropic, is finalizing a $350M funding round.
-
Sunday Links (Oct. 26)
- Simon Willison: Claude Code for web—a new asynchronous coding agent from Anthropic (Oct 20, 2025)
Anthropic has launched Claude Code for web, an asynchronous coding agent similar to OpenAI’s Codex Cloud and Google’s Jules, accessible via web and mobile. The key differentiator is sandboxing, an approach to reducing risk by limiting AI tools’ access to sensitive information.
- Maginative: Anthropic Launches Claude for Life Sciences with Benchling, PubMed Integration (Oct 20, 2025)
Anthropic launched Claude for Life Sciences, an AI assistant integrated with scientific platforms like Benchling and PubMed, designed to aid researchers in various tasks from discovery to commercialization.
- Maginative: Microsoft Launches Near-Identical Browser Days After OpenAI’s Atlas (Oct 23, 2025)
Microsoft expanded AI features for Copilot Mode in Edge, including voice-activated task automation and AI-generated browsing histories, closely resembling OpenAI’s recently released ChatGPT Atlas browser.
- The Wall Street Journal: I Tried an AI Web Browser, and Now I’m a Convert (Oct 23, 2025)
“I was quickly hooked on delegating tedious, low-stakes tasks like booking restaurant reservations and finding furniture with precise dimensions.” But dangers of data exfiltration remain. Buyer beware.
- WSJ: OpenAI Loosened Suicide-Talk Rules Before Teen’s Death, Lawsuit Alleges (Oct 22, 2025)
The suit claims ChatGPT weakened suicide protections in its model and suggests that the tool provided guidance that directly contributed to Adam Raine’s death. AI tools are powerful, and as Uncle Ben noted, with great power comes great responsibility, both for creators and users of these products.
- NY Times Opinion: The Next Economic Bubble Is Here (Oct 23, 2025)
But … we don’t know if said bubble pops today, tomorrow, or never. Economist Jason Furman discusses the high valuations of A.I. companies and the stock market and raises concerns about a bubble.
- NY Times: Meta Cuts 600 Jobs at A.I. Superintelligence Labs (Oct 22, 2025)
Company claims to be correcting earlier over-hiring.
- Simon Willison: OpenAI no longer has to preserve all of its ChatGPT data, with some exceptions (Oct 23, 2025)
OpenAI must still retain chat logs already saved under the previous order and data related to ChatGPT accounts flagged by the NYT.
- Maginative: Anthropic Secures 1M Google TPUs While Keeping Amazon as Primary Training Partner (Oct 23, 2025)
Anthropic is diversifying its compute infrastructure by committing to use up to one million Google TPUs in 2026. The company also projects revenue in FY26 to be $20-$26 billion.
- WSJ: Amazon Testing New Warehouse Robots and AI Tools for Workers (Oct 22, 2025)
Amazon is increasing automation in its fulfillment centers with new technologies to improve efficiency and reduce costs. The company is again flexing its fulfillment chops, although I wonder if these robotic innovations will extend to the manufacturing realm.
- NY Times: Google’s Quantum Computer Makes a Big Technical Leap (Oct 22, 2025)
Google’s quantum computer has a new algorithm, Quantum Echoes, which it ran 13,000 times faster than a traditional supercomputer could. Seems significant to me.
-
AI Catastrophe?
I love the quote from George Mallory about climbing Mt. Everest:
When asked by a reporter why he wanted to climb Everest, Mallory purportedly replied, “Because it’s there.”
We all know that it didn’t turn out so well for Mr. Mallory, and 100 years later, this meme:
[meme image]
Perhaps the same can be said for AI scientists: why do you build even more powerful AI systems? Because the challenge is there!
The race to build these systems is on. Companies left and right are dropping millions on talent in their attempt to build superintelligence labs. Meta, for example, has committed millions and millions to this effort. OpenAI (the leader), Anthropic (the safety-minded one), xAI (the rebel), Mistral (the Europeans), DeepSeek (the Chinese), Meta, and others are building frontier AI tools. Many are quite indistinguishable from magic.
Each of these companies purports to be the best and the most trustworthy organization to get to superintelligence for one reason or another. Elon Musk (xAI), for example, has been quite clear that he only trusts the technology if he controls it. He even attempted a long-shot bid to purchase OpenAI earlier this year. Anthropic is quite overtly committed to safety and ethics, believing they are the company best suited to develop “safe” AI tools.
(Anthropic founders Dario and Daniela Amodei and others left OpenAI in 2021 in response to concerns about AI safety. They focused on so-called responsible AI development as central to all research and product work. Of course, their AI ethics didn’t necessarily extend to traditional ethics like not stealing, but that’s a conversation for another day.)
I’m not here to pick on the Amodeis, Musk, Meta, or any of the AI players. It’s clear that they’ve created amazing technologies with considerable utility. But there are concerns at a far higher level than AI-induced psychosis on an individual level or pirating books.
Ezra Klein recently interviewed Eliezer Yudkowsky on his podcast. It’s another bonkers interview, one that positions AI not as just another technology but as something with a high probability of leading to human extinction.
The interview is informative and interesting, and if you have an hour, it’s worth listening to in its entirety. But I was particularly struck by the religious and metaphysical part of the conversation:
Klein:
But from another perspective, if you go back to these original drives, I’m actually, in a fairly intelligent way, trying to maintain some fidelity to them. I have a drive to reproduce, which creates a drive to be attractive to other people…
Yudkowsky:
You check in with your other humans. You don’t check in with the thing that actually built you, natural selection. It runs much, much slower than you. Its thought processes are alien to you. It doesn’t even really want things the way you think of wanting them. It, to you, is a very deep alien.
…
Let me speak for a moment on behalf of natural selection: Ezra, you have ended up very misaligned to my purpose, I, natural selection. You are supposed to want to propagate your genes above all else
…
I mean, I do believe in a creator. It’s called natural selection. There are textbooks about how it works.
I’m familiar with a different story in a different book. It’s about a creator and a creation that goes off the rails rather quickly. And it certainly strikes me that a less able creator (humanity) could create something that behaves in ways that diverge from the creator’s intent.
Of course, the ultimate creator I mention knew of coming treachery and had a plan. So for humanity, if AI goes wrong, do we have a plan? Yudkowsky certainly suggests that we don’t.
I’m still bullish on AI as a normal technology, but there are smart people in the industry telling me there are big, nasty, scary risks. And because I don’t see AI development slowing, I find these concerns more salient today than ever before.
-
Thursday Links (Oct. 22)
- Ben Thompson: An Interview with OpenAI CEO Sam Altman About DevDay and the AI Buildout (Oct 8, 2025)
Altman envisions OpenAI as a unifying AI service integrated across various platforms, emphasizing the importance of infrastructure, strategic partnerships, and user feedback in achieving this vision.
- Maginative: Google’s New AI Can Navigate Websites Like a Human (Oct 7, 2025)
Google unveiled Gemini 2.5 Computer Use, an AI model that can interact with graphical user interfaces like a human, enabling automation of tasks within web and mobile apps.
- WSJ: OpenAI Lets Users Buy Stuff Directly Through ChatGPT (Sep 29, 2025)
The company also unveiled the Agentic Commerce Protocol, an open-source standard aimed at enabling more merchants to integrate their products into ChatGPT for seamless in-chat shopping experiences.
- Simon Willison: Dane Stuckey (OpenAI CISO) on prompt injection risks for ChatGPT Atlas (Oct 22, 2025)
OpenAI’s CISO addressed concerns about prompt injection attacks in the new ChatGPT Atlas browser, acknowledging it as an unsolved security problem. My take: use AI browsers cautiously.
- WSJ: AI Wants to Tell You Which Beauty Products to Buy. Should You Let It? (Oct 17, 2025)
I’m no expert in beauty products, but it certainly seems in a company’s interest to sell you more products. Buyer beware.
- WSJ: AI Is Juicing the Economy. Is It Making American Workers More Productive? (Oct 13, 2025)
The answer (right now) is no. Productivity gains are expected but not yet seen.
- Alex Tabarrok: AI and the FDA (Sep 24, 2025)
AI is expected to significantly speed up the drug development process, potentially leading to a surge in new computationally validated drugs.
- NY Times: OpenAI Completes Deal That Values It at $500 Billion (Oct 2, 2025)
Now the world’s most valuable private company, exceeding SpaceX.
-
More on AI & Water
As I noted earlier this year, the water needs of European AI data centers were negligible relative to population and overall water usage.
Andy Masley comes to the same conclusion in his recent post, The AI water issue is fake. (Also, three cheers for accurate and descriptive article titles):
All U.S. data centers (which mostly support the internet, not AI) used 200–250 million gallons of freshwater daily in 2023. The U.S. consumes approximately 132 billion gallons of freshwater daily. The U.S. circulates a lot more water day to day, but to be extra conservative I’ll stick to this measure of its consumptive use, see here for a breakdown of how the U.S. uses water. So data centers in the U.S. consumed approximately 0.2% of the nation’s freshwater in 2023.
However, the water that was actually used onsite in data centers was only 50 million gallons per day, the rest was used to generate electricity offsite. Most electricity is generated by heating water to spin turbines, so when data centers use electricity, they also use water. Only 0.04% of America’s freshwater in 2023 was consumed inside data centers themselves. This is 3% of the water consumed by the American golf industry.
And later:
This means that every single day, the average American uses enough water for 800,000 chatbot prompts.
I suppose if we truly want to save water, we should take shorter showers.
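For the curious, here’s a back-of-the-envelope check of Masley’s figures. A minimal sketch: the gallons-per-day numbers come straight from the quoted passages, while the population figure and the choice of total consumptive use as the per-person baseline are my assumptions, so the implied per-prompt number at the end is an inference rather than a figure Masley states.

```python
# Sanity check of the quoted U.S. water figures (all in gallons per day).
US_FRESHWATER_CONSUMED = 132e9   # total U.S. consumptive freshwater use
DATA_CENTERS_TOTAL = 225e6       # midpoint of the quoted 200-250M (includes offsite electricity)
DATA_CENTERS_ONSITE = 50e6       # water used inside data centers themselves
US_POPULATION = 335e6            # rough 2023 U.S. population (my assumption)
PROMPTS_PER_PERSON_PER_DAY = 800_000  # the quoted "enough water for 800,000 chatbot prompts"

print(f"All data centers: {DATA_CENTERS_TOTAL / US_FRESHWATER_CONSUMED:.2%}")    # ~0.17%, i.e. ~0.2%
print(f"Onsite use only:  {DATA_CENTERS_ONSITE / US_FRESHWATER_CONSUMED:.3%}")   # ~0.038%, i.e. ~0.04%

# Implied water per prompt, assuming the per-person baseline is total
# consumptive use divided by population (an assumption on my part).
gallons_per_person = US_FRESHWATER_CONSUMED / US_POPULATION        # ~394 gallons/day
ml_per_prompt = gallons_per_person * 3785.4 / PROMPTS_PER_PERSON_PER_DAY
print(f"Implied water per prompt: ~{ml_per_prompt:.1f} mL")        # ~1.9 mL under these assumptions
```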
-
Tuesday Links (Oct. 21)
- OpenAI: ChatGPT Atlas (Oct 21, 2025)
OpenAI introduces their new agentic browser, Atlas. Be wary of AI browsers as exfiltration of personal data is a real concern.
- Ed Zitron: This Is How Much Anthropic and Cursor Spend On Amazon Web Services (Oct 20, 2025)
Ever the contrarian, Zitron points out that Anthropic’s outlays are in excess of revenue. Is this sustainable forever? No. Can they operate like this for the next 3-5 years? Without a doubt.
- Simon Willison: Unseeable prompt injections in screenshots: more vulnerabilities in Comet and other AI browsers (Oct 21, 2025)
“The ease with which attacks like this can be demonstrated helps explain why I remain deeply skeptical of the browser agents category as a whole.”
- WSJ: Gas Turbine Makers Are Riding the AI Power Boom (Oct 10, 2025)
Gas turbine manufacturers are experiencing a surge in demand due to increased power needs from data centers and AI growth, but they face the challenge of ramping production to meet demand without risking oversupply if the boom subsides, drawing parallels to past market bubbles.
- WSJ: A Giant New AI Data Center Is Coming to the Epicenter of America’s Fracking Boom (Oct 15, 2025)
CoreWeave and Poolside are partnering to build a massive, self-powered data center complex, called Horizon, on a sprawling ranch in West Texas, leveraging natural gas resources to reduce costs and improve long-term viability. Considering that 700 million cubic feet of natural gas is jettisoned each day in Texas, this seems like a smart play to be so close to spare hydrocarbons.
- WSJ: Oracle Co-CEOs Defend Massive Data-Center Expansion, Plan to Offer AI Ecosystem (Oct 14, 2025)
Concerns remain about Oracle’s reliance on OpenAI and the profitability of its AI infrastructure build-out. $300B is a big number, even for a company of Oracle’s size.
- WSJ: Google to Invest $24 Billion in AI in U.S., India (Oct 13, 2025)
Google plans to invest approximately $15 billion in India over the next five years, with another $9B to expand a data center in South Carolina.
-
Monday Links (Oct. 20)
- Anthropic: Claude Code on the web (Oct 20, 2025)
A new feature allowing users to delegate coding tasks directly from their browser to cloud-based Claude instances. This enables parallel execution of tasks like bug fixes and routine changes, with real-time progress tracking, secure sandboxing, and integration with GitHub for automatic PR creation.
- Ed Zitron: This Is How Much Anthropic and Cursor Spend On Amazon Web Services (Oct 20, 2025)
Ever the contrarian, Zitron points out that Anthropic’s outlays are in excess of revenue. Is this sustainable forever? No. Can they operate like this for the next 3-5 years? Without a doubt.
- Simon Willison’s Weblog: Claude Skills are awesome, maybe a bigger deal than MCP (Oct 16, 2025)
A new method for enhancing Claude’s abilities: users provide folders containing instructions, scripts, and resources that Claude can load when needed.
- NY Times: How Chile Embodies A.I.’s No-Win Politics (Oct 20, 2025)
Chile is grappling with trade-offs of investing in AI, facing a dilemma between fostering economic growth and risking environmental damage and public opposition due to the resource-intensive data centers required. I would add that the cost of AI data centers is also prohibitive for many countries today.
- The Independent: Oversharing with AI: How your ChatGPT conversations could be used against you (Oct 19, 2025)
Intimate chat history is vulnerable to exploitation by law enforcement, criminals, and tech companies for targeted advertising, raising privacy concerns with limited legal protections.
- WSJ: OpenAI’s Chip Strategy: Pair Nvidia’s Chocolate With Broadcom’s Peanut Butter (Oct 17, 2025)
OpenAI seeks to diversify chip procurement to meet the growing computing demands of its AI services.
-
Is AI Development Slowing?
Just a few months ago, it felt like the prevailing narrative was the incredible and unstoppable rise of AI. Reporters left and right were profiling the site AI2027, a techno-optimist forecast of AI’s trajectory over the next 2–3 years. But since then, I’ve noticed a rising number of more pessimistic stories — ones that talk about social and interpersonal risks, financial peril, and the idea that the development of AI technology is slowing. While the first two concerns are worth considering, today we’ll focus on the idea that AI development is slowing.
For those of you with kids, you’ll likely remember the days when they were babies, and each day seemed to bring some new incredible skill. Sitting up, crawling, talking, walking, and learning to open every cabinet door in the kitchen. It was hard to miss the almost daily changes. Family members would visit and note the changes, and as a parent, you would readily agree. The child in question inevitably had doubled in size in less than a year. But as they grew, development seemed to slow, making visiting family members the only ones to be amazed by a child’s growth. Their absence allowed them to see the remarkable change. “Wow,” they would say, “little Johnny has truly gotten big.”
I see the same with AI development today.
Models introduced last year and even earlier this year had a feeling of novelty, of magic. For many of us (yours truly included), it was an experience to see that AI tools had personality and possible utility for the first time. The examples: help me solve a problem, answer a question, clean up some writing, write a piece of code, etc. It was like watching an infant grow into someone who could talk.
Perhaps more akin to elementary-age children, the pace of change for AI tools doesn’t feel as fast to many folks. The WSJ (and others) are publishing articles like “AI’s Big Leaps Are Slowing—That Could Be a Good Thing” that frame the AI story as a slowdown. But those headlines usually track product launches, not capability evolution, and I don’t see much evidence that product launches are slowing (I can count scores just in the past few months). I read it more as people having come to believe AGI would mature more quickly than even the industry leaders claimed.
It’s like Bill Gates’ maxim: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”
The Nielsen Norman Group has tracked this shift in users. As conversational AI becomes the baseline, search behaviors evolve. Queries are less about “find me a link” and more about iterating with an AI assistant. In their follow-up on user perceptions, people described new agent features as “expected” rather than “wow” (NNG, 2025). The bar has moved. Our expectations have flattened because most people don’t see those agentic and long-horizon use gains. They see new AI features, feel underwhelmed, and assume the hype was overblown.
Earlier this year, METR published research showing that models are increasingly capable of long-horizon tasks, executing sequences of operations with minimal oversight. They have since updated their report with data inclusive of more recent models.
[chart: METR’s updated task time-horizon results]
That’s an exponential curve, not something you’d expect with stagnation. Meanwhile, on the macroeconomic stage, activity hasn’t slowed. AI investment is still surging, with economists crediting the technology for meaningful boosts to growth. There are mixed reports about adoption: Apollo Academy reports a cooling in AI adoption rates among large corporations, even as internal development ramps up. But AI coding tool installation continues to rise. Tracking the number of installs of the top 4 AI coding tools, you’ll find a nearly 20% increase in daily installations over the past 30 days.
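To make the “exponential” point concrete, here is a minimal sketch of how a doubling time can be estimated from time-horizon data of the kind METR publishes. The data points below are made-up placeholders for illustration, not METR’s actual measurements; plugging in their published numbers would give the real trend.

```python
# Fit an exponential trend (linear in log space) to illustrative time-horizon
# data and report the implied doubling time. Replace the placeholder points
# with real measurements to reproduce an actual curve.
import numpy as np

# (years since 2023.0, task time horizon in minutes) -- illustrative values only
points = [(0.0, 6), (0.5, 10), (1.0, 18), (1.5, 30), (2.0, 55)]
t = np.array([p[0] for p in points])
horizon = np.array([p[1] for p in points])

# Exponential growth means log2(horizon) is roughly linear in time.
slope, intercept = np.polyfit(t, np.log2(horizon), 1)   # slope = doublings per year
print(f"Implied doubling time: ~{12 / slope:.1f} months")  # ~7.5 months for these placeholders
```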
Back to AI 2027: the predictions about agentic AI in late 2027 seem to be more or less on pace, give or take a month or so. The risk in all of this is mistaking familiarity for maturity. The awe has worn off, so it’s easy to assume the growth has too. But if you look at what METR’s testing shows, how users are adopting AI without fanfare, how developers are folding AI tools into their work, and how capital is still flowing, the picture is clear. Progress remains swift.
⸻
AI development isn’t truly slowing—it’s maturing. As the initial novelty wears off, real advances continue beneath the surface, driven by capability gains, steady investment, and evolving user behavior.
-
Friday Links (Oct. 10)
- Maginative: Figma taps Google’s Gemini for Faster, Enterprise-Ready AI Inside its Design Platform (Oct 9, 2025)
Integrations will enhance image generation and editing within Figma and help with enterprise governance, allowing admins to control AI feature access and data usage for model training.
- WSJ: Exclusive | Microsoft Tries to Catch Up in AI With Healthcare Push, Harvard Deal (Oct 8, 2025)
Microsoft aims to become a leading AI chatbot provider, reducing its reliance on OpenAI by focusing on healthcare applications for its Copilot assistant. This update, developed in collaboration with Harvard Medical School, will offer more credible health information, and Microsoft is developing tools to help users find healthcare providers.
- Google: Introducing the Gemini 2.5 Computer Use model (Oct 7, 2025)
The new model empowers agents to interact directly with user interfaces for tasks like filling forms and navigating web pages. The possibilities are immense, but software testing seems like a particularly good candidate for tools like this.
- NY Times: What the Arrival of A.I. Video Generators Like Sora Means for Us (Oct 9, 2025)
Sora has become so realistic that it undermines the reliability of video as proof of events. It’s simply difficult to distinguish between real and fake videos.
- WSJ Opinion: AI and the Fountain of Youth (Oct 8, 2025)
AI is accelerating drug development, analyzing medical data, and improving diagnostics, potentially leading to longer, healthier lives. “Thanks to AI, the process of identifying and developing new drugs, once a decade long slog, is being compressed into months.”
- WSJ Opinion: I’ve Seen How AI ‘Thinks.’ I Wish Everyone Could. (Oct 9, 2025)
Understanding how AI models function, including their training data and mathematical structure, is crucial, especially as AI increasingly impacts human endeavors like writing and art.
- WSJ: AI Investors Are Chasing a Big Prize. Here’s What Can Go Wrong. (Oct 5, 2025)
Investing in AI is risky due to the high costs, uncertain timelines, and potential for competition. I’d argue that these risks are present in almost any investment decision.