Category: AI

  • Tuesday Links (Oct. 28)

  • Monday Links (Oct. 27)

  • Sunday Links (Oct. 26)

  • AI Catastrophe?

    I love the quote from George Mallory about climbing Mt. Everest:

    When asked by a reporter why he wanted to climb Everest, Mallory purportedly replied, “Because it’s there.”

    We all know that it didn’t turn out so well for Mr. Mallory, and 100 years later, his answer lives on as a meme.

    Perhaps the same can be said of AI scientists: why build even more powerful AI systems? Because the challenge is there!

    The race to build these systems is on. Companies left and right are pouring money into talent in their attempts to build superintelligence labs; Meta, for example, has committed enormous sums to the effort. OpenAI (the leader), Anthropic (the safety-minded one), xAI (the rebel), Mistral (the Europeans), DeepSeek (the Chinese), Meta, and others are building frontier AI tools, many of which are nearly indistinguishable from magic.

    Each of these companies purports, for one reason or another, to be the best and most trustworthy organization to reach superintelligence. Elon Musk (xAI), for example, has been quite clear that he only trusts the technology if he controls it; he even attempted a long-shot bid to purchase OpenAI earlier this year. Anthropic is overtly committed to safety and ethics, believing it is the company best suited to develop “safe” AI tools.

    (Anthropic founders Dario and Daniela Amodei, among others, left OpenAI in 2021 over concerns about AI safety, and they have made so-called responsible AI development central to all of their research and product work. Of course, their AI ethics didn’t necessarily extend to traditional ethics like not stealing, but that’s a conversation for another day.)

    I’m not here to pick on the Amodeis, Musk, Meta, or any of the other AI players. They have clearly created amazing technologies with considerable utility. But some concerns operate at a far larger scale than individual AI-induced psychosis or pirated books.

    Ezra Klein recently interviewed Eliezer Yudkowsky on his podcast: another bonkers conversation, one that positions AI not as just another technology but as something with a high probability of leading to human extinction.

    The interview is informative and interesting, and if you have an hour, it’s worth listening to in its entirety. But I was particularly struck by the religious and metaphysical part of the conversation:

    Klein:

    But from another perspective, if you go back to these original drives, I’m actually, in a fairly intelligent way, trying to maintain some fidelity to them. I have a drive to reproduce, which creates a drive to be attractive to other people…

    Yudkowsky:

    You check in with your other humans. You don’t check in with the thing that actually built you, natural selection. It runs much, much slower than you. Its thought processes are alien to you. It doesn’t even really want things the way you think of wanting them. It, to you, is a very deep alien.

    Let me speak for a moment on behalf of natural selection: Ezra, you have ended up very misaligned to my purpose, I, natural selection. You are supposed to want to propagate your genes above all else.

    I mean, I do believe in a creator. It’s called natural selection. There are textbooks about how it works.

    I’m familiar with a different story in a different book. It’s about a creator and a creation that goes off the rails rather quickly. And it certainly strikes me that a less able creator (humans) could create something that behaves in ways that diverge from the creator’s intent.

    Of course, the ultimate creator I mention knew of coming treachery and had a plan. So for humanity, if AI goes wrong, do we have a plan? Yudkowsky certainly suggests that we don’t.

    I’m still bullish on AI as a normal technology, but there are smart people in the industry telling me there are big, nasty, scary risks. And because I don’t see AI development slowing, I find these concerns more salient today than ever before.

  • Thursday Links (Oct. 22)

  • More on AI & Water

    As I noted earlier this year, the water needs of European AI data centers were negligible when set against population and overall water usage.

    Andy Masley comes to the same conclusion in his recent post, The AI water issue is fake. (Also, three cheers for accurate and descriptive article titles):

    All U.S. data centers (which mostly support the internet, not AI) used 200–250 million gallons of freshwater daily in 2023. The U.S. consumes approximately 132 billion gallons of freshwater daily. The U.S. circulates a lot more water day to day, but to be extra conservative I’ll stick to this measure of its consumptive use, see here for a breakdown of how the U.S. uses water. So data centers in the U.S. consumed approximately 0.2% of the nation’s freshwater in 2023.

    However, the water that was actually used onsite in data centers was only 50 million gallons per day, the rest was used to generate electricity offsite. Most electricity is generated by heating water to spin turbines, so when data centers use electricity, they also use water. Only 0.04% of America’s freshwater in 2023 was consumed inside data centers themselves. This is 3% of the water consumed by the American golf industry.

    And later:

    This means that every single day, the average American uses enough water for 800,000 chatbot prompts. 

    I suppose if we truly want to save water, we should take shorter showers.
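    Out of curiosity, I reran the arithmetic. Here is a minimal sketch that reproduces the quoted percentages; the data-center and freshwater figures come from Masley’s quote above, while the population number (roughly 330 million) and the gallon-to-milliliter conversion are my own assumptions for the per-person and per-prompt math.

    ```python
    # Back-of-the-envelope check of the figures quoted above. All numbers are
    # taken from the Masley quote except the population, which is my own
    # rough 2023 estimate.

    US_FRESHWATER_GPD = 132e9      # gallons of freshwater consumed per day, U.S.
    ALL_DATACENTERS_GPD = 225e6    # midpoint of the 200-250 million gallons/day figure
    ONSITE_DATACENTERS_GPD = 50e6  # water used inside the data centers themselves
    US_POPULATION = 330e6          # rough U.S. population, 2023 (my assumption)

    print(f"All data centers: {ALL_DATACENTERS_GPD / US_FRESHWATER_GPD:.2%} of U.S. freshwater use")
    print(f"Onsite use only:  {ONSITE_DATACENTERS_GPD / US_FRESHWATER_GPD:.3%} of U.S. freshwater use")

    # Per-person consumption, and the per-prompt water cost implied by the
    # "800,000 chatbot prompts per day" claim.
    per_person_gallons = US_FRESHWATER_GPD / US_POPULATION
    per_prompt_ml = per_person_gallons / 800_000 * 3785.4  # gallons -> milliliters
    print(f"Per person: ~{per_person_gallons:.0f} gallons/day")
    print(f"Implied water per prompt: ~{per_prompt_ml:.1f} mL")
    ```

    The output lands right where the quote does: roughly 0.2% of U.S. freshwater consumption for all data centers, about 0.04% for water used onsite, and a couple of milliliters of water per prompt.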

    HT: Simon Willison: The AI water issue is fake

  • Tuesday Links (Oct. 21)

  • Monday Links (Oct. 20)

  • Is AI Development Slowing?

    Just a few months ago, it felt like the prevailing narrative was the incredible, unstoppable rise of AI. Reporters left and right were profiling the site AI 2027, a techno-optimist forecast of AI’s trajectory over the next two to three years. Since then, I’ve noticed a rising number of more pessimistic stories: ones about social and interpersonal risks, financial peril, and the idea that the development of AI technology is slowing. The first two concerns are worth considering, but today we’ll focus on the claim that AI development is slowing.

    For those of you with kids, you’ll likely remember the days when they were babies and each day seemed to bring some incredible new skill: sitting up, crawling, talking, walking, learning to open every cabinet door in the kitchen. It was hard to miss the almost daily changes. Family members would visit and note them, and as a parent, you would readily agree; the child in question had inevitably doubled in size in less than a year. But as the kids grew, development seemed to slow, and visiting family members became the only ones still amazed by the growth. Their time away let them see the remarkable change. “Wow,” they would say, “little Johnny has truly gotten big.”

    I see the same with AI development today.

    Models introduced last year, and even earlier this year, had a feeling of novelty, of magic. For many of us (yours truly included), it was the first time AI tools showed personality and genuine utility: help me solve a problem, answer a question, clean up some writing, write a piece of code. It was like watching an infant grow into someone who could talk.

    Perhaps more akin to elementary-age children, the pace of change in AI tools no longer feels fast to many people. The WSJ (and others) are publishing articles like “AI’s Big Leaps Are Slowing—That Could Be a Good Thing” that frame the AI story as a slowdown. But those headlines usually track product launches, not capability, and I don’t see much evidence that launches are slowing either (I can count scores of them just in the past few months). What I see instead is that people came to believe AGI would mature more quickly than even the industry leaders claimed.

    It’s like Bill Gates’ maxim: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

    The Nielsen Norman Group has tracked this shift in users. As conversational AI becomes the baseline, search behaviors evolve: queries are less about “find me a link” and more about iterating with an AI assistant. In their follow-up on user perceptions, people described new agent features as “expected” rather than “wow” (NNG, 2025). The bar has moved. Our expectations have flattened because most people don’t see the agentic and long-horizon gains; they see new AI features, feel underwhelmed, and assume the hype was overblown.

    Earlier this year, METR published research showing that models are increasingly capable of long-horizon tasks, executing sequences of operations with minimal oversight. They have since updated their report to include more recent models.

    The trend they report is an exponential curve, not something you’d expect from stagnation.
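    To make that concrete, here is a small illustrative sketch of what steady doubling in task horizon implies over a couple of years. The roughly seven-month doubling time is my reading of METR’s published trend, and the one-hour starting point is a placeholder, not a METR figure.

    ```python
    # Illustrative only: what an exponential capability trend implies.
    # The ~7-month doubling time is my reading of METR's reported trend;
    # the one-hour starting horizon is a placeholder, not a METR number.

    DOUBLING_MONTHS = 7
    START_HORIZON_HOURS = 1.0  # placeholder: task length completed at ~50% success

    def horizon_after(months: float) -> float:
        """Task horizon in hours after `months`, assuming steady doubling."""
        return START_HORIZON_HOURS * 2 ** (months / DOUBLING_MONTHS)

    for months in (0, 7, 14, 21, 28):
        print(f"+{months:>2} months: ~{horizon_after(months):.0f} hour(s)")
    ```

    On that curve, the length of tasks a model can complete end to end grows more than tenfold in under two and a half years, even while each individual release feels incremental.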

    Meanwhile, on the macroeconomic stage, activity hasn’t slowed. AI investment is still surging, with economists crediting the technology for meaningful boosts to growth. Reports on adoption are mixed: Apollo Academy reports a cooling in AI adoption rates among large corporations, even as internal development ramps up. Yet installations of AI coding tools continue to rise; tracking installs of the top four AI coding tools shows a nearly 20% increase in daily installations over the past 30 days.

    Back to AI 2027: the predictions about agentic AI in late 2027 seem more or less on pace, give or take a month. The risk in all of this is mistaking familiarity for maturity. The awe has worn off, so it’s easy to assume the growth has too. But look at what METR’s testing shows, how users are folding AI into their routines without fanfare, how developers are integrating AI tools into their work, and how capital is still flowing: the picture is clear. Progress remains swift.

    AI development isn’t truly slowing—it’s maturing. As the initial novelty wears off, real advances continue beneath the surface, driven by capability gains, steady investment, and evolving user behavior.

  • Friday Links (Oct. 10)