Author: Andrew

  • More on AI & Water

As I noted earlier this year, water needs for European AI data centers were negligible relative to population and overall water usage.

    Andy Masley comes to the same conclusion in his recent post, The AI water issue is fake. (Also, three cheers for accurate and descriptive article titles):

    All U.S. data centers (which mostly support the internet, not AI) used 200–250 million gallons of freshwater daily in 2023. The U.S. consumes approximately 132 billion gallons of freshwater daily. The U.S. circulates a lot more water day to day, but to be extra conservative I’ll stick to this measure of its consumptive use, see here for a breakdown of how the U.S. uses water. So data centers in the U.S. consumed approximately 0.2% of the nation’s freshwater in 2023.

    However, the water that was actually used onsite in data centers was only 50 million gallons per day, the rest was used to generate electricity offsite. Most electricity is generated by heating water to spin turbines, so when data centers use electricity, they also use water. Only 0.04% of America’s freshwater in 2023 was consumed inside data centers themselves. This is 3% of the water consumed by the American golf industry.

    And later:

    This means that every single day, the average American uses enough water for 800,000 chatbot prompts. 
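The arithmetic behind these figures is easy to check. A quick sketch using the numbers from Masley's post, plus a rough U.S. population figure that is my assumption, not his:

```python
# Figures from Masley's post (gallons of freshwater per day)
data_center_total = 225e6    # midpoint of the 200-250 million range
data_center_onsite = 50e6    # water consumed inside data centers themselves
us_freshwater = 132e9        # total U.S. freshwater consumption

share_total = data_center_total / us_freshwater    # ~0.17%, i.e. roughly 0.2%
share_onsite = data_center_onsite / us_freshwater  # ~0.04%

# Per-capita check for the "800,000 prompts" line.
# Assumption: ~334 million U.S. residents (my figure, not from the post).
per_capita = us_freshwater / 334e6     # ~395 gallons per person per day
per_prompt = per_capita / 800_000      # ~0.0005 gallons, about 2 mL

print(f"All data centers: {share_total:.2%}, onsite only: {share_onsite:.3%}")
print(f"Per capita: {per_capita:.0f} gal/day, implied per prompt: {per_prompt * 3785:.1f} mL")
```

The implied figure of roughly 2 mL per prompt is consistent with commonly cited per-query estimates.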

    I suppose if we truly want to save water, we should take shorter showers.

    HT: Simon Willison: The AI water issue is fake

  • Tuesday Links (Oct. 21)

  • Monday Links (Oct. 20)

  • Is AI Development Slowing?

Just a few months ago, it felt like the prevailing narrative was the incredible and unstoppable rise of AI. Reporters left and right were profiling the site AI 2027, a techno-optimist forecast of AI’s trajectory over the next 2–3 years. But since then, I’ve noticed a rising number of more pessimistic stories — ones that talk about social and interpersonal risks, financial peril, and the idea that the development of AI technology is slowing. While the first two concerns are worth considering, today we’ll focus on the idea that AI development is slowing.

For those of you with kids, you’ll likely remember the days when they were babies, and each day seemed to bring some incredible new skill. Sitting up, crawling, talking, walking, and learning to open every cabinet door in the kitchen. It was hard to miss the almost daily changes. Family members would visit and note them, and as a parent, you would readily agree. The child in question inevitably had doubled in size in less than a year. But as they grew, development seemed to slow, and only visiting family members remained amazed; their time away let them see the change all at once. “Wow,” they would say, “little Johnny has truly gotten big.”

    I see the same with AI development today.

Models introduced last year and even earlier this year had a feeling of novelty, of magic. For many of us (yours truly included), it was the first time AI tools showed personality and real utility: help me solve a problem, answer a question, clean up some writing, write a piece of code. It was like watching an infant grow into someone who could talk.

Perhaps more akin to elementary-age children, the pace of change for AI tools doesn’t feel as fast to many folks. The WSJ (and others) are publishing articles like “AI’s Big Leaps Are Slowing—That Could Be a Good Thing” that frame the AI story as a slowdown. But those headlines usually track product launches, not capability evolution, and I don’t see much evidence that product launches are slowing either (I can count scores just in the past few months). It seems more that people came to believe AGI would mature more quickly than even the industry leaders claimed.

    It’s like Bill Gates’ maxim: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

    The Nielsen Norman Group has tracked this shift in users. As conversational AI becomes the baseline, search behaviors evolve. Queries are less about “find me a link” and more about iterating with an AI assistant. In their follow-up on user perceptions, people described new agent features as “expected” rather than “wow” (NNG, 2025). The bar has moved. Our expectations have flattened because most people don’t see those agentic and long-horizon use gains. They see new AI features, feel underwhelmed, and assume the hype was overblown.

    Earlier this year, METR published research showing that models are increasingly capable of long-horizon tasks, executing sequences of operations with minimal oversight. They have since updated their report with data inclusive of more recent models.

    That’s an exponential curve, not something you’d expect with stagnation.

Meanwhile, on the macroeconomic stage, activity hasn’t slowed. AI investment is still surging, with economists crediting the technology for meaningful boosts to growth. There are mixed reports about adoption: Apollo Academy reports a cooling in AI adoption rates among large corporations—even as internal development ramps up. But AI coding tool installation continues to rise. Tracking the daily installs of the top four AI coding tools, you’ll find a nearly 20% increase over the past 30 days.

Back to AI 2027, the predictions about agentic AI in late 2027 seem to be more or less on pace, with perhaps a month or so of deviation. The risk in all of this is mistaking familiarity for stagnation. The awe has worn off, so it’s easy to assume the growth has too. But if you look at what METR’s testing shows, how users are integrating AI without fanfare, how developers are folding AI tools into their work, and how capital is still flowing—the picture is clear. Progress remains swift.

    AI development isn’t truly slowing—it’s maturing. As the initial novelty wears off, real advances continue beneath the surface, driven by capability gains, steady investment, and evolving user behavior.

  • Work Life Balance

Yesterday, I saw this NYTimes story about the sudden and accidental death of LendingTree’s CEO.

He was the founder and longtime CEO of the company. He was a multimillionaire and a vital part of the company’s leadership team. Yet the lede of the story showed just how replaceable he was:

    LendingTree named its chief operating officer, Scott Peyree, as its new chief.

    So yes, folks, even the most important of corporate officers can be replaced mere hours after an accident. It reminds me that the most important things in life happen not at work (where you can be replaced at will) but at home and with your family. I doubt that his wife has already named his replacement.

  • Friday Links (Oct. 10)

  • Thursday Links (Oct. 9)

    • Simon Willison: gpt-image-1-mini (Oct 6, 2025)
OpenAI quietly released gpt-image-1-mini, a smaller and cheaper image generation model. The results are impressive and inexpensive: mere pennies net good results.
    • Simon Willison: GPT-5 Pro (Oct 6, 2025)
      With a September 30, 2024 knowledge cutoff and a 400,000-token context window.
    • Sam Altman: Sora update #1 (Oct 3, 2025)
      OpenAI is planning two major changes to Sora based on user feedback: providing rightsholders with more granular control over the use of their characters in generated videos, and implementing a revenue-sharing model with rightsholders for video generation.
    • Morningstar, Inc.: The AI bubble is 17 times the size of the dot-com frenzy – and four times subprime, this analyst argues (Oct 3, 2025)
MacroStrategy Partnership argues the AI bubble is significantly larger than both the dot-com and 2008 real estate bubbles. This claim, of course, can only be proved after the “bubble” pops, so like most economic forecasting, only time will tell. It does seem quite clear that investment in AI infrastructure is high.
    • Stratechery: OpenAI’s Windows Play (Oct 7, 2025)
      OpenAI has made a flurry of announcements, including partnerships with Oracle, Nvidia, Samsung, SK Hynix, and AMD, as well as new product offerings like Instant Checkout and Sora 2. These moves suggest that OpenAI is positioning itself to be the “Windows of AI” by creating a platform where applications reside within ChatGPT, similar to how Windows became the dominant PC operating system.
    • Alex Tabarrok: The ai Boom (Oct 5, 2025)
      Anguilla’s internet domain, .ai, is experiencing a massive surge in registrations due to the booming interest in artificial intelligence. This increase in .ai domain registrations has become a major source of income for the small island nation, now contributing nearly half of its state revenues.
    • The Register: McKinsey wonders how to sell AI apps with no measurable benefits (Oct 9, 2025)
      Vendors should demonstrate clear value to line-of-business decision-makers who are increasingly weighing AI investments against staffing costs. Hand-waving and chanting “AI” is not a good strategy to explain rising costs. 
    • WSJ: Elon Musk Gambles Billions in Memphis to Catch Up on AI (Oct 5, 2025)
      xAI is investing heavily in the Memphis area, building massive data centers powered by a new power plant to support its chatbot Grok. 
  • Tuesday Links (Oct. 7)

  • Friday Links (Oct. 3)

  • AI Pricing Trends

When Disney launched Disney+ in late 2019, it came to market with a strikingly low price: $6.99 per month. The strategy was obvious—use a bargain price to quickly build a subscriber base and compete with Netflix.

    It worked. Families eagerly added Disney+ to their lineup of streaming services, drawn by its deep library of shows and movies. But as the platform grew, investors started pushing Disney to make the service profitable. Over the next few years, Disney steadily raised prices. Today, the ad-free tier costs nearly three times what it did at launch.

    This story isn’t just about streaming. It’s a preview of what’s coming with AI services.

    The $20 Benchmark for AI

    When OpenAI launched ChatGPT Plus at $20 a month, it wasn’t the product of intricate economic modeling. Instead, it reflected an attempt to recoup some of the enormous costs behind the scenes—training models, running massive server farms, and paying world-class researchers.

    That $20 price point quickly became the de facto benchmark for consumer-facing AI tools, with Anthropic and others adopting similar rates.

    But OpenAI has been clear that its ambitions go far beyond the current offering. Its Stargate initiative involves building massive infrastructure, partnering globally, and spending billions on data centers. At some point, investor money won’t be enough—they’ll need sustainable revenue.

    And just like Disney, the path is clear: grow the subscriber base, then gradually raise prices once users view the product as indispensable.

    The Coming Price Climb

    Right now, $20 a month feels reasonable. But look ahead 5–10 years. As AI capabilities expand, it’s easy to imagine prices climbing to $50, $60, or even $100 per month.

    For individual consumers, that may be tough to swallow. A household with multiple subscriptions could find itself spending several hundred dollars a month on AI tools.

    For businesses, however, the math looks different. If AI can make an employee just 10% more productive, the return on investment is obvious. An employee earning $60,000 annually who produces the equivalent of $66,000 in value thanks to AI easily justifies a $100 subscription. For programmers or knowledge workers achieving productivity gains of 50% or more, companies might begrudgingly pay hundreds—even thousands—per employee, per month.
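That return-on-investment claim is simple arithmetic; a sketch using the article's illustrative numbers (none of these are real pricing or salary data):

```python
# ROI sketch for a business paying $100/month per seat.
salary = 60_000
productivity_gain = 0.10          # employee becomes 10% more productive
annual_subscription = 100 * 12    # $1,200 per year

added_value = salary * productivity_gain      # $6,000 of extra output per year
roi = added_value / annual_subscription       # 5x return on the subscription
print(f"Added value: ${added_value:,.0f}/yr, ROI: {roi:.0f}x")
```

Even at much higher seat prices, a 50% gain for a knowledge worker keeps the ratio comfortably above one, which is why vendors have room to push.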

The economics are compelling, and the pressure to raise prices will only grow.

    AI-Adjacent Tools Will Follow

    This dynamic won’t be limited to large language models. AI-adjacent tools—platforms like Jira or Siteimprove—are racing to integrate AI features into their products. The added capabilities will deepen customer reliance. But once the early adoption phase passes, I expect these vendors to raise prices as well.

    It’s the same playbook: demonstrate new value, increase lock-in, then adjust pricing upward.

    The Staffing Equation

    All of this has implications beyond budgets. If AI makes employees 50% more efficient, organizations will rethink staffing structures. Efficiency gains don’t automatically translate into cost savings unless roles are consolidated or organizations grow.

    Take three departments, each with a similar role. If AI tools boost each person’s productivity by 50%, the organization suddenly has capacity for 4.5 units of work when only three are needed. The logical response is to reduce headcount—perhaps to two positions covering all three departments. But that requires some degree of centralization to realize these gains. Better options include company growth or redeploying personnel in areas of need or opportunity. There will be disruption of the workforce, but it doesn’t have to lead to layoffs.
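The consolidation math from this example, sketched out (the numbers are the article's hypothetical, not data from any organization):

```python
import math

departments = 3     # one similar role in each department
gain = 0.50         # AI boosts each person's productivity by 50%
workload = 3.0      # units of work needed, unchanged by AI

capacity = departments * (1 + gain)               # 4.5 units of capacity
min_headcount = math.ceil(workload / (1 + gain))  # 2 people can now cover it
print(f"Capacity: {capacity} units, minimum headcount: {min_headcount}")
```

The gap between 4.5 units of capacity and 3 units of need is exactly the slack that growth or redeployment can absorb instead of layoffs.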

    Planning for the Future

    The lesson from Disney+ is clear: early low prices are temporary. AI services are following the same trajectory, and organizations should prepare now.

    • Expect rising subscription costs—both for core AI platforms and for AI-enhanced tools.
    • Budget for increases of 50% or more annually over the next few years.
    • Plan staffing structures to capture the efficiency gains AI makes possible. I’m sure companies will prefer growth and redeployment, but that’s not always assured.
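As a rough illustration of what that budgeting advice compounds to, starting from today's $20 benchmark (purely hypothetical; no vendor has announced these prices):

```python
# Compound a $20/month subscription at 50% annual increases.
price = 20.0
for year in range(1, 6):
    price *= 1.5
    print(f"Year {year}: ${price:.2f}/month")
```

At that rate the $20 plan crosses $100 in year four; slower growth of 20% a year would take closer to a decade to reach the same point.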

    AI will reshape productivity in profound ways. But as with streaming, the honeymoon pricing phase won’t last forever.