Blog

  • The Bicycle of the Mind

    Steve Jobs famously described the computer as a “bicycle for the mind.” In an interview decades ago, he cited a study comparing how efficiently various species could travel a mile, noting that humans were far from the most efficient.

    But when you gave a human a bicycle, the energy required to travel that same distance dropped dramatically, putting the human ahead of nearly every other creature. Jobs went on to describe humans as tool builders.

    Jobs used this analogy to explain how computers empower people and “amplify” human creativity, allowing us to do extraordinary things. Looking at technology today, it’s clear his prediction was on the mark. Computers have indeed enabled humans to create, design, and communicate in ways that were once unimaginable. Reach into your pocket (or purse) and grab your smartphone. That phone is far more than a device for making calls; personally, I have over 25,000 photos and several thousand videos on mine.

    Computers have given rise to entirely new professions — designers, photographers, programmers, content marketers — jobs that simply didn’t exist a generation ago. The same appears likely with AI.

    AI: The Next Bicycle of the Mind

    Artificial intelligence tools represent another step in human tool building. AI has the potential to democratize creativity in ways that were previously unthinkable.

    Just weeks ago, OpenAI released Sora 2 (following Google’s Nano Banana), which similarly focuses on image fidelity. These systems allow creators to upload a photo of a person and generate remarkably accurate, lifelike images: trying on different outfits or hairstyles, or placing themselves in imaginative settings. It’s a huge leap from earlier models. You can create fantastical scenes (climbing Mount Everest, eating dinner on the Titanic) that defy reality but are fun. These tools give everyone, not just professional artists, the ability to create.

    There are dedicated apps for Sora and Meta AI, both of which host a growing volume of AI-generated photos and videos (and a lot of AI slop).

    Creative Industries and AI

    The implications go far beyond personal creativity. Filmmakers, for instance, can now generate entire scenes — a cheering crowd, a packed stadium — with minimal cost. What once required massive budgets and production teams (here’s a story about the stadiums in Ted Lasso) can now be achieved with AI tools.

    George Lucas waited 16 years between Star Wars: Episode VI (1983) and Episode I (1999) because the technology he needed to capture his creative vision simply didn’t exist. After seeing Jurassic Park, he realized that computer-generated imagery had advanced enough to make his vision possible. AI tools have the potential to unlock the same creativity for countless filmmakers who aren’t named Spielberg or Lucas.

    The Productivity Curve

    Economist Jason Furman recently discussed the possibility of a productivity J-curve in relation to AI — where initial productivity may decline as we adopt these tools, but long-term gains will follow. 
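    A stylized way to picture that J-curve, with purely illustrative numbers (mine, not Furman’s): adjustment costs such as retraining and rebuilt workflows dominate at first, then fade while the gains compound.

    ```python
    # Stylized productivity J-curve; the numbers are illustrative, not Furman's.
    baseline = 100.0

    for year in range(8):
        adjustment_cost = 5.0 * 0.5 ** year   # fades as new workflows settle in
        capability_gain = 2.0 * year          # compounds with experience
        productivity = baseline - adjustment_cost + capability_gain
        print(f"Year {year}: productivity index {productivity:.1f}")
    ```

    The index dips below its starting point before climbing past it: the J.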

    Filmmakers adopting AI today may not see immediate results — it takes years to produce a film — but these technologies are entering creative pipelines now. In a few years, we should begin seeing the results: imaginative, visually stunning works produced at lower cost. (As an aside, the WSJ reports on a new film company, B5 Studios, that plans to create content more quickly and less expensively.)

    The same pattern applies to app development and web creation. Coding agents like OpenAI’s Codex or Anthropic’s Claude Code are dramatically lowering barriers for developers, and Anthropic showcases customers who have built impressive products with Claude. Apple is integrating Claude Code into Xcode, paving the way for a new wave of iPhone apps from creators who previously lacked the resources to build them.

    AI in Education and Creativity

    For universities and other educational institutions, these advances offer tremendous opportunities. Creative professionals can produce higher-quality work with fewer resources. Students in creative programs can now create visually rich, engaging projects that would have been technically or financially impossible just a few years ago.

    And the possibilities extend beyond visual arts and programming into writing. Every aspiring writer now has access to an editor, proofreader, and creative partner through AI. A budding novelist can write a first chapter and instantly receive feedback, grammatical corrections, and stylistic suggestions. AI becomes a bicycle for the mind — not replacing editors, but extending editorial support to those who previously lacked such resources.

    Of course, professional authors like John Grisham and J.K. Rowling will continue to rely on human editors and publishers. But for new authors, AI can help them polish their work and realize their creative ideas.

    The Human Potential

    As leaders, the challenge is to encourage people to see these tools not as job killers or creativity crushers, but as amplifiers of human potential. AI, like the computer before it, can help extend human flourishing.

    It’s a tool that can make us more creative, more expressive, and more capable of bringing our ideas to life. Like the bicycle that allows humans to move faster and farther than ever before, AI is the next great vehicle for the mind — empowering us to go places we never could have reached on our own.

  • Tuesday Links (Oct. 28)

  • Monday links (Oct. 27)

  • Sunday Links (Oct. 26)

  • AI Catastrophe?

    I love the quote from George Mallory about climbing Mt. Everest:

    When asked by a reporter why he wanted to climb Everest, Mallory purportedly replied, “Because it’s there.”

    We all know that it didn’t turn out so well for Mr. Mallory, and 100 years later, his reply lives on as a meme.

    Perhaps the same can be said for AI scientists: why do you build even more powerful AI systems? Because the challenge is there!

    The race to build these systems is on. Companies left and right are spending staggering sums on talent in their attempts to build superintelligence labs. Meta, for example, has committed billions to the effort. OpenAI (the leader), Anthropic (the safety-minded one), xAI (the rebel), Mistral (the Europeans), DeepSeek (the Chinese), Meta, and others are building frontier AI tools. Many are quite indistinguishable from magic.

    Each of these companies purports to be the best and most trustworthy organization to reach superintelligence, for one reason or another. Elon Musk (xAI), for example, has been quite clear that he only trusts the technology if he controls it; he even attempted a long-shot bid to purchase OpenAI earlier this year. Anthropic is overtly committed to safety and ethics, believing it is the company best suited to develop “safe” AI tools.

    (Anthropic founders Dario and Daniela Amodei, among others, left OpenAI in 2021 over concerns about AI safety, and they made so-called responsible AI development central to all of their research and product work. Of course, their AI ethics didn’t necessarily extend to traditional ethics like not stealing, but that’s a conversation for another day.)

    I’m not here to pick on the Amodeis, Musk, Meta, or any of the AI players. It’s clear that they’ve created amazing technologies with considerable utility. But there are concerns at a far higher level than AI-induced psychosis on an individual level or pirating books.

    Ezra Klein recently interviewed Eliezer Yudkowsky on his podcast. It’s another bonkers conversation, one that positions AI not as just another technology but as something with a high probability of leading to human extinction.

    The interview is informative and interesting, and if you have an hour, it’s worth listening to in its entirety. But I was particularly struck by the religious and metaphysical part of the conversation:

    Klein:

    But from another perspective, if you go back to these original drives, I’m actually, in a fairly intelligent way, trying to maintain some fidelity to them. I have a drive to reproduce, which creates a drive to be attractive to other people…

    Yudkowsky:

    You check in with your other humans. You don’t check in with the thing that actually built you, natural selection. It runs much, much slower than you. Its thought processes are alien to you. It doesn’t even really want things the way you think of wanting them. It, to you, is a very deep alien.

    Let me speak for a moment on behalf of natural selection: Ezra, you have ended up very misaligned to my purpose, I, natural selection. You are supposed to want to propagate your genes above all else.

    I mean, I do believe in a creator. It’s called natural selection. There are textbooks about how it works.

    I’m familiar with a different story in a different book. It’s about a creator and a creation that goes off the rails rather quickly. And it certainly strikes me that a less able creator (humans) might create something that behaves in ways that diverge from the creator’s intent.

    Of course, the ultimate creator I mention knew of coming treachery and had a plan. So for humanity, if AI goes wrong, do we have a plan? Yudkowsky certainly suggests that we don’t.

    I’m still bullish on AI as a normal technology, but there are smart people in the industry telling me there are big, nasty, scary risks. And because I don’t see AI development slowing, I find these concerns more salient today than ever before.

  • Thursday Links (Oct. 22)

  • More on AI & Water

    As I noted earlier this year, the water needs of European AI data centers were negligible relative to population and overall water usage.

    Andy Masley comes to the same conclusion in his recent post, The AI water issue is fake. (Also, three cheers for accurate and descriptive article titles):

    All U.S. data centers (which mostly support the internet, not AI) used 200–250 million gallons of freshwater daily in 2023. The U.S. consumes approximately 132 billion gallons of freshwater daily. The U.S. circulates a lot more water day to day, but to be extra conservative I’ll stick to this measure of its consumptive use, see here for a breakdown of how the U.S. uses water. So data centers in the U.S. consumed approximately 0.2% of the nation’s freshwater in 2023.

    However, the water that was actually used onsite in data centers was only 50 million gallons per day, the rest was used to generate electricity offsite. Most electricity is generated by heating water to spin turbines, so when data centers use electricity, they also use water. Only 0.04% of America’s freshwater in 2023 was consumed inside data centers themselves. This is 3% of the water consumed by the American golf industry.

    And later:

    This means that every single day, the average American uses enough water for 800,000 chatbot prompts. 

    I suppose if we truly want to save water, we should take shorter showers.
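    Masley’s percentages are easy to verify. Here’s a minimal back-of-the-envelope check, taking the midpoint of his 200–250 million gallon range (the midpoint is my assumption; the other figures are from the quoted post):

    ```python
    # Back-of-the-envelope check of the quoted water figures.
    total_dc = 225e6   # gal/day, all U.S. data centers, 2023 (midpoint of 200-250M)
    onsite_dc = 50e6   # gal/day, consumed onsite inside data centers
    us_total = 132e9   # gal/day, total U.S. freshwater consumption

    print(f"All data centers: {total_dc / us_total:.2%}")   # ~0.17%, i.e. ~0.2%
    print(f"Onsite only:      {onsite_dc / us_total:.3%}")  # ~0.038%, i.e. ~0.04%
    ```

    Both round to the figures Masley reports.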

    HT: Simon Willison: The AI water issue is fake

  • Tuesday Links (Oct. 21)

  • Monday Links (Oct. 20)

  • Is AI Development Slowing?

    Just a few months ago, it felt like the prevailing narrative was the incredible and unstoppable rise of AI. Reporters left and right were profiling the site AI 2027, a techno-optimist forecast of AI’s trajectory over the next 2–3 years. But since then, I’ve noticed a rising number of more pessimistic stories — ones that talk about social and interpersonal risks, financial peril, and the idea that the development of AI technology is slowing. While the first two concerns are worth considering, today we’ll focus on the idea that AI development is slowing.

    For those of you with kids, you’ll likely remember the days when they were babies, and each day seemed to bring some incredible new skill: sitting up, crawling, talking, walking, learning to open every cabinet door in the kitchen. It was hard to miss the almost daily changes. Family members would visit and note them, and as a parent, you would readily agree; the child in question had, after all, doubled in size in less than a year. But as kids grow, development seems to slow, and visiting family members become the only ones amazed by the change. Their absence is what lets them see it. “Wow,” they say, “little Johnny has truly gotten big.”

    I see the same with AI development today.

    Models introduced last year and even earlier this year had a feeling of novelty, of magic. For many of us (yours truly included), it was a revelation to see, for the first time, that AI tools had personality and real utility: help me solve a problem, answer a question, clean up some writing, write a piece of code. It was like watching an infant grow into someone who could talk.

    Perhaps more akin to elementary-age children, the pace of change for AI tools doesn’t feel as fast to many folks. The WSJ (and others) are publishing articles like “AI’s Big Leaps Are Slowing—That Could Be a Good Thing” that frame the AI story as a slowdown. But those headlines usually track product launches, not capability evolution, and I don’t see much evidence that product launches are slowing either (I can count scores just in the past few months). My read is that people came to believe AGI would mature more quickly than even the industry leaders claimed.

    It’s like Bill Gates’ maxim: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

    The Nielsen Norman Group has tracked this shift in users. As conversational AI becomes the baseline, search behaviors evolve: queries are less about “find me a link” and more about iterating with an AI assistant. In their follow-up on user perceptions, people described new agent features as “expected” rather than “wow” (NNG, 2025). The bar has moved. Our expectations have flattened because most people don’t notice the agentic, long-horizon gains. They see new AI features, feel underwhelmed, and assume the hype was overblown.

    Earlier this year, METR published research showing that models are increasingly capable of long-horizon tasks, executing sequences of operations with minimal oversight. They have since updated their report with data from more recent models.

    That’s an exponential curve, not something you’d expect with stagnation.
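    METR’s headline metric is the human-time length of tasks a model can complete at 50% reliability, which their data show doubling roughly every seven months. A toy sketch of what that kind of curve implies (the one-hour starting horizon below is illustrative, not METR’s data):

    ```python
    # Toy illustration of an exponential task-horizon curve, in the spirit
    # of METR's trend. Assumption: the horizon doubles every ~7 months
    # (METR's reported rate); the 1-hour start is illustrative, not their data.

    doubling_months = 7
    start_horizon_hours = 1.0  # illustrative starting point

    for year in range(1, 5):
        months = 12 * year
        horizon = start_horizon_hours * 2 ** (months / doubling_months)
        print(f"After {year} year(s): task horizon of roughly {horizon:.0f} hours")
    ```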

    Meanwhile, on the macroeconomic stage, activity hasn’t slowed. AI investment is still surging, with economists crediting the technology for meaningful boosts to growth. There are mixed reports about adoption: Apollo Academy reports a cooling in AI adoption rates among large corporations, even as internal development ramps up. But AI coding tool installation continues to rise: tracking installs of the top four AI coding tools, you’ll find a nearly 20% increase in daily installations over the past 30 days.

    Back to AI 2027: the predictions about agentic AI in late 2027 seem to be more or less on pace, give or take a month or so. The risk in all of this is mistaking familiarity for maturity. The awe has worn off, so it’s easy to assume the growth has too. But look at what METR’s testing shows, at how users are adopting AI without fanfare, at how developers are folding AI tools into their work, and at how capital is still flowing: the picture is clear. Progress remains swift.

    AI development isn’t truly slowing—it’s maturing. As the initial novelty wears off, real advances continue beneath the surface, driven by capability gains, steady investment, and evolving user behavior.