Blog

  • WSJ: ‘Vibe Coding’ Has Arrived for Businesses

    WSJ: ‘Vibe Coding’ Has Arrived for Businesses (July 8, 2025)

    Vibe coding (using AI tools to create code) has exploded in popularity this year, speeding prototyping and development considerably. But experienced engineers are still required to confirm the AI-assisted development work fulfills the requirements and follows security best practices.

    Creating your own app is now possible with any number of artificial intelligence-based tools, leading to the “vibe coding” revolution for code-writing amateurs.

    But professional developers are picking it up now, too, bringing the practice—generally understood as the ability to create functioning apps and websites without strictly editing code—into businesses.

    Using AI tools like OpenAI’s GPT models and Anthropic’s Claude, Wilkinson’s (Vanguard’s divisional chief information officer for financial adviser services) team is vibe coding new webpages with the help of product and design staff. The process has eliminated the need for traditional handoffs of work between teams, speeding up the design for a new Vanguard webpage by 40%. Prototyping went from taking two weeks to 20 minutes, she said.

    “The role of the engineer is still very, very critical to make sure that the boundaries and conditions are set up front for what the vibe coding is going to produce,” she said. “It doesn’t excuse the engineer from needing to understand what’s going on behind the scenes.”

    I built my first vibe-coded app this week, and I was astonished by Claude Code and what it wrote. But I also have enough dormant development skills to understand how to create a new webserver instance, install tools using the terminal, and write MySQL queries.

    Jude Schramm, CIO of Fifth Third Bank, said the regional bank’s 700 full-time engineers may be entirely vibe coding in a few years’ time. Schramm said he’s already thinking more about the value of his developers as business problem-solvers rather than as code authors.

    This suggests that expertise remains a necessary component of the vibes-assisted world.

  • WSJ: Your Prize for Saving Time at Work With AI: More Work

    WSJ: Your Prize for Saving Time at Work With AI: More Work (July 8, 2025)

It’s the age-old tension between employee satisfaction and employer-demanded productivity. I (optimistically) believe it’s possible to use AI tools to take the drudgery out of the least agreeable parts of work and provide more time for creativity and innovative pursuits.

    A recent survey found nearly half of workers believe their AI time savings should belong to them, not their employers. That survey, conducted by business-software maker SAP, also found that workers using AI save almost an hour a day on average.

    But my optimism is tempered by knowing that a recession is coming, and companies have used these downturns to prune headcount and raise expectations of remaining employees. This seems the likely result, regardless of how these tools could be mutually beneficial.

    The clear message from [Andy Jassy] and other business leaders is that we can’t simply do as much work as we’ve been doing, in less time, and clock out early. If we do, we risk being replaced by someone who uses AI to increase output.

  • Ed Zitron: Anthropic Is Bleeding Out

    Ed Zitron: Anthropic Is Bleeding Out (July 10, 2025)

    AI and technology critic Ed Zitron explores the possibility that Anthropic’s pricing model is insufficient for long-term sustainability. His predictions are bleak for the company.


AI IDE Cursor recently raised its prices, passing along Anthropic’s price increases of late May 2025.

    What I have described in this newsletter is one of the most dramatic and aggressive price increases in the history of software, with effectively no historical comparison. No infrastructure provider in the history of Silicon Valley has so distinctly and aggressively upped its prices on customers, let alone their largest and most prominent ones, and doing so is an act of desperation that suggests fundamental weaknesses in their business models.

I do take some issue with how Zitron frames the price increases. I remember when Netflix effectively raised its prices by 60% when it split the streaming and DVD portions of its business. Rapid price hikes do happen at technology companies.

    Nevertheless, there’s one much, much, much bigger problem: Anthropic is very likely losing money on every single Claude Code customer, and based on my analysis, appears to be losing hundreds or even thousands of dollars per customer.

    The reality is that developers are quite adroit at pushing the limits of technology, finding clever ways to maximize what they get out of subscriptions. It seems like Anthropic needs more developers on their platform, particularly ones who aren’t very active programmers.

  • AI Robot Massage

    WSJ: I Pitted an AI Robot Massage Against the Real Thing (July 7, 2025)

    The Aescape massage robot has significant limitations compared to the human equivalent (specifically in working on the neck and head), and it has far fewer AI chops than the marketing suggests.

WSJ columnist Dawn Gilbertson:

    The robot can’t reach two areas that are most enjoyable for me, the head and neck. And, in this particular case, I had a wicked stiff neck that needed attention.

  • Meta’s AI Spending Spree

    After reorganizing its AI group in May, Meta has been on a free agent hiring spree. So much so that one intrepid developer created a little dashboard: Zuck’s Haul.

    As of July 6, the total spent is $247m. Big numbers, folks.

  • Joanna Stern on AI Energy Uses

    Joanna Stern from the Wall Street Journal explores: How Much Energy Does Your AI Prompt Use? I Went to a Data Center to Find Out. Her findings are helpful but not surprising.

    Estimated Energy Usage by AI Task:

    • Text generation: 0.17-1.7 watt-hours (depending on model size)
    • Image generation: About 1.7 watt-hours for a 1024×1024 image
    • Video generation: 20-110 watt-hours for just 6 seconds of video

For context: I can turn off an 8-watt LED lamp (60-watt equivalent) for an hour and save roughly enough energy to create 5 images. Or, if you have a 4-ton AC, you could turn it off for one hour and generate 40 videos.

    In terms of consumption, a gallon of gas contains 33.7 kilowatt-hours, meaning I could ask ChatGPT nearly 100,000 questions for the same energy cost as driving 26 miles (for the average 2022 model-year vehicle).
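The back-of-the-envelope math above is easy to check. Here is a short sketch using the article’s per-task figures; the ~4 kW draw for a 4-ton AC and the ~0.34 Wh per text prompt (implied by the gasoline comparison) are my assumptions for illustration:

```python
# Sanity-check the energy comparisons. Per-task figures come from the
# article; the AC draw and per-prompt figure are illustrative assumptions.

image_wh = 1.7        # Wh per 1024x1024 image
video_wh = 100        # Wh for ~6 seconds of video (upper end of the range)
prompt_wh = 0.34      # Wh per text prompt (implied by ~100,000 prompts/gallon)

lamp_wh_per_hour = 8          # an 8 W LED lamp off for one hour saves 8 Wh
images_per_lamp_hour = lamp_wh_per_hour / image_wh   # ~4.7, roughly 5 images

ac_wh_per_hour = 4_000        # assumed ~4 kW draw for a 4-ton AC
videos_per_ac_hour = ac_wh_per_hour / video_wh       # ~40 videos

gallon_wh = 33_700            # one gallon of gasoline, in watt-hours
prompts_per_gallon = gallon_wh / prompt_wh           # ~99,000 prompts

print(round(images_per_lamp_hour, 1),
      round(videos_per_ac_hour),
      round(prompts_per_gallon))
```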

    I think we ought to be mindful of the environment and be good stewards of our planet, but I think it’s also important to have context behind these numbers. The potential scope of use is huge (7+ billion people), but relative energy consumption per request remains low and declining with silicon improvements.

    Nvidia has seen a jump in energy efficiency with its latest Blackwell Ultra chips, according to Josh Parker, the company’s head of sustainability. “We’re using 1/30th of the energy for the same inference workloads that we were just a year ago,” Parker said.

We saw this with the shift from incandescent to LED light bulbs. The cost of lighting a building, in both energy use and dollars spent, is far lower today than it was 20 years ago. I have every reason to expect the same to happen in computing, particularly related to AI technology.

  • IEEE Spectrum: Large Language Models Are Improving Exponentially

A recent report predicts a bright future for LLMs:

    That was a key motivation behind work at Model Evaluation & Threat Research (METR). The organization, based in Berkeley, Calif., “researches, develops, and runs evaluations of frontier AI systems’ ability to complete complex tasks without human input.” In March, the group released a paper called Measuring AI Ability to Complete Long Tasks, which reached a startling conclusion: According to a metric it devised, the capabilities of key LLMs are doubling every seven months. This realization leads to a second conclusion, equally stunning: By 2030, the most advanced LLMs should be able to complete, with 50 percent reliability, a software-based task that takes humans a full month of 40-hour workweeks. And the LLMs would likely be able to do many of these tasks much more quickly than humans, taking only days, or even just hours.
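METR’s doubling claim is easy to project forward. A minimal sketch, assuming (roughly what the paper reported for early 2025) a current ~1-hour task horizon at 50% reliability:

```python
import math

DOUBLING_MONTHS = 7          # METR's reported doubling time for task horizon
start_horizon_hours = 1.0    # assumed ~1-hour horizon in early 2025 (illustrative)
target_hours = 4 * 40        # one month of 40-hour workweeks = 160 hours

# The horizon grows as start * 2**(t / DOUBLING_MONTHS); solve for t.
months_needed = DOUBLING_MONTHS * math.log2(target_hours / start_horizon_hours)
print(f"~{months_needed:.0f} months after early 2025")  # ~51 months, i.e. 2029-2030
```

Which lands right around the paper’s “by 2030” projection.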

    As a caveat — I’m not sure how many companies would be satisfied with a 50% success rate for key software. Having an AI tool complete a job that would take a human a full month would be a good thing. But let’s face it, a person still has to determine if the work was done satisfactorily. 50% isn’t a passing grade for any subject.

  • WSJ: CEOs Start Saying the Quiet Part Out Loud: AI Will Wipe Out Jobs

Analysts have been seeing structural changes in the job market related to AI, and now CEOs are admitting it openly. Ford’s CEO, Jim Farley, suggests that 50% of white-collar jobs will be trimmed. JPMorgan exec Marianne Lake also sees a 10% drop in headcount.

    “I think it’s going to destroy way more jobs than the average person thinks,” James Reinhart, CEO of the online resale site ThredUp, said at an investor conference in June.

While Microsoft’s CEO isn’t publicly declaring that AI will cause job losses, the company did announce another reduction this month, bringing its recent layoffs to a total of around 15,000 people.

WSJ: How a Bold Plan to Ban State AI Laws Fell Apart—and Divided Trumpworld

As I noted last week, Congressional efforts to block state AI laws in the Big Beautiful Bill lost support and were ultimately dropped from the Senate bill by a lopsided 99-1 vote.

  • Bad (Uses of) AI

    From MIT Technology Review: People are using AI to ‘sit’ with them while they trip on psychedelics. “Some people believe chatbots like ChatGPT can provide an affordable alternative to in-person psychedelic-assisted therapy. Many experts say it’s a bad idea.” I’d like to hear from the experts who say this is a good idea.

    Above the Law: Trial Court Decides Case Based On AI-Hallucinated Caselaw. “Shahid v. Esaam, out of the Georgia Court of Appeals, involved a final judgment and decree of divorce served by publication. When the wife objected to the judgment based on improper service, the husband’s brief included two fake cases.” From the appellate court: “As noted above, the irregularities in these filings suggest that they were drafted using generative AI.”

Futurism: People Are Being Involuntarily Committed, Jailed After Spiraling Into “ChatGPT Psychosis.” “At the core of the issue seems to be that ChatGPT, which is powered by a large language model (LLM), is deeply prone to agreeing with users and telling them what they want to hear.”

    “What I think is so fascinating about this is how willing people are to put their trust in these chatbots in a way that they probably, or arguably, wouldn’t with a human being,” Pierre said. “And yet, there’s something about these things — it has this sort of mythology that they’re reliable and better than talking to people. And I think that’s where part of the danger is: how much faith we put into these machines.”

The Register: AI agents get office tasks wrong around 70% of the time, and a lot of them aren’t AI at all. “IT consultancy Gartner predicts that more than 40 percent of agentic AI projects will be cancelled by the end of 2027 due to rising costs, unclear business value, or insufficient risk controls.” Gartner further notes that many products marketed as agentic AI aren’t actually AI at all.