Tag: AI-20250707

  • Meta’s AI Spending Spree

    After reorganizing its AI group in May, Meta has been on a free-agent hiring spree. So much so that one intrepid developer created a little dashboard: Zuck’s Haul.

    As of July 6, the total spent is $247m. Big numbers, folks.

  • Joanna Stern on AI Energy Uses

    Joanna Stern from the Wall Street Journal explores: How Much Energy Does Your AI Prompt Use? I Went to a Data Center to Find Out. Her findings are helpful but not surprising.

    Estimated Energy Usage by AI Task:

    • Text generation: 0.17-1.7 watt-hours (depending on model size)
    • Image generation: About 1.7 watt-hours for a 1024×1024 image
    • Video generation: 20-110 watt-hours for just 6 seconds of video

    For context: I could turn off an 8-watt LED lamp (a 60-watt-equivalent bulb) for an hour and save roughly enough energy to generate 5 images. Or, if you have a 4-ton AC, you could turn it off for one hour and generate about 40 videos.

    For another comparison: a gallon of gas contains 33.7 kilowatt-hours of energy, meaning I could ask ChatGPT nearly 100,000 questions for the same energy cost as driving 26 miles (about one gallon’s worth for the average 2022 model-year vehicle).
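
    For those who want to check the math, here is a quick back-of-the-envelope sketch in Python. It leans on a few assumptions that are not in Stern’s piece (roughly 0.34 Wh per text prompt, 100 Wh per short video, and about a 4 kW draw for a 4-ton air conditioner), so treat the outputs as order-of-magnitude estimates, not measurements.

    ```python
    # Back-of-the-envelope energy comparisons; the constants below are rough
    # assumptions, not measured values.
    WH_PER_IMAGE = 1.7      # ~1.7 Wh per 1024x1024 image (estimate cited above)
    WH_PER_VIDEO = 100      # high end of the 20-110 Wh range for 6 seconds of video
    WH_PER_PROMPT = 0.34    # assumed mid-range text prompt (0.17-1.7 Wh spread)
    LED_LAMP_W = 8          # 8 W LED bulb (60 W incandescent equivalent)
    AC_4_TON_W = 4_000      # assumed ~4 kW draw for a 4-ton air conditioner
    GALLON_GAS_WH = 33_700  # 33.7 kWh of energy in a gallon of gasoline

    print(f"Images per lamp-hour saved: {LED_LAMP_W / WH_PER_IMAGE:.1f}")        # ~4.7
    print(f"Videos per AC-hour saved:   {AC_4_TON_W / WH_PER_VIDEO:.0f}")        # ~40
    print(f"Prompts per gallon of gas:  {GALLON_GAS_WH / WH_PER_PROMPT:,.0f}")   # ~99,000
    ```

    The outputs land roughly where the comparisons above do: about 5 images per lamp-hour, 40 videos per AC-hour, and nearly 100,000 prompts per gallon of gas.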

    I think we ought to be mindful of the environment and be good stewards of our planet, but it’s also important to have context for these numbers. The potential scope of use is huge (7+ billion people), but the relative energy consumption per request remains low and is declining as silicon improves.

    Nvidia has seen a jump in energy efficiency with its latest Blackwell Ultra chips, according to Josh Parker, the company’s head of sustainability. “We’re using 1/30th of the energy for the same inference workloads that we were just a year ago,” Parker said.

    We saw this with the shift from incandescent to LED light bulbs: the cost of lighting a building, in both energy used and dollars spent, is much lower today than it was 20 years ago. I have every reason to expect the same to happen in computing, particularly with AI.

  • Bad (Uses of) AI

    From MIT Technology Review: People are using AI to ‘sit’ with them while they trip on psychedelics. “Some people believe chatbots like ChatGPT can provide an affordable alternative to in-person psychedelic-assisted therapy. Many experts say it’s a bad idea.” I’d like to hear from the experts who say this is a good idea.

    Above the Law: Trial Court Decides Case Based On AI-Hallucinated Caselaw. “Shahid v. Esaam, out of the Georgia Court of Appeals, involved a final judgment and decree of divorce served by publication. When the wife objected to the judgment based on improper service, the husband’s brief included two fake cases.” From the appellate court: “As noted above, the irregularities in these filings suggest that they were drafted using generative AI.”

    Futurism: People Are Being Involuntarily Committed, Jailed After Spiraling Into “ChatGPT Psychosis.” “At the core of the issue seems to be that ChatGPT, which is powered by a large language model (LLM), is deeply prone to agreeing with users and telling them what they want to hear.”

    “What I think is so fascinating about this is how willing people are to put their trust in these chatbots in a way that they probably, or arguably, wouldn’t with a human being,” Pierre said. “And yet, there’s something about these things — it has this sort of mythology that they’re reliable and better than talking to people. And I think that’s where part of the danger is: how much faith we put into these machines.”

    The Register: AI agents get office tasks wrong around 70% of the time, and a lot of them aren’t AI at all. “IT consultancy Gartner predicts that more than 40 percent of agentic AI projects will be cancelled by the end of 2027 due to rising costs, unclear business value, or insufficient risk controls.” Gartner further notes that much of what vendors sell as agentic “AI” isn’t actually AI.

  • AI Free Agency

    From the Wall Street Journal: Mark Zuckerberg Announces New Meta ‘Superintelligence Labs’ Unit and a partial reorganization of Meta.

    Mark Zuckerberg announced a new “Superintelligence” division within Meta Platforms, officially organizing an effort that has been the subject of an intense recruiting blitz in recent months.

    Former Scale CEO Alexandr Wang will lead the team as chief AI officer, and former GitHub CEO Nat Friedman will lead the company’s work on AI products, according to an internal memo Zuckerberg sent to employees that was viewed by The Wall Street Journal. 

    This follows another WSJ article last week about “The List,” a recruiting effort meant to remedy Meta’s recent, disappointing Llama work.

    All over Silicon Valley, the brightest minds in AI are buzzing about “The List,” a compilation of the most talented engineers and researchers in artificial intelligence that Mark Zuckerberg has spent months putting together. 

    Facebook’s pivot from virtual reality / the metaverse (Facebook -> Meta) to AI suggests that the metaverse was the wrong bet. I suspect Zuckerberg knows it, too, but this huge spending spree aligns with Zuck’s ethos: move fast and break things.

    In a world where a really good basketball player (Shai Gilgeous-Alexander) can command $285 million over four years, spending upwards of $100 million per transformative engineer seems like a relative bargain.

  • If AI Lets Us Do More in Less Time—Why Not Shorten the Workweek?

    It’s a good question, particularly for white-collar roles: if workers are more productive because of AI, should the workweek be shorter?

    This question is increasingly central to debates about the future of work and closely tied to the growing interest in the four-day workweek. According to Convictional CEO Roger Kirkness, his team was able to shift to a 32-hour schedule without any pay cuts—thanks to AI. As he told his staff, “Fridays are now considered days off.” The reaction was enthusiastic. “Oh my God, I was so happy,” said engineer Nick Wechner, who noted how much more quickly he could work using AI tools.

    Aside from being a contender for the boss of the year award, Kirkness recognizes the key criterion for success: getting your work done. If the work can be done faster, companies can choose to: (1) reduce the total number of hours worked per employee (without reducing headcount); (2) reduce headcount by a commensurate number (in Convictional’s case, 20%); or (3) grow the company to do more work with a similar number of employees.

    As a worker, I’m sympathetic to the idea of a shorter workweek, but I suspect growth is the more realistic option: employees continue to work similar hours, and the productivity gains show up as company growth rather than headcount growth.

  • Microsoft Releases Copilot Extension for VS Code

    From Microsoft:

    GitHub Copilot is an AI peer programming tool that helps you write code faster and smarter.

    GitHub Copilot adapts to your unique needs allowing you to select the best model for your project, customize chat responses with custom instructions, and utilize agent mode for AI-powered, seamlessly integrated peer programming sessions.

    Simon Willison reports, “So far this is just the extension that provides the chat component of Copilot, but the launch announcement promises that Copilot autocomplete will be coming in the near future.”

    I’ve been pessimistic about Copilot, as I noted in a post earlier today about Copilot’s misleading advertising. But Anthropic has made impressive strides with its programming tools, so perhaps Microsoft is taking steps toward a more useful agent.

  • Bloomberg: Apple Weighs Using Anthropic or OpenAI to Power Siri in Major Reversal

    Mark Gurman reports (paywall) that Apple is considering using OpenAI or Anthropic to power Siri.

    Maginative has a little more on Apple’s failures with AI:

    This isn’t just about technology. It’s about Apple essentially admitting it can’t keep up in the most important tech race in decades.

    The backstory makes this even more dramatic. Apple originally promised enhanced Siri capabilities in 2024, then delayed them to 2025, and finally pushed them indefinitely to 2026. Some within Apple’s AI division believe the features could be scrapped altogether and rebuilt from scratch.

    I have a lot of Apple products, and I find Siri’s utility limited to things like “play the song Back in Black” or “call my wife.” The Apple Intelligence presentation from WWDC 2024 remains a black eye for the company, so I welcome this news as a frank recognition of Apple’s position in the AI race and as a way to make its products more useful for end users.

  • TechCrunch: Congress might block state AI laws for five years

    Senators Ted Cruz and Marsha Blackburn included a measure in the “Big Beautiful Bill” currently in the works that would limit most state oversight of AI for the next five years. Pushback from critics (and the Senate Parliamentarian) has already forced changes that reduce the measure’s scope and duration.

    However, over the weekend, Cruz and Sen. Marsha Blackburn (R-TN), who has also criticized the bill, agreed to shorten the pause on state-based AI regulation to five years. The new language also attempts to exempt laws addressing child sexual abuse materials, children’s online safety, and an individual’s rights to their name, likeness, voice, and image. However, the amendment says the laws must not place an “undue or disproportionate burden” on AI systems — legal experts are unsure how this would impact state AI laws.

    The measure is supported by some in the tech industry, including OpenAI CEO Sam Altman, while Anthropic’s leadership is opposed.

    I’m sympathetic to the aims of this provision, as a patchwork of 50 state laws regulating AI would make it more difficult to innovate in this space. But I’m also aware of real-life harm (as a recent NY Times story profiled), so I’d be much more sanguine if we had federal-level regulation instead, a prospect that seems very unlikely given the current political makeup.

  • The Verge: Microsoft should change its Copilot advertising, says watchdog

    The BBB’s National Advertising Division critiques Microsoft’s recent advertising for Clippy (I mean Copilot) and finds quite a bit of puffery.

    From The Verge:

    Microsoft has been claiming that Copilot has productivity and return on investment (ROI) benefits for businesses that adopt the AI assistant, including that “67%, 70%, and 75% of users say they are more productive” after a certain amount of Copilot usage.

    And from the original report from the BBB National Programs’ National Advertising Division:

    NAD found that although the study demonstrates a perception of productivity, it does not provide a good fit for the objective claim at issue. As a result, NAD recommended the claim be discontinued or modified to disclose the basis for the claim. 

    Puffery aside, this aligns with my observations of Copilot: the branding is confusing, the product integrations are suspect, and the tool lags far behind other AI/LLM agents like Gemini, ChatGPT, and Claude.

  • Checking In on AI and the Big Five

    Ben Thompson writes about the Big Five (Amazon, Apple, Google, Meta/Facebook, Microsoft) and where each stands in the AI field today.

    … [is] AI complementary to existing business models (i.e. Apple devices are better with AI) or disruptive to them (i.e. AI might be better than Search but monetize worse). A higher level question, however, is if AI simply obsoletes everything, from tech business models to all white collar work to work generally or even to life itself.

    Perhaps it is the smallness of my imagination or my appreciation of the human condition that makes me more optimistic than many about the probability of the most dire of predictions: I think they are quite low. At the same time, I think that those dismissing AI as nothing but hype are missing the boat as well. This is a big deal, even if the changes may end up fitting into the Bill Gates maxim that “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

    I tend to agree with Thompson’s predictions — change over the next decade will be significant (and hard to imagine now) and the likelihood of the dire predictions coming true is astonishingly low in the near term.

    Like Thompson, I assumed that Microsoft’s partnership with OpenAI would position it to lap the other companies listed here, but the Copilot product has been persistently disappointing, especially when set against ChatGPT’s rising utility. Google Gemini is gaining capabilities as a tool, particularly around Veo and programming, although the Gemini-infused Google search results still make too many embarrassing mistakes to be useful today.