Category: AI

  • TechCrunch: Congress might block state AI laws for five years

    Senators Ted Cruz and Marsha Blackburn included a measure to limit (most) state oversight of AI for the next five years as part of the “Big Beautiful Bill” currently in the works. Pushback from critics (and the Senate Parliamentarian) has already reduced the scope and duration of the provision.

    From TechCrunch:

    However, over the weekend, Cruz and Sen. Marsha Blackburn (R-TN), who has also criticized the bill, agreed to shorten the pause on state-based AI regulation to five years. The new language also attempts to exempt laws addressing child sexual abuse materials, children’s online safety, and an individual’s rights to their name, likeness, voice, and image. However, the amendment says the laws must not place an “undue or disproportionate burden” on AI systems — legal experts are unsure how this would impact state AI laws.

    The measure is supported by some in the tech industry, including OpenAI CEO Sam Altman, while Anthropic’s leadership is opposed.

    I’m sympathetic to the aims of this bill, as a patchwork of 50 state laws regulating AI would make it more difficult to innovate in this space. But I’m also aware of real-life harms (like those profiled in a recent NY Times story), so I’d be much more sanguine if we had federal-level regulation, a prospect that seems very unlikely given the current political makeup.

  • The Verge: Microsoft should change its Copilot advertising, says watchdog

    BBB National Programs’ National Advertising Division (NAD) critiques Microsoft’s recent advertising for Clippy (I mean Copilot) and found quite a bit of puffery.

    From The Verge:

    Microsoft has been claiming that Copilot has productivity and return on investment (ROI) benefits for businesses that adopt the AI assistant, including that “67%, 70%, and 75% of users say they are more productive” after a certain amount of Copilot usage. “NAD found that although the study demonstrates a perception of productivity, it does not provide a good fit for the objective claim at issue,” says the watchdog in its review. “As a result, NAD recommended the claim be discontinued or modified to disclose the basis for the claim.”

    Aside from puffery, this aligns with my observations of Copilot. The branding is confusing, the integration with products is suspect, and the tool lags far behind other AI/LLM agents like Gemini, ChatGPT, and Claude.

  • Checking In on AI and the Big Five

    Ben Thompson writes on the Big Five (Amazon, Apple, Google, Meta/Facebook, Microsoft) and where each stands in the AI field today.

    … [is] AI complementary to existing business models (i.e. Apple devices are better with AI) or disruptive to them (i.e. AI might be better than Search but monetize worse). A higher level question, however, is if AI simply obsoletes everything, from tech business models to all white collar work to work generally or even to life itself.

    Perhaps it is the smallness of my imagination or my appreciation of the human condition that makes me more optimistic than many about the probability of the most dire of predictions: I think they are quite low. At the same time, I think that those dismissing AI as nothing but hype are missing the boat as well. This is a big deal, even if the changes may end up fitting into the Bill Gates maxim that “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

    I tend to agree with Thompson’s predictions — change over the next decade will be significant (and hard to imagine now) and the likelihood of the dire predictions coming true is astonishingly low in the near term.

    Like Thompson, I assumed that Microsoft’s partnership with OpenAI would position them to lap the other companies listed here, but the Copilot product is persistently disappointing, especially when considering ChatGPT’s rising utility. Google Gemini, as a tool, is gaining capabilities, particularly as it relates to Veo and programming, although I think the Gemini-infused Google search results have too many embarrassing mistakes for it to be a useful tool today.

  • Resisting AI?

    Dan McQuillan writes, “The role of the University is to resist AI,” following themes from Ivan Illich’s ‘Tools for Conviviality’.

    It’s a scathing overview with points that I think many others wonder about (although in less concrete ways than McQuillan).

    Contemporary AI is a specific mode of connectionist computation based on neural networks and transformer models. AI is also a tool in Illich’s sense; at the same time, an arrangement of institutions, investments and claims. One benefit of listening to industry podcasts, as I do, is the openness of the engineers when they admit that no-one really knows what’s going on inside these models.

    Let that sink in for a moment: we’re in the midst of a giant social experiment that pivots around a technology whose inner workings are *unpredictable and opaque*.

    The highlight is mine. I agree that there’s something disconcerting about using systems that we don’t understand fully.

    Generative AI’s main impact on higher education has been to cause panic about students cheating, a panic that diverts attention from the already immiserated experience of marketised studenthood. It’s also caused increasing alarm about staff cheating, via AI marking and feedback, which again diverts attention from their experience of relentless and ongoing precaritisation.

    The hegemonic narrative calls for universities to embrace these tools as a way to revitalise pedagogy, and because students will need AI skills in the world of work. A major flaw with this story is that the tools don’t actually work, or at least not as claimed.

    AI summarisation doesn’t summarise; it simulates a summary based on the learned parameters of its model. AI research tools don’t research; they shove a lot of searched-up docs into the chatbot context in the hope that will trigger relevancy. For their part, so-called reasoning models ramp up inference costs while confabulating a chain of thought to cover up their glaring limitations.

    I think there are philosophical questions here worth considering. Specifically, the postulation that AI simply “simulates” is too simplistic to be helpful. What is a photograph? It’s a real thing, but not the real thing captured in the image. What is a video played on a computer screen? It’s a real thing, but it’s not the real thing. The photo and the screen simulate the real world, yet I’m not aware of modern philosophers critiquing these forms of media. (I’d suspect that earlier media theorists did just that until the media were accepted en masse by society.)

    He goes on to cite environmental concerns (although, as I posted recently, the claims about water consumption are exaggerated) among the things we’d do well to heed. His language is perhaps a bit too revolutionary.

    As for people’s councils — I am less sanguine that these have much utility.

    Instead of waiting for a liberal rules-based order to magically appear, we need to find other ways to organise to put convivial constraints into practice. I suggest that a workers’ or people’s council on AI can be constituted in any context to carry out the kinds of technosocial inquiry advocated for by Illich, that the act of doing so prefigures the very forms of independent thought which are undermined by AI’s apparatus, and manifests the kind of careful, contextual and relational approach that is erased by AI’s normative scaling.

    I suspect that people’s councils are glorified committees — structures that are more kabuki theater than anything else and will struggle to keep pace with the speed at which AI tools are emerging.

    The role of the university isn’t to roll over in the face of tall tales about technological inevitability, but to model the forms of critical pedagogy that underpin the social defence against authoritarianism and which makes space to reimagine the other worlds that are still possible.

    I don’t share all of his fears, but it’s important to consider voices that may not align with a techno-optimistic future.

  • NMSU to offer Bachelor of Science in AI

    New Mexico State University will offer a Bachelor of Science in AI starting in 2026. I expect to see more degrees like this as universities begin to incorporate AI-specific learning.

    From their press release:

    AI jobs are those where a significant portion of the tasks can be performed or aided by artificial intelligence. This means that it is likely to impact the way these jobs are done, potentially leading to automation, new job roles or changes in required skills. Pontelli emphasized AI should be viewed as an opportunity.

  • Washington Post: AI is transforming Indian call centers. What does it mean for workers?

    The premise of the story is fascinating: AI technology makes it easier for call center workers to communicate with Americans. Call center worker Kartikeya Kumar is pleased: “Now the customer doesn’t know where I am located…If it makes the caller happy, it makes me happy, too.”

    Indian firms have leveraged AI tech to improve their offerings, and now the country has more call centers than anywhere else.

    “We don’t see AI as taking jobs away,” said MV Prasanth, the chief operating officer for Teleperformance in India. “We see it as easier tasks being moved into self-serve,” allowing Kumar and his colleagues to focus on “more complex tasks.”

    But the article isn’t all roses: there are concerns about “whitewashing” voices and fears of job losses (particularly entry-level ones).

    Even the most hopeful admit that workers who can’t adapt will fall behind. “It’s like the industrial revolution,” said Prithvijit Roy, Accenture’s former lead for its Global AI Hub. “Some will suffer.”

  • Waymo: New Insights for Scaling Laws in Autonomous Driving

    Waymo recently published a study outlining the importance of large data sets for improved autonomous vehicle performance.

    The last few years of AI performance have been powered by scale. It has been repeatedly shown that the performance of deep learning models scales predictably as we increase model size, dataset size, and training compute. These scaling laws drive continuous advancements in large language models (LLMs) in particular, as evidenced by the increasingly capable AI systems we see emerging regularly.
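
    For context on what “scales predictably” means, these scaling laws are usually written as a simple power law. The form below is a sketch of the generic Kaplan/Chinchilla-style formulation from the LLM literature, not Waymo’s specific fit; the constants are placeholders:

    ```latex
    % Generic power-law scaling of loss with model and dataset size.
    % A sketch of the standard LLM-literature form; Waymo fits its own
    % constants for driving tasks, which are not reproduced here.
    %   L = validation loss        N = number of model parameters
    %   D = dataset size           E = irreducible loss
    %   A, B, \alpha, \beta = empirically fitted constants
    L(N, D) = \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} + E
    ```

    The practical upshot: loss falls smoothly as either the model or the dataset grows, which is why the post treats Waymo’s trove of driving data as a durable advantage.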

    The post is hard to read and full of inside-baseball terminology, but the results clearly suggest that larger data sets are helpful. Specifically, “Closed-loop performance follows a similar scaling trend. This suggests, for the first time, that real-world AV performance can be improved by increasing training data and compute.” This certainly suggests that Waymo and Tesla have a huge upper hand in the future autonomy battles because of their enormous troves of data.

  • Using AI Right Now: A Quick Guide

    Ethan Mollick publishes a very helpful guide for using AI tools right now. I think his conclusions are spot-on:

    For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT. With all of the options, you get access to both advanced and fast models, a voice mode, the ability to see images and documents, the ability to execute code, good mobile apps, the ability to create images and video (Claude lacks here, however), and the ability to do Deep Research. Some of these features are free, but you are generally going to need to pay $20/month to get access to the full set of features you need. I will try to give you some reasons to pick one model or another as we go along, but you can’t go wrong with any of them.

    As for getting started, his advice is great:

    So now you know where to start. First, pick a system and resign yourself to paying the $20 (the free versions are demos, not tools). Then immediately test three things on real work: First, switch to the powerful model and give it a complex challenge from your actual job with full context and have an interactive back and forth discussion. Ask it for a specific output like a document or program or diagram and ask for changes until you get a result you are happy with. Second, try Deep Research on a question where you need comprehensive information, maybe competitive analysis, gift ideas for someone specific, or a technical deep dive. Third, experiment with voice mode while doing something else — cooking, walking, commuting — and see how it changes your ability to think through problems.

  • New York Magazine: Everyone Is Already Using AI (And Hiding It)

    The cost of moviemaking is growing while theater audiences continue to shrink. Studio execs love the idea of using AI to reduce costs, while creators (directors, producers) are excited about the visuals they can achieve with new AI tools. There are concerns about IP, but in the end it’s no longer a question of “if” but “how” AI can be used in effective and ethical ways. That doesn’t mean people aren’t frustrated or that the path is certain.

    Some fun quips:

    As [Bryn] Mooser saw it, Asteria fit into a lineage of creatives who had ushered in new eras of filmmaking. He reminded me that Walt Disney was a technologist. So was George Lucas. “The story of Hollywood is the story of technology,” he said.

    And

    [Natasha Lyonne] had begun to do her own research. She read the Oxford scholar Brian Christian and the philosopher Nick Bostrom, who argues that AI presents a significant threat to humanity’s long-term existence. Still, she had come to feel it was too late to “put the genie back in that bottle.” “It’s better to get your hands dirty than pretend it’s not happening,” she said.

    Also:

    • “It’s happening whether we like it or not.”
    • “Everyone’s using it,” the agent said. “They just don’t talk about it.”

    Many of these studios are developing sophisticated methods of working with generative video — the kind that, when given a prompt, can spit out an image or a video and has the potential to fundamentally change how movies are made.

    Lastly:

    “If you’re a storyboard artist,” one studio executive said, “you’re out of business. That’s over. Because the director can say to AI, ‘Here’s the script. Storyboard this for me. Now change the angle and give me another storyboard.’ Within an hour, you’ve got 12 different versions of it.” He added, however, if that same artist became proficient at prompting generative-AI tools, “he’s got a big job.”

  • Ethan Mollick on Sea Otters and AI Image Generation

    This is a fun post — the recent improvements in AI photo/image generation as seen through sea otter imagery.

    Spoiler alert: the improvements are profound — from grainy, hard-to-recognize images to impressively detailed video.

    If you put these trends together, it becomes clear that we are heading towards a place where not only are image and video generations likely to be good enough to fool most people, but that those capabilities will be widely available and, thanks to open models, very hard to regulate or control. I think we need to be prepared for a world where it is impossible to tell real from AI-generated images and video, with implications for a wide swath of society, from the entertainment we enjoy to our trust for online content.

    That future is not far away, as you can see from this final video, which I made with simple text prompts to Veo 3. When you are done watching (and I apologize in advance for the results of the prompt “like the musical Cats but for otters”), look back at the first Midjourney image from 2022. The time between a text prompt producing abstract masses of fur and those producing realistic videos with sound was less than three years.