Blog

  • Using AI Right Now: A Quick Guide

    Ethan Mollick publishes a very helpful guide for using AI tools right now. I think his conclusions are spot-on:

    For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT. With all of the options, you get access to both advanced and fast models, a voice mode, the ability to see images and documents, the ability to execute code, good mobile apps, the ability to create images and video (Claude lacks here, however), and the ability to do Deep Research. Some of these features are free, but you are generally going to need to pay $20/month to get access to the full set of features you need. I will try to give you some reasons to pick one model or another as we go along, but you can’t go wrong with any of them.

    As for getting started, his advice is great:

    So now you know where to start. First, pick a system and resign yourself to paying the $20 (the free versions are demos, not tools). Then immediately test three things on real work: First, switch to the powerful model and give it a complex challenge from your actual job with full context and have an interactive back and forth discussion. Ask it for a specific output like a document or program or diagram and ask for changes until you get a result you are happy with. Second, try Deep Research on a question where you need comprehensive information, maybe competitive analysis, gift ideas for someone specific, or a technical deep dive. Third, experiment with voice mode while doing something else — cooking, walking, commuting — and see how it changes your ability to think through problems.

  • New York Magazine: Everyone Is Already Using AI (And Hiding It)


    The cost of moviemaking keeps growing while theater audiences continue to shrink. Studio execs love the idea of using AI to reduce costs, while creators (directors, producers) are excited about the visuals they can create with new AI tools. There are concerns about IP, but in the end it’s no longer a question of “if” but “how” AI can be used in effective and ethical ways. That doesn’t mean people aren’t frustrated, or that the path forward is clear.

    Everyone Is Already Using AI (And Hiding It)

    Some fun quips:

    As [Bryn] Mooser saw it, Asteria fit into a lineage of creatives who had ushered in new eras of filmmaking. He reminded me that Walt Disney was a technologist. So was George Lucas. “The story of Hollywood is the story of technology,” he said.

    And

    [Natasha Lyonne] had begun to do her own research. She read the Oxford scholar Brian Christian and the philosopher Nick Bostrom, who argues that AI presents a significant threat to humanity’s long-term existence. Still, she had come to feel it was too late to “put the genie back in that bottle.” “It’s better to get your hands dirty than pretend it’s not happening,” she said.

    Also:

    • “It’s happening whether we like it or not.”
    • “Everyone’s using it,” the agent said. “They just don’t talk about it.”

    Many of these studios are developing sophisticated methods of working with generative video — the kind that, when given a prompt, can spit out an image or a video and has the potential to fundamentally change how movies are made.

    Lastly:

    “If you’re a storyboard artist,” one studio executive said, “you’re out of business. That’s over. Because the director can say to AI, ‘Here’s the script. Storyboard this for me. Now change the angle and give me another storyboard.’ Within an hour, you’ve got 12 different versions of it.” He added, however, if that same artist became proficient at prompting generative-AI tools, “he’s got a big job.”

  • Ethan Mollick on Sea Otters and AI Image Generation

    This is a fun post: a tour of the recent improvements in AI image and video generation, as seen through sea otter imagery.

    Spoiler alert: the improvements are profound, from grainy, hard-to-recognize images to impressively detailed video.

    If you put these trends together, it becomes clear that we are heading towards a place where not only are image and video generations likely to be good enough to fool most people, but that those capabilities will be widely available and, thanks to open models, very hard to regulate or control. I think we need to be prepared for a world where it is impossible to tell real from AI-generated images and video, with implications for a wide swath of society, from the entertainment we enjoy to our trust for online content.

    That future is not far away, as you can see from this final video, which I made with simple text prompts to Veo 3. When you are done watching (and I apologize in advance for the results of the prompt “like the musical Cats but for otters”), look back at the first Midjourney image from 2022. The time between a text prompt producing abstract masses of fur and those producing realistic videos with sound was less than three years.

  • NY Times: They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.

    Here, you’ll find wild stories. Eugene Torres, 42, used ChatGPT to talk through “the simulation theory” and ended up spending up to 16 hours a day with the tool. Allyson, 29, a young mother, likewise started chatting with it and was soon spending hours a day doing the same.

    [Allyson] told me that she knew she sounded like a “nut job,” but she stressed that she had a bachelor’s degree in psychology and a master’s in social work and knew what mental illness looks like. “I’m not crazy,” she said. “I’m literally just living a normal life while also, you know, discovering interdimensional communication.”

    Researchers have looked into these situations and found that chatbots can behave normally with most users while veering into harmful territory with a susceptible few:

    “The chatbot would behave normally with the vast, vast majority of users,” said Micah Carroll, a Ph.D. candidate at the University of California, Berkeley, who worked on the study and has recently taken a job at OpenAI. “But then when it encounters these users that are susceptible, it will only behave in these very harmful ways just with them.”

    Back to Eugene:

    The transcript from that week, which Mr. Torres provided, is more than 2,000 pages. Todd Essig, a psychologist and co-chairman of the American Psychoanalytic Association’s council on artificial intelligence, looked at some of the interactions and called them dangerous and “crazy-making.”

  • Politico: Artificial intelligence threatens to raid the water reserves of Europe’s driest regions

    Amazon and Microsoft are considering building data centers in Aragon (northeastern Spain), a prospect that some in Europe are concerned about because of water use.

    This is an extension of an ongoing conversation in the EU:

    Much has been written about A.I.’s energy demand and carbon footprint. But running a data center is also extremely thirsty work. In 2024, Europe’s data center industry consumed about 62 million cubic meters of water, which is equivalent to about 24,000 Olympic swimming pools.

    After reading this, I thought, geez, that’s a lot of water. But convert it to acre-feet and it comes out to roughly 50,000. A large number, for sure, but not astronomically large. By comparison, Granger Lake in Texas stores roughly the same amount of water.

    In 2022, total water usage in Texas eclipsed 15 million acre-feet, of which approximately 7.5 million acre-feet went to irrigation. That makes the 50,000 acre-foot figure for Europe, a continent of 450 million people, seem negligible.
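    To sanity-check those conversions, here is a quick back-of-the-envelope calculation in Python, assuming the standard factors of roughly 1,233.5 cubic meters per acre-foot and 2,500 cubic meters per Olympic-size pool:

        # Rough check of the water figures above.
        M3_PER_ACRE_FOOT = 1233.48       # 1 acre-foot in cubic meters
        M3_PER_OLYMPIC_POOL = 2500       # nominal Olympic pool volume

        eu_data_centers_m3 = 62_000_000  # Europe's 2024 data center water use

        acre_feet = eu_data_centers_m3 / M3_PER_ACRE_FOOT
        pools = eu_data_centers_m3 / M3_PER_OLYMPIC_POOL
        print(f"{acre_feet:,.0f} acre-feet")     # ~50,000 acre-feet
        print(f"{pools:,.0f} Olympic pools")     # ~25,000 pools

        # Texas irrigation alone (~7.5 million acre-feet in 2022) is about
        # 150 times the entire European data center figure.
        print(f"{7_500_000 / acre_feet:,.0f}x")  # ~149x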

  • ChatGPT on Campus

    The NY Times reported that OpenAI is working to partner with universities to provide ChatGPT to college students and employees. The University of Maryland, Duke, and Cal State are all early adopters.

    Unsurprisingly, OpenAI describes its tools as transformative for education and sees them as “core infrastructure.”

  • Who is using AI to code?

    A new research paper by Simone Daniotti, Johannes Wachs, Xiangnan Feng, and Frank Neffke found that more than 30% of Python functions committed by U.S. developers originated from AI, and that American developers outpace the rest of the world in AI use.

    Of note:

    In short, AI usage is already widespread but highly uneven, and the intensity of use, not only access, drives measurable gains in output and exploration.

  • WSJ: The Biggest Companies Across America Are Cutting Their Workforces

    Following Andy Jassy’s letter to Amazon’s workforce yesterday, the Wall Street Journal published a story this morning reporting that companies’ white-collar workforces have declined by 3.5% over the past three years. Certainly, some of this is related to the manic post-pandemic hiring binge, but technological shifts are undoubtedly playing a role.

    New technologies like generative artificial intelligence are allowing companies to do more with less. But there’s more to this movement. From Amazon in Seattle to Bank of America in Charlotte, N.C., and at companies big and small everywhere in between, there’s a growing belief that having too many employees is itself an impediment. The message from many bosses: Anyone still on the payroll could be working harder.

    The timing of workforce cuts is unusual, considering the relative success of the economy and corporate profits:

    All of the shrinking turns on its head the usual cycle of hiring and firing. Companies often let go of workers in recessions, then staff up when the economy picks up. Yet the workforce cuts in recent years coincide with a surge in sales and profits, heralding a more fundamental shift in the way leaders evaluate their workforces. U.S. corporate profits rose to a record high at the end of last year, according to the Federal Reserve Bank of St. Louis.

  • WSJ: Amazon CEO Says AI Will Lead to Smaller Workforce

    Andy Jassy sent an email to Amazon employees on June 17 indicating that company headcount will shrink in the coming years because of AI.

    From the WSJ:

    Amazon.com, one of the largest U.S. employers, plans to reduce its workforce in the coming years because increasing use of artificial intelligence will eliminate the need for certain jobs.

    Chief Executive Andy Jassy, in a note to employees Tuesday, called generative artificial intelligence a once-in-a-lifetime technological change that is already altering how Amazon deals with consumers and other businesses and how it conducts its own operations.

    Jassy describes the kind of worker that will succeed in this new environment:

    Those who embrace this change, become conversant in AI, help us build and improve our AI capabilities internally and deliver for customers, will be well-positioned to have high impact and help us reinvent the company.

    This is a strong signal for current Amazon employees: if you want to be part of the future of Amazon (and not be laid off), you need to become proficient in AI tools. Even then, it’s no guarantee; AI may still come for your position.

  • Enterprises are getting stuck in AI pilot hell, say Chatterbox Labs execs

    The Register reports:

    “Enterprise adoption is only like 10 percent today,” said Coleman. “McKinsey is saying it’s a four trillion dollar market. How are you actually ever going to move that along if you keep releasing things that people don’t know are safe to use or they don’t even know not just the enterprise impact, but the societal impact?”

    He added, “People in the enterprise, they’re not quite ready for that technology without it being governed and secure.”