Blog

  • Google’s NotebookLM Now Lets You Share AI-Powered Notebooks With a Link

    Google’s AI research notebook, NotebookLM, just got a lot more collaborative. You can now share any notebook publicly with a simple link—no sign-in or permissions required.

A co-worker recently shared one of his NotebookLM creations. It’s hard to overstate how real the voices sounded. This could be an incredible tool for anyone who travels, rides public transportation, or perhaps walks across a college campus to class.

    https://www.maginative.com/article/googles-notebooklm-now-lets-you-share-ai-powered-notebooks-with-a-link

  • ChatGPT gets greedy

  • Meta Aims to Fully Automate Ad Creation Using AI

Zuckerberg recently declared that we’ll all have AI friends (lots of them, in fact), and now Meta is working on replacing designers with AI tools:

    The social-media company aims to enable brands to fully create and target ads using artificial intelligence by the end of next year, according to people familiar with the matter.

But with all of the data that Facebook has about people, the ads could be personalized and rather interesting:

    Meta also plans to enable advertisers to personalize ads using AI, so that users see different versions of the same ad in real time, based on factors such as geolocation, the people said. A person seeing an advertisement for a car in a snowy place, for example, might see the car driving up a mountain, whereas a person seeing an ad for that same car in an urban area would see it driving on a city street.

    https://www.wsj.com/tech/ai/meta-aims-to-fully-automate-ad-creation-using-ai-7d82e249

  • There’s a Link Between Therapy Culture and Childlessness

A recent NYTimes essay by Michal Leibowitz explores rising childlessness and starts by mentioning a number of commonly postulated factors, like climate change. But then the twist:

    I suspect there’s some truth in all of these explanations. But I think there’s another reason, too, one that’s often been overlooked. Over the past few decades, Americans have redefined “harm,” “abuse,” “neglect” and “trauma,” expanding those categories to include emotional and relational struggles that were previously considered unavoidable parts of life. Adult children seem increasingly likely to publicly, even righteously, cut off contact with a parent, sometimes citing emotional, physical or sexual abuse they experienced in childhood and sometimes things like clashing values, parental toxicity or feeling misunderstood or unsupported.

    This cultural shift has contributed to a new, nearly impossible standard for parenting. Not only must parents provide shelter, food, safety and love, but we, their children, also expect them to get us started on successful careers and even to hold themselves accountable for our mental health and happiness well into our adult years.

    And

    A result of these changes is that parenthood looks more like a bad deal. For much of history, parent-child relationships were characterized by mutual duties, says Stephanie Coontz, the director of research and public education for the Council on Contemporary Families. Parental duties might include things like feeding and clothing their children, disciplining them and educating them in the tasks and skills they would need in adulthood. Children, in turn, had duties to their parents: to honor and defer to them, to help provide for the family or household, to provide grandchildren.

    Today, parents still have obligations to their children. But it seems the children’s duties have become optional. “With parents and adult children today, the adult child feels like, ‘If you failed me in your responsibility as a parent’ — in ways, of course, that are increasingly hard to define—‘then I owe you nothing as an adult child,’” says Dr. Coleman.

    https://www.nytimes.com/2025/05/30/opinion/therapy-estrangement-childless-millennials.html

  • AI’s positive effect on education

  • For Some Recent Graduates, the A.I. Job Apocalypse May Already Be Here

    “There are signs that entry-level positions are being displaced by artificial intelligence at higher rates,” the firm wrote in a recent report.

    And

    One tech executive recently told me his company had stopped hiring anything below an L5 software engineer — a midlevel title typically given to programmers with three to seven years of experience — because lower-level tasks could now be done by A.I. coding tools. Another told me that his start-up now employed a single data scientist to do the kinds of tasks that required a team of 75 people at his previous company.

For companies, the idea of replacing people with cheap tools is certainly appealing, particularly in a time of economic uncertainty.

    “This is something I’m hearing about left and right,” said Molly Kinder, a fellow at the Brookings Institution, a public policy think tank, who studies the impact of A.I. on workers. “Employers are saying, ‘These tools are so good that I no longer need marketing analysts, finance analysts and research assistants.’”

I wonder, though: if companies stop hiring entry-level employees, what happens to the talent pipeline? How do you get L5 (and higher) employees if you’re not hiring and developing younger ones?

    https://www.nytimes.com/2025/05/30/technology/ai-jobs-college-graduates.html

  • Sam Altman and Sridhar Ramaswamy Say the Quiet Part Out Loud About Enterprise AI

    Altman’s blunt advice? Stop hesitating. “The companies that have the quickest iteration speed and make the cost of making mistakes low—those are the ones that win.” Ramaswamy agreed, adding that curiosity, not caution, is the more valuable trait right now. “A lot of what we assumed about how things work just doesn’t hold anymore,” he said.

    And

    It’s a response that reveals how seriously they take the possibility of AI-driven scientific discovery. Both leaders expect next year will mark another inflection point where companies can assign their most critical problems to AI systems with massive computational resources.

    https://www.maginative.com/article/sam-altman-and-sridhar-ramaswamy-say-the-quiet-part-out-loud-about-enterprise-ai

  • Is AI Stealing Jobs? This Hiring Analyst Says Yes

    “Looking at three years’ worth of job listings, Munyikwa found that the share of listings that include job duties that AI can do has already slipped by 19 percent. Deeper analysis, Business Insider reported, showed the sharp fall in certain types of job listings means companies are hiring fewer people for roles AI can do instead.”

    https://www.inc.com/kit-eaton/is-ai-stealing-jobs-this-hiring-analyst-says-yes/91197705

  • Democrats set out to study young men. Here are their findings.

    The prospectus for the two-year project, Speaking with American Men, was reviewed by the New York Times:

    The prospectus for one new $20 million effort, obtained by The Times, aims to reverse the erosion of Democratic support among young men, especially online. It is code-named SAM — short for “Speaking with American Men: A Strategic Plan” — and promises investment to “study the syntax, language and content that gains attention and virality in these spaces.” It recommends buying advertisements in video games, among other things.

Ilyse Hogue, a co-founder of the project, talked about the importance of listening and of using “language that young men are speaking.” From Politico:

Hogue said part of SAM’s mission is “super charg[ing] social listening” and [supporting] progressive influencers on Discord, Twitch and other platforms in their fundraising proposal. They’re urging Democratic candidates to use non-traditional digital advertising, especially on YouTube, in-game digital ads and sports and gaming podcasts.

    “Democrats can’t win these folks over if they’re not speaking the language that young men are speaking,” Hogue said. “Most people I talked to, Democratic operatives, have never heard of Red Pill Fitness, which is just huge online.”

    Language and advertising are important, for sure, but it’s hard to believe that these tactics alone would stem the tide.

  • Differences in link hallucination and source comprehension across different large language models

    Mike Caulfield explores the problem of hallucinated links:

    If I am being harsh here it’s because we constantly hear — based on ridiculously dumb benchmarks — that all these models are performing at “graduate level” of one sort or another. They are not, at least out of the box like this. Imagine giving a medical school student this question, and they say — yes the thing that says in the actual conclusion that the lack of sustained differences is probably due to people stopping their medication is proof that medication doesn’t work (scroll to bottom of this screenshot to see). Never mind that in the results it states quite clearly that all groups saw improvement over baseline.

    https://mikecaulfield.substack.com/p/differences-in-link-hallucination