- Maginative: Figma taps Google’s Gemini for Faster, Enterprise-Ready AI Inside its Design Platform (Oct 9, 2025)
Integrations will enhance image generation and editing within Figma and help with enterprise governance, allowing admins to control AI feature access and data usage for model training.
- WSJ: Exclusive | Microsoft Tries to Catch Up in AI With Healthcare Push, Harvard Deal (Oct 8, 2025)
Microsoft aims to become a leading AI chatbot provider, reducing its reliance on OpenAI by focusing on healthcare applications for its Copilot assistant. This update, developed in collaboration with Harvard Medical School, will offer more credible health information, and Microsoft is developing tools to help users find healthcare providers.
- Google: Introducing the Gemini 2.5 Computer Use model (Oct 7, 2025)
The new model empowers agents to interact directly with user interfaces for tasks like filling forms and navigating web pages. The possibilities are immense, but software testing seems like a great candidate for tools like this (a rough sketch of such an agent loop follows this list).
- NY Times: What the Arrival of A.I. Video Generators Like Sora Means for Us (Oct 9, 2025)
Sora has become so realistic that it undermines the reliability of video as proof of events. It’s simply difficult to distinguish between real and fake videos.
- WSJ Opinion: AI and the Fountain of Youth (Oct 8, 2025)
AI is accelerating drug development, analyzing medical data, and improving diagnostics, potentially leading to longer, healthier lives. “Thanks to AI, the process of identifying and developing new drugs, once a decade long slog, is being compressed into months.”
- WSJ Opinion: I’ve Seen How AI ‘Thinks.’ I Wish Everyone Could. (Oct 9, 2025)
Understanding how AI models function, including their training data and mathematical structure, is crucial, especially as AI increasingly impacts human endeavors like writing and art.
- WSJ: AI Investors Are Chasing a Big Prize. Here’s What Can Go Wrong. (Oct 5, 2025)
Investing in AI is risky due to the high costs, uncertain timelines, and potential for competition. I’d argue that these risks are present in almost any investment decision.
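To make the software-testing idea concrete, here is a minimal sketch of the observe-decide-act loop that computer-use agents run, assuming Playwright as the browser driver. It is illustrative only: propose_next_action is a hypothetical stand-in for the model call (the real Gemini 2.5 Computer Use API returns structured UI actions), and the URL, selectors, and test goal are placeholders I made up.

```python
# Illustrative sketch of a "computer use" agent loop applied to UI testing:
# observe the page, ask a model for the next action, execute it, repeat.
# propose_next_action is a hypothetical stand-in for a call to a computer-use
# model (e.g., Gemini 2.5 Computer Use); it is NOT that API. The URL and CSS
# selectors below are placeholders for a real app under test.
from dataclasses import dataclass
from playwright.sync_api import sync_playwright


@dataclass
class Action:
    kind: str            # "fill", "click", or "done"
    selector: str = ""   # CSS selector to act on
    text: str = ""       # text to type, if any


def propose_next_action(screenshot: bytes, goal: str, step: int) -> Action:
    """Hypothetical model call. A real implementation would send the screenshot
    and the test goal to a computer-use model and parse its structured action
    output; here a canned sequence keeps the sketch self-contained."""
    canned = [
        Action("fill", "#username", "test-user"),
        Action("fill", "#password", "not-a-real-password"),
        Action("click", "#submit"),
        Action("done"),
    ]
    return canned[min(step, len(canned) - 1)]


def run_ui_test(url: str, goal: str, max_steps: int = 10) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        for step in range(max_steps):
            screenshot = page.screenshot()                 # observe current UI state
            action = propose_next_action(screenshot, goal, step)
            if action.kind == "done":
                break
            if action.kind == "click":
                page.click(action.selector)                # act on the page
            elif action.kind == "fill":
                page.fill(action.selector, action.text)
        browser.close()


if __name__ == "__main__":
    # Placeholder target and goal; point these at a real app to try the loop.
    run_ui_test("https://your-app.example/login", "log in with the test account")
```

The same loop, with the canned sequence swapped for a real model call, is what makes these agents plausible regression testers: the "test script" becomes a goal in plain English rather than a brittle list of selectors.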
-
Friday Links (Oct. 10)
-
AI Robot Massage
WSJ: I Pitted an AI Robot Massage Against the Real Thing (July 7, 2025)
The Aescape massage robot has significant limitations compared to the human equivalent (specifically in working on the neck and head), and it has far fewer AI chops than the marketing suggests.
WSJ columnist Dawn Gilbertson:
The robot can’t reach two areas that are most enjoyable for me, the head and neck. And, in this particular case, I had a wicked stiff neck that needed attention.
-
WSJ: CEOs Start Saying the Quiet Part Out Loud: AI Will Wipe Out Jobs
Analysts have been seeing structural changes in the job market related to AI, and now CEOs are admitting it openly. Ford CEO Jim Farley suggests that 50% of white-collar jobs will be trimmed. JPMorgan executive Marianne Lake also sees a 10% drop in headcount.
“I think it’s going to destroy way more jobs than the average person thinks,” James Reinhart, CEO of the online resale site ThredUp, said at an investor conference in June.
While Microsoft’s CEO isn’t publicly declaring that AI will cause job losses, the company did announce another reduction this month, bringing its recent layoffs to around 15,000 people.
-
WSJ: How a Bold Plan to Ban State AI Laws Fell Apart—and Divided Trumpworld
As I noted last week, Congressional efforts to block state AI laws in the Big Beautiful Bill lost support, and the provision was ultimately dropped from the Senate bill by a close vote of 99-1.
-
Douthat: Conservatives Are Prisoners of Their Own Tax Cuts
As a parent of three, I find that point number 2 in Douthat’s opinion piece resonates with me:
Second (in the voice of a social conservative), the law doesn’t do enough for family and fertility. No problem shadows the world right now like demographic collapse, and while the United States is better off than many countries, the birthrate has fallen well below replacement levels here as well. Family policy can’t reverse these trends, but public support for parents can make an important difference. Yet the law’s extension of the child tax credit leaves it below the inflation-adjusted level established in Trump’s first term.
One of the odd parts of political haggling is how loud a few voices can be, particularly those pushing tax deductions for high earners in high-tax states. (Yes, the SALT deduction.) It’s a small group of high earners in a small number of states, yet they’ve managed to be squeaky enough to expand the deduction cap from $10k to $40k. Well done on the lobbying!
From Claude:
Expanding SALT deductions would primarily benefit upper-middle-class and wealthy taxpayers earning $100,000+ annually, particularly those in high-tax states like California, New York, New Jersey, and Connecticut, who own expensive homes and face high state and local tax burdens. The benefits become increasingly concentrated among the highest earners, with the top 1% receiving disproportionate benefits from any expansion.
Back to the child tax credit itself. At $2,200, it represents an expansion, but it remains well below the original law’s level in inflation-adjusted dollars. So it seems that Congress cares more about a handful of high-income earners than it does about a large (and important) swath of the country: parents.
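A quick back-of-envelope calculation makes the imbalance concrete. The sketch below assumes a 35% marginal federal rate for an itemizer who hits the SALT cap and roughly 30% cumulative inflation since the 2017 law set the credit at $2,000; both figures are my assumptions, not numbers from the bill.

```python
# Back-of-envelope comparison (assumptions: 35% marginal rate, ~30% inflation since 2017).
old_salt_cap, new_salt_cap = 10_000, 40_000
marginal_rate = 0.35                         # assumed marginal federal rate for an itemizer

salt_benefit = (new_salt_cap - old_salt_cap) * marginal_rate
print(f"Extra annual tax savings from the SALT cap increase: ${salt_benefit:,.0f}")
# -> roughly $10,500 per year for a household with at least $40k of state/local taxes

ctc_2017, ctc_new, inflation = 2_000, 2_200, 0.30
ctc_inflation_adjusted = ctc_2017 * (1 + inflation)
print(f"2017 credit in today's dollars: ~${ctc_inflation_adjusted:,.0f}")
print(f"New credit: ${ctc_new:,.0f}  (shortfall: ~${ctc_inflation_adjusted - ctc_new:,.0f} per child)")
```

Under those assumptions, the SALT expansion is worth roughly $10,000 a year to an affected household, while the credit for a child falls about $400 short of simply keeping pace with inflation.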
-
Maginative: Microsoft’s MAI-DxO Crushes Doctors at Medical Diagnosis while Cutting Costs
Maginative reports on Microsoft’s new AI Diagnostic Orchestrator (MAI-DxO) and how it outperformed doctors in a recent study. (As an aside, I always wonder about reports that use words like “crush” in the title. Beware of hyperbole!)
From the report’s abstract, you’ll find exciting results:
When paired with OpenAI’s o3 model, MAI-DxO achieves 80% diagnostic accuracy—four times higher than the 20% average of generalist physicians. MAI-DxO also reduces diagnostic costs by 20% compared to physicians, and 70% compared to off-the-shelf o3.
A 4x improvement in diagnostic accuracy. This is transformative stuff.
But consider the experimental setup:
Physicians were explicitly instructed not to use external resources, including search engines (e.g., Google, Bing), language models (e.g., ChatGPT, Gemini, Copilot, etc), or other online sources of medical information.
Now the results don’t seem quite so impressive. In fact, given restrictions this extreme and so far removed from real-world practice, I have a hard time seeing how the report has much utility.