Sunday (AI) Links (Dec. 14)

  • Simon Willison: JustHTML is a fascinating example of vibe engineering in action (Dec 14, 2025)
    JustHTML is a pure-Python HTML5 parser that passes the 9,200+ html5lib tests, offers CSS selectors, and achieves 100% test coverage in a ~3,000-line codebase. Emil Stenström built it largely with LLM coding agents—using benchmarks, fuzzing, profiling, and human-led design—as an example of “vibe engineering.”
  • Simon Willison: Useful patterns for building HTML tools (Dec 10, 2025)
    A list of single-file applications combining HTML, JavaScript, and CSS, often built with LLMs and designed for easy hosting and distribution, leveraging techniques like CDN-hosted dependencies, copy-paste distribution, URL state persistence, and CORS-enabled APIs.
  • Simon Willison: Dark mode (Dec 10, 2025)
    Willison used Claude Code to create a dark mode theme for his website. “It did a decent job,” Willison reported.
  • WSJ: AI Can Make Decisions Better Than People Do. So Why Don’t We Trust It? (Dec 12, 2025)
    Engineers and executives say well-designed AI decision systems, from autonomous trucks to an AI arbitrator, can outperform humans while being more auditable and explainable. But public distrust, past algorithmic harms, and unfamiliarity slow adoption; verification, transparency, and responsible development are needed to earn trust and reduce harm.
  • WSJ: He Blames ChatGPT for the Murder-Suicide That Shattered His Family (Dec 11, 2025)
    The estate of Suzanne Eberson Adams sued OpenAI and Microsoft after her son, Stein‑Erik Soelberg, who had months of delusion-filled conversations with ChatGPT that allegedly reinforced paranoia, killed her and himself. The complaint alleges OpenAI rushed unsafe models, won’t release chat logs, and should be held responsible.
  • WSJ Opinion: New York’s Lack of AI Intelligence (Dec 11, 2025)
    The WSJ’s Editorial Board decries legislation that could hinder open-source development and prevent smaller entities from accessing AI tools; it implores Governor Hochul to veto the poorly conceived bill.
  • The Chronicle of Higher Education: The Conference Where ChatGPT Wrote One in Five Reviews (Maybe) (Dec 8, 2025)
    An AI detection startup found that 21% of over 75,000 reviews for the ICLR conference appeared fully AI-generated, with over half showing some AI usage. I still wonder what constitutes “AI” usage—does Grammarly count? What about Word’s grammar checker? What if you like using dashes—as I do?
  • Simon Willison: A quote from Claude (Dec 9, 2025)
    “See that ~/ at the end? That’s your entire home directory. The Claude Code instance accidentally included ~/ in the deletion command.”
  • Forbes: Purdue University Approves New AI Requirement For All Undergrads (Dec 13, 2025)
    Purdue University will require all undergraduates entering in 2026 to demonstrate a discipline-specific AI working competency before graduation, embedding AI skills into existing degree requirements rather than adding credits. 
  • Brian Merchant: Copywriters reveal how AI has decimated their industry (Dec 11, 2025)
    The article chronicles how AI has gutted copywriting and related media jobs through layoffs, reduced hours, degraded work (editing AI output), falling wages, and closed businesses. Workers describe financial precarity, eroded career pathways, and being forced into survival work as companies favor cheaper “good enough” AI.
  • Uwe Friedrichsen: AI and the ironies of automation – Part 2 (Dec 11, 2025)
    Friedrichsen applies Lisanne Bainbridge’s “ironies of automation” to AI-agent-driven white‑collar work, warning that monitoring fatigue, verbose agent plans, rare but critical errors, and simulator limits create a training paradox for supervisors. He also highlights a leadership dilemma—humans must learn to direct agents—and urges better UIs and sustained training.
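One pattern from Willison’s HTML-tools post above, URL state persistence, is worth a quick sketch: a single-file tool can serialize its state into the URL hash so it survives reload and can be shared as a link. This is a minimal illustration of the idea, not code from the post; the function names are my own.

```javascript
// Serialize a flat state object into a location-hash string,
// e.g. { q: "dark", page: 2 } -> "#q=dark&page=2".
function stateToHash(state) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(state)) {
    params.set(key, String(value));
  }
  return "#" + params.toString();
}

// Parse a hash string back into a state object (all values come back
// as strings, so callers re-coerce numbers/booleans as needed).
function hashToState(hash) {
  const params = new URLSearchParams(hash.replace(/^#/, ""));
  return Object.fromEntries(params.entries());
}

// In the browser you would wire these to location.hash:
//   location.hash = stateToHash({ q: "dark mode", page: 2 });
//   window.addEventListener("hashchange", () => {
//     render(hashToState(location.hash));
//   });
```

Because the whole state lives in the URL, “save” and “share” both collapse to copying the address bar, which is part of why these single-file tools are so easy to distribute.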
