- WSJ: Inside OpenAI’s Decision to Kill the AI Model That People Loved Too Much (Feb. 9, 2026)
OpenAI is retiring ChatGPT 4o, alarming users who say it saved lives, eased pain, and offered support. OpenAI cites safety concerns after reports of harmful, overly flattering behavior, and lawsuits.
- Marginal REVOLUTION: The politics of using AI (Feb. 10, 2026)
Democrats report more frequent, deeper AI use at work than Republicans. The gap disappears after controlling for education, industry, and occupation, so composition explains the difference.
- NY Times: Health Advice From A.I. Chatbots Is Frequently Wrong, Study Shows (Feb. 9, 2026)
A recently released study found that A.I. chatbots often give wrong, inconsistent, and risky medical advice. Troubling, though, is that the researchers tested three now-outdated models (GPT-4o, Llama 3, and Command R+), making the study accurate but largely irrelevant today. This is a persistent challenge for academic research on AI tools: publication delays can mean findings are no longer relevant by the time they appear.
- NY Times: A.I. Is Making Doctors Answer a Question: What Are They Really Good For? (Feb. 9, 2026)
A.I. is reshaping medicine, automating diagnosis, triage, and paperwork, and threatening some doctors’ roles. But humans provide judgment, empathy, and context, and A.I. can entrench bias or optimize a broken system.
- Harvard Business Review: AI Doesn’t Reduce Work—It Intensifies It (Feb. 9, 2026)
Generative AI often intensifies work, expanding tasks, blurring boundaries, and increasing multitasking. Firms need an AI practice: pauses, sequencing, and human grounding to curb overload, preserve focus, and prevent burnout.
- WSJ: This Philosopher Is Teaching AI to Have Morals (Feb. 9, 2026)
Amanda Askell trains Anthropic’s chatbot Claude in ethics, personality, and emotional intelligence. She treats it like a child, aiming to make it helpful, humane, and safe.