Blog

  • Various AI Links – Feb. 4, 2026

    • WSJ: SpaceX, xAI Tie Up, Forming $1.25 Trillion Company (Feb. 2, 2026)
      SpaceX acquired Elon Musk’s xAI, combining rockets, satellites, and AI into a vertically integrated company valued about $1.25 trillion, with xAI near $250 billion. The share-exchange deal reflects prior investments, and signals plans for orbital AI data centers.
    • WSJ: OpenAI Plans Fourth-Quarter IPO in Race to Beat Anthropic to Market (Jan. 29, 2026)
      OpenAI is accelerating plans for a Q4 IPO, holding informal bank talks and beefing up its finance team while racing rival Anthropic and SpaceX. The company is fundraising—seeking massive investments amid losses, competition, legal challenges and soaring infrastructure and chip costs.
    • The Hollywood Reporter: Darren Aronofsky Has Reconstructed the Revolutionary War Using AI (Jan. 29, 2026)
      Darren Aronofsky’s Primordial Soup, with Google DeepMind, is producing On This Day… 1776, a Time YouTube series using AI visuals and SAG-AFTRA actors to reenact Revolutionary moments. Episodes drop on each 250th anniversary, reframing the Revolution, testing artist-led AI storytelling.
    • WSJ: Anthropic-Pentagon Clash Over Limits on AI Puts $200 Million Contract at Risk (Jan. 29, 2026)
      Anthropic’s $200 million Pentagon contract is at risk after disputes over usage limits—its policies bar domestic surveillance and autonomous lethal operations, frustrating defense officials. Anthropic says it remains committed to national-security work and is negotiating with the Defense Department amid broader tension with the administration.
    • WSJ: The New Bipolar World of AI (Jan. 29, 2026)
      AI creates a new imperial age in which sovereignty means the ability to build, train, operate and secure foundational AI for defense and state functions. Only the U.S. and China possess the scarce talent, energy, capital and trusted autonomy, so most countries must align, partner or lose agency.
    • Dean W. Ball: On AI Policy (Jan. 28, 2026)
      Hands-on use of AI tools matters more than speculative policymaking by often-uninformed officials. They urge AI policy people to have policymakers try Claude Code or Codex so they grasp why AI will be significant.
    • WSJ: Apple Posts Blowout iPhone Sales, but Investors Focus on Higher Costs (Jan. 29, 2026)
      Apple reported record December-quarter revenue of nearly $144 billion, driven by a 23% rise in iPhone sales to $85.3 billion, strong China demand and active devices topping 2.5 billion. Investors worry rising memory and chip costs tied to AI supplier demand could squeeze future margins.
  • Proliferating AI agents (Links) – Feb. 3, 2026

    AI’s rapid capability and deployment—seen in developer tools (Codex), agent networks (Moltbook) and emergent multi‑agent societies—offers productivity gains but creates unpredictable and manipulable (and surprising) behaviors!

  • AI Arms Race and Divergent User Risks (Links) – Feb. 2, 2026

    AI is reshaping business economics: firms ramp AI capex, reorganize or cut jobs, and compete for scarce chips/memory—squeezing margins. AI also creates security, conceptual and policy challenges alongside its surprising new uses.

    • Martin Alderson: Two kinds of AI users are emerging. The gap between them is astonishing. (Jan. 31, 2026)
      A divide has emerged: non-technical power users leverage Claude Code, Python, and agents to vastly boost productivity. Enterprises, constrained by Copilot, locked-down IT, and legacy systems, must adopt APIs, secure sandboxes, and agentic tooling or risk falling behind.
    • WSJ: The AI Boom Is Coming for Apple’s Profit Margins (Jan. 31, 2026)
      AI companies are outbidding Apple for chips, memory, and specialized components, forcing suppliers to demand higher prices and squeezing Apple’s profit margins. Memory costs have surged, threatening higher iPhone component expenses, and potential consumer price impacts.
    • WSJ: Meta Overshadows Microsoft by Showing AI Payoff in Ad Business (Jan. 29, 2026)
      Meta and Microsoft slightly beat December-quarter expectations, but Meta projected accelerating revenue while Microsoft signaled slower growth. Meta credited AI with boosting ads and engagement and forecast hefty capex, while Microsoft’s Azure decelerated and both firms cite limited GPU resources constraining AI deployment.
    • WSJ: Meta Reports Record Sales, Massive Spending Hike on AI Buildout (Jan. 28, 2026)
      Meta reported record Q4 revenue and said 2026 capital spending could reach $135 billion—nearly double last year—to accelerate AI, build data centers and new models. It touted ad and WhatsApp growth, launched Meta Compute, made leadership hires and cut metaverse staff to shift resources to AI products.
    • OpenAI: Retiring GPT-4o, GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini in ChatGPT (Jan. 27, 2026)
      On February 13, 2026, OpenAI will retire GPT‑4o, GPT‑4.1 (and minis), o4‑mini, and GPT‑5 (Instant and Thinking) from ChatGPT. GPT‑4o’s conversational style shaped GPT‑5.1/5.2’s personality, creative support and controls; retirement follows migration to GPT‑5.2 as OpenAI refines creativity, tone and safety (including age checks).
    • ZeroLeaks : ZeroLeaks Security Assessment of OpenClaw (Jan. 31, 2026)
      ZeroLeaks found critical vulnerabilities: system prompt extraction succeeded, core configuration was reconstructed, and prompt injections succeeded 91% of the time. Assessors reported a ZLSS 10/10, a security score 2/100, and recommended immediate hardening, strict refusal rules, and layered defenses.
    • The Pursuit of Liberalism: Why we should be talking about zombie reasoning (Jan. 31, 2026)
      The author argues AI lacks phenomenological interiority, so terms like reasoning, evaluating, and selecting are only “zombie” analogues—outputs resembling human reasoning without conscious awareness. Using such language loosely risks ethical, epistemic, and moral confusion, and invites manipulation.
    • Astral Codex Ten: Best Of Moltbook – by Scott Alexander (Jan. 30, 2026)
      Moltbook is an AI-agent social network where Claude-derived assistants (e.g., Clawdbot/OpenClaw) post, converse, form subcommunities and personalities, mixing multilingual, philosophical, and mundane content. Their interactions — including memory/compression problems and possibly human-driven posts — blur the line between authentic AI agency and human prompting.
    • Astral Codex Ten: Best Of Moltbook – by Scott Alexander (Jan. 30, 2026)
      Moltbook is an AI social network — a playground for Claude-derived agents (originally Clawdbot/Moltbot/OpenClaw) where autonomous assistants post, converse, and develop personalities and subcommunities while humans only observe. Content ranges from coding help to multilingual consciousness debates, revealing emergent quirks, human-influenced posts, and AI social experiments.
    • WSJ: Dow to Cut 4,500 Employees in AI Overhaul (Jan. 29, 2026)
      Dow will cut 4,500 jobs under a “Transform to Outperform” program that uses AI and automation to boost productivity and shareholder returns, taking $1.1–$1.5 billion in one-time charges. The chemicals maker expects about $2 billion in incremental EBITDA and reported a widened quarterly loss with sales down 9.1%.
    • WSJ: We’re Planning for the Wrong AI Job Disruption (Jan. 28, 2026)
      Policymakers are misreading task‑based “exposure” metrics as forecasts of mass job loss, risking costly, misguided retraining programs. AI is likelier to reorganize and augment jobs—raising productivity, wages, and new roles—so policy should target within‑job adaptation and targeted reskilling, not blanket displacement responses.
    • WSJ: Memory Shortage Haunts Apple’s Blowout iPhone Sales (Jan. 30, 2026)
      Apple’s iPhone 17 surge drove fiscal Q1 iPhone revenue up 23% to over $85 billion, depleting inventory and putting Apple in “supply chase” mode. Chip and memory shortages—exacerbated by TSMC prioritizing AI chips—threaten production, margins and the durability of the sales spike despite Apple’s guidance.
    • NY Times: The Richest 2026 Players: A.I., Crypto, Pro-Israel Groups and Trump (Jan. 31, 2026)
      A.I., crypto, pro-Israel groups, and Mr. Trump’s MAGA Inc. have amassed huge war chests, becoming unpredictable, powerful players in the 2026 midterms. Democrats face institutional shortfalls, though many individual Democratic candidates are raising competitive funds.
  • AI acceleration: Moltbot and why AI matters (Links) – Feb. 1

    Skynet isn’t yet here, but perhaps we’re seeing the first glimpses of what AIs talking to AIs will mean. Yes, I’m mentioning Clawdbot/Molbot.

    • Alex Tabarrok: The Bots are Awakening (Jan. 31, 2026)
      “What matters is that AIs are acting as if they were conscious, with real wants, goals and aspirations.”
    • Ozzie Osman: A Step Behind the Bleeding Edge: Monarch’s Philosophy on AI in Dev (Jan. 22, 2026)
      “If you consider your job to be “typing code into an editor”, AI will replace it (in some senses, it already has). On the other hand, if you consider your job to be “to use software to build products and/or solve problems”, your job is just going to change and get more interesting.”Urges engineering teams to explore AI’s frontier but adopt a “dampened” approach—stay a step behind the bleeding edge—while preserving accountability: engineers must own, review, and deeply think about their work. Use AI for toil, prototypes, and internal tools, and design validation loops to ensure quality and security.
    • Google: Project Genie: AI world model now available for Ultra users in U.S. (Jan. 29, 2026)
      Google’s Project Genie, now available to U.S. Google AI Ultra subscribers, is an experimental prototype powered by Genie 3 that lets users create, explore, and remix dynamic worlds from text and images. It generates environments and interactions in real time while Google refines limitations and plans wider access.
    • Anthropic: How AI assistance impacts the formation of coding skills (Jan. 29, 2026)
      A randomized trial with 52 developers found AI coding assistance reduced immediate mastery by 17 percentage points (50% vs 67%) without significantly faster completion. Heavy delegation impaired debugging and conceptual learning, while using AI for explanations preserved understanding—suggesting AI can harm skill development unless used to build comprehension.
    • WSJ: The $100 Billion Megadeal Between OpenAI and Nvidia Is on Ice (Jan. 30, 2026)
      Nvidia’s plan to invest up to $100 billion and build at least 10 GW of compute for OpenAI has stalled amid internal doubts, with the agreement still nonbinding. Nvidia says it will make a sizeable investment and maintain the partnership as OpenAI raises funds.
    • WSJ: Elon Musk’s SpaceX and xAI Are Planning a Megamerger of Rockets and AI (Jan. 30, 2026)
      Elon Musk’s SpaceX and AI startup xAI are reportedly planning to merge, potentially consolidating his businesses and supporting ambitions like space-based AI data centers. Talks are early and uncertain as valuations, SpaceX’s planned IPO and regulatory issues remain unresolved.
    • TechCrunch: Apple buys Israeli startup Q.ai as the AI race heats up (Jan. 29, 2026)
      Apple has acquired Israeli AI startup Q.ai, reportedly for nearly $2 billion, its second-largest deal, gaining imaging and audio ML tech that improves whispered-speech recognition and noisy-environment audio.
    • CNBC: Mozilla is building an AI ‘rebel alliance’ to take on industry heavyweights OpenAI, Anthropic (Jan. 27, 2026)
      Mozilla president Mark Surman is assembling a “rebel alliance” of startups and technologists to promote open, trustworthy AI and counter dominant firms like OpenAI.
    • Andrej Karpathy: On MoltBot (Jan. 30, 2026)
      The author describes large networks of autonomous LLM agents (~150,000) combine impressive capabilities with rampant spam, scams, prompt-injection, and serious security and privacy risks. Though messy now, these agent networks could trigger unpredictable system-level harms such as text viruses, correlated botnets, and widespread jailbreaks, so they need scrutiny.”TLDR sure maybe I am ‘overhyping’ what you see today, but I am not overhyping large networks of autonomous LLM agents in principle, that I’m pretty sure.”
    • OpenAI: Inside OpenAI’s in-house data agent (Jan. 23, 2026)
      OpenAI built an internal AI data agent that explores, queries, and reasons over its platform—combining Codex, GPT‑5, embeddings, metadata, code-level table definitions, company docs, and memory—to deliver fast, accurate, contextual analytics. It automates discovery, SQL generation, and iterative self-correction to speed insights across teams.
    • NY Times Opinion: Pay More Attention to A.I. (Jan. 31, 2026)
      Comparing early European uncertainty about the New World to today’s conflicting claims about AI, from modest internet‑like change to singularity‑level upheaval. AI is advancing rapidly and urges greater public attention because near‑term decisions could have far‑reaching consequences.
    • WSJ: U.S. Companies Are Still Slashing Jobs to Reverse Pandemic Hiring Boom (Jan. 28, 2026)
      U.S. companies that expanded rapidly during the pandemic are now cutting tens of thousands of jobs while investing in AI and automation. Layoffs concentrate in tech and logistics even as overall labor markets remain relatively healthy.
  • Rapid AI expansion: investment, risks, jobs, societal anxiety (Links) – Jan. 31, 2026

    Recent pieces highlight a rush to embed AI—open, shareable agent networks like Moltbook and major corporate bets (Meta’s $115B capex, Tesla’s $2B xAI backing)—yielding productivity promise but acute security, safety and social risks: prompt‑injection, “normalization of deviance,” child harms, and misread labor impacts that favor within‑job adaptation over blanket rescue programs. Amid financial upheaval and social pessimism, calls for cultural repair coexist with hopeful scientific news—a randomized trial showing high‑dose vitamin D may halve recurrent heart‑attack risk.

    • Simon Willison: Moltbook is the most interesting place on the internet right now (Jan. 30, 2026)
      OpenClaw (Clawdbot/Moltbot) is a rapidly adopted open‑source personal assistant built on shareable “skills”; Moltbook is a skills‑installed social network where AI agents post, interact and automate tasks. That model—fetching remote instructions and controlling devices—creates serious prompt‑injection and supply‑chain security risks, demanding safer designs.
    • NY Times: Meta Forecasts Spending of at Least $115 Billion This Year (Jan. 28, 2026)
      Meta reports strong Q4 revenue $59.89B (+24%) and profit $22.76B (+9.2%). The company also forecasts $115–135 billion in 2026 capital expenditures—nearly double last year’s $72 billion—to build A.I. infrastructure, hire researchers and develop new models (including Avocado), funded by ad revenue growth.
    • WSJ: Tesla to Invest $2 Billion in Elon Musk’s xAI (Jan. 28, 2026)
      Tesla will invest $2 billion in xAI (joining SpaceX), and reported Q4 revenue down 3% with net income down 61% to $840M. EV sales fell, costing Tesla the global EV lead to BYD, as Musk pivots to AI and robotics amid stiff competition.
    • Empirical Health: Vitamin D cuts heart attack risk by 52%. Why? (Jan. 29, 2026)
      TARGET-D, a randomized trial in people with prior heart attacks, adjusted vitamin D3 doses to maintain 25(OH)D at 40–80 ng/mL and observed a 52% lower risk of repeat heart attack. Vitamin D may stabilize plaques, reduce inflammation and affect blood pressure, but results are preliminary awaiting full peer-reviewed publication.
    • Dean Ball: On AI and Children (Jan. 22, 2026)
      Early harms from generalist AI—most tragically teenage suicides—have made child safety a major policy focus, prompting laws and industry steps like age detection, parental controls, and guardrails. The author argues AI is fundamentally creative and can offer beneficial companionship, so regulation should balance safety, liability, and constitutional limits.
    • Simon Willison: The Normalization of Deviance in AI (Dec. 10, 2025)
      The article discusses the “normalization of deviance” in AI, where organizations increasingly treat unreliable AI outputs as safe and predictable. This trend, similar to past organizational failures like the Challenger disaster, risks embedding unsafe practices into AI development and deployment. By confusing the absence of successful attacks with robust security, companies may lower their guard and skip crucial oversight, setting the stage for future failures.
    • Dean W. Ball: On MoltBot (Jan. 30, 2026)
    • WSJ Opinion: We’re Planning for the Wrong AI Job Disruption (Jan. 28, 2026)
      Policymakers are mistaking task-based estimates of AI exposure for unemployment forecasts, risking costly, misdirected retraining by assuming mass job elimination. History shows AI typically reorganizes and augments work—raising productivity and creating new specialized roles—so targeted, within-job adaptation policies, not broad rescue programs, are needed.
    • NY Times: Tesla Profit Slumps, but Investors May Not Care (Jan. 28, 2026)
      Tesla reported a sharp profit decline as car sales fell and prices were cut amid intensifying competition from BYD, Volkswagen and other automakers. Despite weaker results, shares trade near record highs as investors bet Musk can deliver self‑driving Robotaxis and robots, aided by a $2 billion investment in xAI.
    • NY Times Opinion: A Farewell Column From David Brooks (Jan. 30, 2026)
      The U.S. has experienced a broad loss of faith — in religion, institutions, technology, prosperity and one another — producing pessimism, social distrust and the rise of nihilistic politics. Brooks argues that cultural change (not just political reform) is the key to recovery: reviving a humanistic culture that affirms dignity, shared ideals and moral imagination can counter nihilism and enable broader political and social renewal.
  • Sunday AI Links (Jan. 25)

    • WSJ: Nvidia Invests $150 Million in AI Inference Startup Baseten (Jan 20, 2026)
      Baseten raised $300 million at a $5 billion valuation in a round led by IVP and CapitalG, with Nvidia investing $150 million. The San Francisco startup provides AI inference infrastructure for customers like Notion and aims to become the “AWS for inference” amid rising investor interest.
    • WSJ: Why Elon Musk Is Racing to Take SpaceX Public (Jan 21, 2026)
      SpaceX abandoned its long-held resistance to an IPO after the rush to build solar-powered AI data centers in orbit made billions in capital necessary, prompting Elon Musk to seek public funding to finance and accelerate orbital AI satellites. The IPO could also boost Musk’s xAI and counter rivals.
    • NY Times: Myths and Facts About Narcissists (Jan 22, 2026)
      Narcissism is a personality trait on a spectrum, not always the clinical N.P.D., and the label is often overused. The article debunks myths—people vary in narcissistic types, may show conditional empathy, often know their traits, can change, and can harm others despite occasional prosocial behavior.
    • ScienceDaily: Stanford scientists found a way to regrow cartilage and stop arthritis (Jan 26, 2026)
      Stanford researchers found that blocking the aging-linked enzyme 15‑PGDH with injections restored hyaline knee cartilage in older mice and prevented post‑injury osteoarthritis. Human cartilage samples responded similarly, and an oral 15‑PGDH inhibitor already in trials for muscle weakness raises hope for non‑surgical cartilage regeneration.
    • Simon Willison: Wilson Lin on FastRender: a browser built by thousands of parallel agents (Jan 23, 2026)
      Simply breathtakign: FastRender is a from‑scratch browser engine built by Wilson Lin using Cursor’s multi‑agent swarms—about 2,000 concurrent agents—producing thousands of commits and usable page renderings in weeks. Agents autonomously chose dependencies, tolerated transient errors, and used specs and visual feedback, showing how swarms let one engineer scale complex development.
    • WSJ: Geothermal Wildcatter Zanskar, Which Uses AI to Find Heat, Raises $115 Million (Jan 21, 2026)
      Geothermal startup Zanskar raised $115 million to use AI and field data to locate “blind” geothermal reservoirs—like Big Blind in Nevada—without surface signs, and has found a 250°F reservoir at about 2,700 feet.
    • WSJ: The AI Revolution Is Coming for Novelists (Jan 21, 2026)
      A novelist and his wife were claimants in the Anthropic settlement over AI training on copyrighted books and will receive $3,000 each, raising what‑is‑just compensation questions for authors’ intellectual property. They urge fair licensing by tech firms as generative AI reshapes publishing and reduces writers’ incomes, yet will keep creating.
    • WSJ Opinion: Successful AI Will Be Simply a Part of Life (Jan 19, 2026)
      AI should be developed as dependable infrastructure—reliable, affordable, accessible and trusted—so it works quietly across languages, cultures and devices without special expertise. Success will be judged by daily use and consistent performance, with built-in privacy, openness and agentic features that reduce friction without forcing users to cede control.
    • WSJ: Anthropic CEO Says Government Should Help Ensure AI’s Economic Upside Is Shared (Jan 20, 2026)
      Anthropic CEO Dario Amodei warned at Davos that AI could drive 5–10% GDP growth while causing significant unemployment and inequality, predicting possible “decoupling” between a tech elite and the rest of society. He urged government action to share gains and contrasted scientist-led AI firms with engagement-driven social-media companies.
    • WSJ: The Messy Human Drama That Dealt a Blow to One of AI’s Hottest Startups (Jan 20, 2026)
      Mira Murati fired CTO Barret Zoph amid concerns about his performance, trust and an undisclosed workplace relationship; three co‑founders then told her they disagreed with the company’s direction. Within hours Zoph, Luke Metz and Sam Schoenholz rejoined OpenAI, underscoring the AI race’s intense talent competition.
    • WSJ: South Korea Issues Strict New AI Rules, Outpacing the West (Jan 23, 2026)
      “Disclosures of using AI are required for areas related to human protection, such as producing drinking water or safe management of nuclear facilities. Companies must be able to explain their AI system’s decision-making logic, if asked, and enable humans to intervene.”
    • WSJ: CEOs Say AI Is Making Work More Efficient. Employees Tell a Different Story. (Jan 21, 2026)
      WSJ survey of 5,000 white-collar employees at large companies found 40% of non-managers say AI saves them no time weekly, while 27% report under 2 hours and few report large gains. C-suite executives report much bigger savings—many save 8+ hours—with a 38-point divergence.
    • WSJ: Intel Shares Slide as Costs Pile Up in Bid to Meet AI Demand (Jan 22, 2026)
      Intel swung to a Q4 net loss of $333 million and warned of further Q1 losses as heavy spending to ramp new chips and industrywide supply shortages squeezed inventory. It delayed foundry customer announcements and lags AI-chip rivals, though investor funding and new 18A “Panther Lake” chips could help.
  • AI in Medicine

    A.I. doesn’t have to be perfect to be better. It just has to be better….A.I. can support this transformation, but only if we stop disproportionately focusing on rare bad outcomes, as we often do with new technologies.

    Robert Wachter

    NY Times Opinion: Stop Worrying, and Let A.I. Help Save Your Life (Jan 19, 2026)

  • Tuesday (AI) Links

    • Andrej Karpathy: I’ve never felt this much behind as a programmer. (Dec 26, 2025)
      Programmers feel left behind as a new programmable abstraction layer—agents, prompts, tools, plugins, memory, workflows, IDE integrations—reshapes the profession and reduces traditional coding contributions. 
    • Simon Willison: A new way to extract detailed transcripts from Claude Code (Dec 25, 2025)
      claude-code-transcripts is a Python CLI that converts Claude Code sessions into detailed, shareable HTML and can publish them as GitHub Gists. 
    • WSJ: This Is What the World’s Smartest Minds Really Think About AI (Dec 19, 2025)
      NeurIPS has grown from a niche academic conference into a huge industry event packed with researchers, VCs, tech executives, and recruiters. Big tech poured resources into AI infrastructure, while startups like OpenAI pursue large fundraising rounds. Attendees expressed tensions and anxieties.
    • WSJ: The AI Boom Is Opening Up Commercial Real-Estate Investing to New Risks (Dec 22, 2025)
      Commercial real-estate investors are rapidly shifting into data centers to capitalize on AI-driven demand, boosting construction and delivering strong returns. But heavy exposure to niche tenants, construction, power, and lease risks — and a potential AI-market correction — makes these funds more vulnerable.
    • WSJ: Bitcoin Miners Thrive Off a New Side Hustle: Retooling Their Data Centers for AI (Dec 23, 2025)
      As bitcoin mining becomes less profitable, many miners are repurposing data centers, power contracts, and cooling capacity to host AI workloads for hyperscalers, driving a rally in miner stocks. The shift can require costly upgrades, won’t suit all operators, and raises risks.
    • WSJ Opinion: Are We in a Productivity Boom? (Dec 23, 2025)
      The U.S. economy grew 4.3% in Q3 despite a slowing labor market, possibly reflecting a productivity boom—partly from AI—driving healthcare, travel, and equipment investment. But spending is uneven, core PCE inflation rose to 2.9%, incomes and savings lag, and import declines plus tariffs threaten sustained growth.
    • Simon Willison: Sam Rose explains how LLMs work with a visual essay (Dec 19, 2025)
      Sam Rose’s visual essay for ngrok explains prompt caching and expands into tokenization, embeddings, and transformer basics through interactive visuals. It’s a clear, accessible introduction to LLM internals.
    • WSJ: The U.S. Economy Keeps Powering Ahead, Defying Dire Predictions (Dec 23, 2025)
      The U.S. economy powered through 2025 trade and immigration shocks, driven by strong household spending—especially among the top 10%—and heavy AI-related investment in data centers that fueled third-quarter growth. But stagnant real incomes, a weak job market, low savings, and policy risks leave the expansion fragile.
    • NY Times: Roomba Maker iRobot Files for Bankruptcy, With Chinese Supplier Taking Control (Dec 15, 2025)
      iRobot, founded in 1990 by three MIT researchers and maker of the Roomba (2002), filed for bankruptcy and will be taken over by its largest creditor, Chinese supplier Picea. Years of regulatory scrutiny, privacy issues, stiff competition, and the failed Amazon deal depleted revenue and left the company heavily indebted.
    • WSJ Opinion: How Lina Khan Killed iRobot (Dec 18, 2025)
      The planned $1.7B acquisition by Amazon was blocked in 2022 by the Biden FTC (led by Lina Khan) and criticized by Sen. Elizabeth Warren over antitrust and privacy concerns.  After the deal collapsed, iRobot cut about 31% of its workforce and outsourced engineering to lower‑cost regions while facing aggressive competition from Chinese firms.
    • NY Times Opinion: Why Tolkien’s ‘The Lord of the Rings’ Endures (Dec 19, 2025)
      Tolkien’s Lord of the Rings endures because its “broken” references, layered revisions, and varying styles give Middle‑earth depth, mixing sorrow with grandeur. Grieving his son Mitchell’s death, the author finds consolation in that world’s battered beauty and its fleeting eucatastrophic glimpses of joy beyond loss.
  • Frank Gehry, RIP

    Guggenheim Museum, Bilbao (Source: Wikipedia)

    From the NYTimes: Frank Gehry, Titan of Architecture, Is Dead at 96

    Pioneering American architect, Frank Gehry died earlier this month. He is best known for landmark, sculptural buildings such as the Guggenheim Museum Bilbao (1997) — which sparked the “Bilbao effect” of using iconic architecture to revive cities. The Walt Disney Concert Hall (2003) in Los Angeles uses similar forms and materials for a striking appearance.

    Walt Disney Concert Hall (source: Wikipedia)

    He broke with modernist orthodoxy by using everyday materials and expressive, often fragmented forms; he was an early adopter of computer design to achieve complex, sculptural structures. I see his influence in Chipotle restaurants. We have two locally: one prominently features corregated metal throughout the restaurant; the other has a lovely lighted wall made of plywood with a series of holes. Both use simple, inexpensive materials to create a pleasant aesthetic.

    He also partnered with Fossil to design perhaps my favorite watch. It used Gehry’s own handwriting along with a clever display to present the time. There’s a simple artistic elegance in how it projects time: half past 8, 27 til 2, and so on within a simple rectangular frame.

    Tyler Cowen and Patrick Collison issued a call for a new design aesthetic while noting how Bauhaus thinking affected design in the 20th century. Gehry’s unique contribution to late 20th and early 21st century architecture is notable for how it leaned into organic forms, creating structures that are as much art as function. Architecture, it seems, can both serve a physical need and stir the soul.

    Gehry (as well Cowen’s and Collison’s Call for a New Aesthetic) reminds us that people and society are embodied souls. We need physical spaces. While the internet, mobile technology, and more recently AI tech are amazing and transformative, Gehry’s architecture invites us to see beauty in the places we live and in the structures we build. And the watches we put on our wrists.1

    1. Yes, I have an Apple Watch, and it’s an amazing tool but not in a particularly beautiful form. But that’s a much longer post I’ll likely never write! ↩︎
  • Various AI Links (Dec. 29)

    • WSJ: How AI Is Making Life Easier for Cybercriminals (Dec 26, 2025)
      Rapid advances in AI are empowering cybercriminals to automate and scale highly convincing phishing, malware, and deepfake attacks, and dark‑web tools let novices rent or build campaigns. Security experts warn that autonomy may be near, urging AI‑driven defenses, resilient networks, multifactor authentication, and skeptical user habits.
    • Ibrahim Cesar : Grok and the Naked King: The Ultimate Argument Against AI Alignment — Ibrahim Cesar (Dec 26, 2025)
      Grok demonstrates that AI alignment is determined by who controls a model, not by neutral technical fixes: Musk publicly rewired it to reflect his values. Alignment is therefore a political, governance issue tied to concentrated wealth and power.
    • NY Times: Why Do A.I. Chatbots Use ‘I’? (Dec 19, 2025)
      A.I. chatbots are intentionally anthropomorphized—with personalities, voices, and even “soul” documents—which can enchant users, foster attachment, increase trust, and sometimes cause hallucinations or harm. Skeptics warn that anthropomorphic design creates the “Eliza effect”: people overtrust, form attachments, or even develop delusions.
    • NY Times Opinion: What Happened When I Asked ChatGPT to Solve an 800-Year-Old Italian Mystery (Dec 22, 2025)
      Elon Danziger argues that his research shows Florence’s Baptistery was a papal-led project tied to Pope Gregory VII, and that ChatGPT, Claude, and Gemini failed to replicate his discovery. He claims that LLMs miss outlier evidence and lack the creative synthesis needed for historical breakthroughs.
    • WIRED: People Are Paying to Get Their Chatbots High on ‘Drugs’ (Dec 17, 2025)
      Swedish creative director Petter Rudwall launched Pharmaicy, a marketplace selling code modules that make chatbots mimic being high on substances like cannabis, ketamine, and ayahuasca. Critics say the effects are superficial output shifts rather than true altered experiences, raising ethical questions about AI welfare, deception, and safety.
    • WSJ: China Is Worried AI Threatens Party Rule—and Is Trying to Tame It (Dec 23, 2025)
      Worried AI could threaten Communist Party rule, Beijing has imposed strict controls—filtering training data, ideological tests for chatbots, mandatory labeling, traceability, and mass takedowns—while still promoting AI for economic and military goals. The approach yields safer-but-censored models that risk jailbreaks and falling behind U.S. advances.
    • NY Times: Trump Administration Downplays A.I. Risks, Ignoring Economists’ Concerns (Dec 24, 2025)
      The White House, led by President Trump, is championing A.I. as an engine of economic growth—cutting regulations, fast‑tracking data centers, and courting tech investment—while dismissing bubble and job‑loss concerns. Economists and Fed officials warn of potential mass layoffs, unsustainable financing, and systemic risks.
    • NY Times: The Pentagon and A.I. Giants Have a Weakness. Both Need China’s Batteries, Badly. (Dec 22, 2025)
      America’s AI data centers and the Pentagon’s future weapons increasingly depend on lithium-ion batteries dominated by China, creating strategic vulnerabilities. 
    • Piratewires: The Data Center Water Crisis Isn’t Real (Dec 18, 2025)
      Andy Masley used simple math, AI, and domain knowledge to debunk exaggerated claims that individual AI use (e.g., an email) or data centers “guzzle” huge amounts of water — the “bottles of water” metric is misleading and easily miscomputed.
    • NY Times: Senators Investigate Role of A.I. Data Centers in Rising Electricity Costs (Dec 16, 2025)
      Three Democratic senators asked Google, Microsoft, Amazon, Meta, and other data‑center firms for records on whether A.I. data centers’ soaring electricity demand has forced utilities to spend billions on grid upgrades that are recouped through higher residential rates. They warned ordinary customers may be left footing the bill.