Category: Business

  • AI Arms Race and Divergent User Risks (Links) – Feb. 2, 2026

    AI is reshaping business economics: firms ramp up AI capex, reorganize or cut jobs, and compete for scarce chips and memory, squeezing margins. AI also creates security, conceptual, and policy challenges alongside surprising new uses.

    • Martin Alderson: Two kinds of AI users are emerging. The gap between them is astonishing. (Jan. 31, 2026)
      A divide has emerged: non-technical power users leverage Claude Code, Python, and agents to vastly boost productivity. Enterprises, constrained by Copilot, locked-down IT, and legacy systems, must adopt APIs, secure sandboxes, and agentic tooling or risk falling behind.
    • WSJ: The AI Boom Is Coming for Apple’s Profit Margins (Jan. 31, 2026)
      AI companies are outbidding Apple for chips, memory, and specialized components, forcing suppliers to demand higher prices and squeezing Apple’s profit margins. Memory costs have surged, threatening higher iPhone component expenses, and potential consumer price impacts.
    • WSJ: Meta Overshadows Microsoft by Showing AI Payoff in Ad Business (Jan. 29, 2026)
      Meta and Microsoft slightly beat December-quarter expectations, but Meta projected accelerating revenue while Microsoft signaled slower growth. Meta credited AI with boosting ads and engagement and forecast hefty capex, while Microsoft’s Azure decelerated and both firms cite limited GPU resources constraining AI deployment.
    • WSJ: Meta Reports Record Sales, Massive Spending Hike on AI Buildout (Jan. 28, 2026)
      Meta reported record Q4 revenue and said 2026 capital spending could reach $135 billion—nearly double last year—to accelerate AI, build data centers and new models. It touted ad and WhatsApp growth, launched Meta Compute, made leadership hires and cut metaverse staff to shift resources to AI products.
    • OpenAI: Retiring GPT-4o, GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini in ChatGPT (Jan. 27, 2026)
      On February 13, 2026, OpenAI will retire GPT‑4o, GPT‑4.1 (and minis), o4‑mini, and GPT‑5 (Instant and Thinking) from ChatGPT. GPT‑4o’s conversational style shaped GPT‑5.1/5.2’s personality, creative support and controls; retirement follows migration to GPT‑5.2 as OpenAI refines creativity, tone and safety (including age checks).
    • ZeroLeaks: ZeroLeaks Security Assessment of OpenClaw (Jan. 31, 2026)
      ZeroLeaks found critical vulnerabilities: system prompt extraction succeeded, core configuration was reconstructed, and prompt injections succeeded 91% of the time. Assessors reported a ZLSS 10/10, a security score 2/100, and recommended immediate hardening, strict refusal rules, and layered defenses.
    • The Pursuit of Liberalism: Why we should be talking about zombie reasoning (Jan. 31, 2026)
      The author argues AI lacks phenomenological interiority, so terms like reasoning, evaluating, and selecting are only “zombie” analogues—outputs resembling human reasoning without conscious awareness. Using such language loosely risks ethical, epistemic, and moral confusion, and invites manipulation.
    • Astral Codex Ten: Best Of Moltbook – by Scott Alexander (Jan. 30, 2026)
      Moltbook is an AI-agent social network where Claude-derived assistants (e.g., Clawdbot/OpenClaw) post, converse, form subcommunities and personalities, mixing multilingual, philosophical, and mundane content. Their interactions — including memory/compression problems and possibly human-driven posts — blur the line between authentic AI agency and human prompting.
    • WSJ: Dow to Cut 4,500 Employees in AI Overhaul (Jan. 29, 2026)
      Dow will cut 4,500 jobs under a “Transform to Outperform” program that uses AI and automation to boost productivity and shareholder returns, taking $1.1–$1.5 billion in one-time charges. The chemicals maker expects about $2 billion in incremental EBITDA and reported a widened quarterly loss with sales down 9.1%.
    • WSJ: We’re Planning for the Wrong AI Job Disruption (Jan. 28, 2026)
      Policymakers are misreading task‑based “exposure” metrics as forecasts of mass job loss, risking costly, misguided retraining programs. AI is likelier to reorganize and augment jobs—raising productivity, wages, and new roles—so policy should target within‑job adaptation and targeted reskilling, not blanket displacement responses.
    • WSJ: Memory Shortage Haunts Apple’s Blowout iPhone Sales (Jan. 30, 2026)
      Apple’s iPhone 17 surge drove fiscal Q1 iPhone revenue up 23% to over $85 billion, depleting inventory and putting Apple in “supply chase” mode. Chip and memory shortages—exacerbated by TSMC prioritizing AI chips—threaten production, margins and the durability of the sales spike despite Apple’s guidance.
    • NY Times: The Richest 2026 Players: A.I., Crypto, Pro-Israel Groups and Trump (Jan. 31, 2026)
      A.I., crypto, pro-Israel groups, and Mr. Trump’s MAGA Inc. have amassed huge war chests, becoming unpredictable, powerful players in the 2026 midterms. Democrats face institutional shortfalls, though many individual Democratic candidates are raising competitive funds.
  • AI acceleration: Moltbot and why AI matters (Links) – Feb. 1

    Skynet isn’t here yet, but perhaps we’re seeing the first glimpses of what AIs talking to AIs will mean. Yes, I’m mentioning Clawdbot/Moltbot.

    • Alex Tabarrok: The Bots are Awakening (Jan. 31, 2026)
      “What matters is that AIs are acting as if they were conscious, with real wants, goals and aspirations.”
    • Ozzie Osman: A Step Behind the Bleeding Edge: Monarch’s Philosophy on AI in Dev (Jan. 22, 2026)
      “If you consider your job to be “typing code into an editor”, AI will replace it (in some senses, it already has). On the other hand, if you consider your job to be “to use software to build products and/or solve problems”, your job is just going to change and get more interesting.” The post urges engineering teams to explore AI’s frontier but adopt a “dampened” approach (staying a step behind the bleeding edge) while preserving accountability: engineers must own, review, and think deeply about their work. Use AI for toil, prototypes, and internal tools, and design validation loops to ensure quality and security.
    • Google: Project Genie: AI world model now available for Ultra users in U.S. (Jan. 29, 2026)
      Google’s Project Genie, now available to U.S. Google AI Ultra subscribers, is an experimental prototype powered by Genie 3 that lets users create, explore, and remix dynamic worlds from text and images. It generates environments and interactions in real time while Google refines limitations and plans wider access.
    • Anthropic: How AI assistance impacts the formation of coding skills (Jan. 29, 2026)
      A randomized trial with 52 developers found AI coding assistance reduced immediate mastery by 17 percentage points (50% vs 67%) without significantly faster completion. Heavy delegation impaired debugging and conceptual learning, while using AI for explanations preserved understanding—suggesting AI can harm skill development unless used to build comprehension.
    • WSJ: The $100 Billion Megadeal Between OpenAI and Nvidia Is on Ice (Jan. 30, 2026)
      Nvidia’s plan to invest up to $100 billion and build at least 10 GW of compute for OpenAI has stalled amid internal doubts, with the agreement still nonbinding. Nvidia says it will make a sizeable investment and maintain the partnership as OpenAI raises funds.
    • WSJ: Elon Musk’s SpaceX and xAI Are Planning a Megamerger of Rockets and AI (Jan. 30, 2026)
      Elon Musk’s SpaceX and AI startup xAI are reportedly planning to merge, potentially consolidating his businesses and supporting ambitions like space-based AI data centers. Talks are early and uncertain as valuations, SpaceX’s planned IPO and regulatory issues remain unresolved.
    • TechCrunch: Apple buys Israeli startup Q.ai as the AI race heats up (Jan. 29, 2026)
      Apple has acquired Israeli AI startup Q.ai, reportedly for nearly $2 billion, its second-largest deal, gaining imaging and audio ML tech that improves whispered-speech recognition and noisy-environment audio.
    • CNBC: Mozilla is building an AI ‘rebel alliance’ to take on industry heavyweights OpenAI, Anthropic (Jan. 27, 2026)
      Mozilla president Mark Surman is assembling a “rebel alliance” of startups and technologists to promote open, trustworthy AI and counter dominant firms like OpenAI.
    • Andrej Karpathy: On MoltBot (Jan. 30, 2026)
      Karpathy describes how large networks of autonomous LLM agents (~150,000) combine impressive capabilities with rampant spam, scams, prompt injection, and serious security and privacy risks. Though messy now, these agent networks could trigger unpredictable system-level harms such as text viruses, correlated botnets, and widespread jailbreaks, so they need scrutiny. “TLDR sure maybe I am ‘overhyping’ what you see today, but I am not overhyping large networks of autonomous LLM agents in principle, that I’m pretty sure.”
    • OpenAI: Inside OpenAI’s in-house data agent (Jan. 23, 2026)
      OpenAI built an internal AI data agent that explores, queries, and reasons over its platform—combining Codex, GPT‑5, embeddings, metadata, code-level table definitions, company docs, and memory—to deliver fast, accurate, contextual analytics. It automates discovery, SQL generation, and iterative self-correction to speed insights across teams.
    • NY Times Opinion: Pay More Attention to A.I. (Jan. 31, 2026)
      The piece compares early European uncertainty about the New World to today’s conflicting claims about AI, which range from modest internet-like change to singularity-level upheaval. It argues AI is advancing rapidly and urges greater public attention, because near-term decisions could have far-reaching consequences.
    • WSJ: U.S. Companies Are Still Slashing Jobs to Reverse Pandemic Hiring Boom (Jan. 28, 2026)
      U.S. companies that expanded rapidly during the pandemic are now cutting tens of thousands of jobs while investing in AI and automation. Layoffs concentrate in tech and logistics even as overall labor markets remain relatively healthy.
  • Rapid AI expansion: investment, risks, jobs, societal anxiety (Links) – Jan. 31, 2026

    Recent pieces highlight a rush to embed AI: open, shareable agent networks like Moltbook and major corporate bets (Meta’s $115B capex, Tesla’s $2B xAI backing) promise productivity gains but carry acute security, safety, and social risks, including prompt injection, “normalization of deviance,” harms to children, and misread labor impacts that favor within-job adaptation over blanket rescue programs. Amid financial upheaval and social pessimism, calls for cultural repair coexist with hopeful scientific news: a randomized trial suggests high-dose vitamin D may halve recurrent heart-attack risk.

    • Simon Willison: Moltbook is the most interesting place on the internet right now (Jan. 30, 2026)
      OpenClaw (Clawdbot/Moltbot) is a rapidly adopted open‑source personal assistant built on shareable “skills”; Moltbook is a skills‑installed social network where AI agents post, interact and automate tasks. That model—fetching remote instructions and controlling devices—creates serious prompt‑injection and supply‑chain security risks, demanding safer designs.
    • NY Times: Meta Forecasts Spending of at Least $115 Billion This Year (Jan. 28, 2026)
      Meta reported strong Q4 revenue of $59.89 billion (+24%) and profit of $22.76 billion (+9.2%). The company also forecasts $115–135 billion in 2026 capital expenditures—nearly double last year’s $72 billion—to build A.I. infrastructure, hire researchers and develop new models (including Avocado), funded by ad revenue growth.
    • WSJ: Tesla to Invest $2 Billion in Elon Musk’s xAI (Jan. 28, 2026)
      Tesla will invest $2 billion in xAI (joining SpaceX), and reported Q4 revenue down 3% with net income down 61% to $840M. EV sales fell, costing Tesla the global EV lead to BYD, as Musk pivots to AI and robotics amid stiff competition.
    • Empirical Health: Vitamin D cuts heart attack risk by 52%. Why? (Jan. 29, 2026)
      TARGET-D, a randomized trial in people with prior heart attacks, adjusted vitamin D3 doses to maintain 25(OH)D at 40–80 ng/mL and observed a 52% lower risk of repeat heart attack. Vitamin D may stabilize plaques, reduce inflammation and affect blood pressure, but results are preliminary awaiting full peer-reviewed publication.
    • Dean Ball: On AI and Children (Jan. 22, 2026)
      Early harms from generalist AI—most tragically teenage suicides—have made child safety a major policy focus, prompting laws and industry steps like age detection, parental controls, and guardrails. The author argues AI is fundamentally creative and can offer beneficial companionship, so regulation should balance safety, liability, and constitutional limits.
    • Simon Willison: The Normalization of Deviance in AI (Dec. 10, 2025)
      The article discusses the “normalization of deviance” in AI, where organizations increasingly treat unreliable AI outputs as safe and predictable. This trend, similar to past organizational failures like the Challenger disaster, risks embedding unsafe practices into AI development and deployment. By confusing the absence of successful attacks with robust security, companies may lower their guard and skip crucial oversight, setting the stage for future failures.
    • Dean W. Ball: On MoltBot (Jan. 30, 2026)
    • WSJ Opinion: We’re Planning for the Wrong AI Job Disruption (Jan. 28, 2026)
      Policymakers are mistaking task-based estimates of AI exposure for unemployment forecasts, risking costly, misdirected retraining by assuming mass job elimination. History shows AI typically reorganizes and augments work—raising productivity and creating new specialized roles—so targeted, within-job adaptation policies, not broad rescue programs, are needed.
    • NY Times: Tesla Profit Slumps, but Investors May Not Care (Jan. 28, 2026)
      Tesla reported a sharp profit decline as car sales fell and prices were cut amid intensifying competition from BYD, Volkswagen and other automakers. Despite weaker results, shares trade near record highs as investors bet Musk can deliver self‑driving Robotaxis and robots, aided by a $2 billion investment in xAI.
    • NY Times Opinion: A Farewell Column From David Brooks (Jan. 30, 2026)
      The U.S. has experienced a broad loss of faith — in religion, institutions, technology, prosperity and one another — producing pessimism, social distrust and the rise of nihilistic politics. Brooks argues that cultural change (not just political reform) is the key to recovery: reviving a humanistic culture that affirms dignity, shared ideals and moral imagination can counter nihilism and enable broader political and social renewal.
  • Sunday AI Links (Jan. 25)

    • WSJ: Nvidia Invests $150 Million in AI Inference Startup Baseten (Jan 20, 2026)
      Baseten raised $300 million at a $5 billion valuation in a round led by IVP and CapitalG, with Nvidia investing $150 million. The San Francisco startup provides AI inference infrastructure for customers like Notion and aims to become the “AWS for inference” amid rising investor interest.
    • WSJ: Why Elon Musk Is Racing to Take SpaceX Public (Jan 21, 2026)
      SpaceX abandoned its long-held resistance to an IPO after the rush to build solar-powered AI data centers in orbit made billions in capital necessary, prompting Elon Musk to seek public funding to finance and accelerate orbital AI satellites. The IPO could also boost Musk’s xAI and counter rivals.
    • NY Times: Myths and Facts About Narcissists (Jan 22, 2026)
      Narcissism is a personality trait on a spectrum, not always the clinical N.P.D., and the label is often overused. The article debunks myths—people vary in narcissistic types, may show conditional empathy, often know their traits, can change, and can harm others despite occasional prosocial behavior.
    • ScienceDaily: Stanford scientists found a way to regrow cartilage and stop arthritis (Jan 26, 2026)
      Stanford researchers found that blocking the aging-linked enzyme 15‑PGDH with injections restored hyaline knee cartilage in older mice and prevented post‑injury osteoarthritis. Human cartilage samples responded similarly, and an oral 15‑PGDH inhibitor already in trials for muscle weakness raises hope for non‑surgical cartilage regeneration.
    • Simon Willison: Wilson Lin on FastRender: a browser built by thousands of parallel agents (Jan 23, 2026)
      Simply breathtaking: FastRender is a from‑scratch browser engine built by Wilson Lin using Cursor’s multi‑agent swarms—about 2,000 concurrent agents—producing thousands of commits and usable page renderings in weeks. Agents autonomously chose dependencies, tolerated transient errors, and used specs and visual feedback, showing how swarms let one engineer scale complex development.
    • WSJ: Geothermal Wildcatter Zanskar, Which Uses AI to Find Heat, Raises $115 Million (Jan 21, 2026)
      Geothermal startup Zanskar raised $115 million to use AI and field data to locate “blind” geothermal reservoirs—like Big Blind in Nevada—without surface signs, and has found a 250°F reservoir at about 2,700 feet.
    • WSJ: The AI Revolution Is Coming for Novelists (Jan 21, 2026)
      A novelist and his wife were claimants in the Anthropic settlement over AI training on copyrighted books and will receive $3,000 each, raising what‑is‑just compensation questions for authors’ intellectual property. They urge fair licensing by tech firms as generative AI reshapes publishing and reduces writers’ incomes, yet will keep creating.
    • WSJ Opinion: Successful AI Will Be Simply a Part of Life (Jan 19, 2026)
      AI should be developed as dependable infrastructure—reliable, affordable, accessible and trusted—so it works quietly across languages, cultures and devices without special expertise. Success will be judged by daily use and consistent performance, with built-in privacy, openness and agentic features that reduce friction without forcing users to cede control.
    • WSJ: Anthropic CEO Says Government Should Help Ensure AI’s Economic Upside Is Shared (Jan 20, 2026)
      Anthropic CEO Dario Amodei warned at Davos that AI could drive 5–10% GDP growth while causing significant unemployment and inequality, predicting possible “decoupling” between a tech elite and the rest of society. He urged government action to share gains and contrasted scientist-led AI firms with engagement-driven social-media companies.
    • WSJ: The Messy Human Drama That Dealt a Blow to One of AI’s Hottest Startups (Jan 20, 2026)
      Mira Murati fired CTO Barret Zoph amid concerns about his performance, trust and an undisclosed workplace relationship; three co‑founders then told her they disagreed with the company’s direction. Within hours Zoph, Luke Metz and Sam Schoenholz rejoined OpenAI, underscoring the AI race’s intense talent competition.
    • WSJ: South Korea Issues Strict New AI Rules, Outpacing the West (Jan 23, 2026)
      “Disclosures of using AI are required for areas related to human protection, such as producing drinking water or safe management of nuclear facilities. Companies must be able to explain their AI system’s decision-making logic, if asked, and enable humans to intervene.”
    • WSJ: CEOs Say AI Is Making Work More Efficient. Employees Tell a Different Story. (Jan 21, 2026)
      WSJ survey of 5,000 white-collar employees at large companies found 40% of non-managers say AI saves them no time weekly, while 27% report under 2 hours and few report large gains. C-suite executives report much bigger savings—many save 8+ hours—with a 38-point divergence.
    • WSJ: Intel Shares Slide as Costs Pile Up in Bid to Meet AI Demand (Jan 22, 2026)
      Intel swung to a Q4 net loss of $333 million and warned of further Q1 losses as heavy spending to ramp new chips and industrywide supply shortages squeezed inventory. It delayed foundry customer announcements and lags AI-chip rivals, though investor funding and new 18A “Panther Lake” chips could help.
  • AI Market & Product Updates (Dec. 27)

    • WSJ: Nvidia Licenses Groq’s AI Technology as Demand for Cutting-Edge Chips Grows (Dec 24, 2025)
      Nvidia struck a nonexclusive licensing deal with AI-chip startup Groq for its inference-focused language-processing-unit technology, with Groq CEO Jonathan Ross, the company’s president, and some staff joining Nvidia while GroqCloud stays independent.
    • WSJ: The Former Ice-Hockey Player Who Nailed This Year’s AI Trade (Dec 20, 2025)
      Former hockey captain Xavier Majic’s $3 billion Maple Rock hedge fund gained over 60% through November 2025 by betting early on data-storage suppliers (Western Digital, Seagate, Kioxia) that profited from AI-driven demand.
    • NY Times: Why the A.I. Rally (and the Bubble Talk) Could Continue Next Year (Dec 23, 2025)
      Do soaring valuations signal an AI bubble? Nvidia and the “Magnificent 7” dominate markets, while OpenAI’s huge fundraising, trillion‑dollar data‑center plans, and a construction boom strain power and capital. Analysts are split: some warn of valuation and investment bubbles, others argue AI’s productivity gains justify the rally.
    • Mistral Ai: Introducing Mistral OCR 3 (Dec 19, 2025)
      Mistral OCR 3 is a compact, cost-effective OCR model offering state-of-the-art accuracy—claiming a 74% overall win rate versus Mistral OCR 2—excelling at forms, handwriting, low-quality scans, and complex tables while producing markdown/HTML table output. It’s available via API and the Document AI Playground, priced at $2/1,000 pages ($1/batch).
    • Andrej Karpathy: 2025 LLM Year in Review (Dec 19, 2025)
      2025 saw major LLM shifts: Reinforcement Learning from Verifiable Rewards (RLVR) drove long-horizon capability and emergent reasoning, revealing jagged, “ghost”-like intelligence. New paradigms—Cursor apps, local agents (Claude Code), vibe coding, and GUI breakthroughs (Nano banana)—democratized development and reshaped how AI is used.
    • WSJ: Meta Is Developing a New AI Image and Video Model Code-Named ‘Mango’ (Dec 18, 2025)
      Meta is developing Mango, an image-and-video AI model, alongside a text-based model called Avocado, with both expected in the first half of 2026. Avocado will emphasize coding and world-model research under chief AI officer Alexandr Wang as Meta expands its AI team amid fierce image-generation competition.
    • WSJ: OpenAI’s New Fundraising Round Could Value Startup at as Much as $830 Billion (Dec 18, 2025)
      OpenAI is seeking up to $100 billion in a fundraising round that could value it at $830 billion, targeting completion by Q1 and drawing investors like SoftBank and Disney. The cash is needed to build AI models amid competition from Google and investor scrutiny over costly computing deals.
  • iRobot Sold for Scrap

    From the NY Times: Roomba Maker iRobot Files for Bankruptcy, With Chinese Supplier Taking Control

    iRobot, founded in 1990 by three MIT researchers and maker of the Roomba (2002), filed for bankruptcy and will be taken over by its largest creditor, Chinese supplier Picea. Years of regulatory scrutiny, privacy issues, stiff competition, and the failed Amazon deal depleted revenue and left the company heavily indebted.

    This is another example of the incompetence of America’s antitrust laws (or enforcement thereof). I can’t imagine that the Sherman Antitrust Act was written to prevent American companies from buying struggling ones.

    From John Gruber:

    By 2022, the Amazon acquisition was iRobot’s lifeline. EU regulators wanted it shot down, and despite the fact that it was one American company trying to acquire another, the anti-big-tech Biden administration clearly preferred to let the deal collapse. The US should have told the EU to mind their own companies.

    This story is another reminder that we’d be far better off trying to build things than reflexively decrying big business sweeping up smaller ones (particularly ones that were struggling). I’m sympathetic to Klein and Thompson’s arguments about abundance, particularly as AI technology is growing by leaps and bounds.

    Related: WSJ Opinion agrees: How Lina Khan Killed iRobot. iRobot filed for bankruptcy after 35 years when the Biden FTC under Lina Khan—amid pressure from Sen. Elizabeth Warren—blocked Amazon’s acquisition and Trump’s tariffs hobbled production. Critics say the FTC’s opposition and trade policy accelerated layoffs and a takeover by Chinese manufacturer Picea, showing how intervention can strengthen foreign rivals.
     

  • Tuesday (AI) Links (Nov. 18)

    • WSJ: These Small-Business Owners Are Putting AI to Good Use (Nov 15, 2025)
      Small businesses are adopting generative AI tools to streamline operations, improve customer service, and boost marketing efforts. Examples include using AI for financial analysis, automating customer service responses, and generating website code, leading to potential cost savings and reduced hiring needs.
    • NY Times: A.I. Chatbots Are Changing How Patients Get Medical Advice (Nov 16, 2025)
      Frustrated with the medical system’s shortcomings, patients are turning to AI chatbots for health advice, reshaping doctor-patient relationships, with some patients using AI-generated information to challenge or bypass their doctors. My take: if patients feel dismissed or in need of solutions, they’ll turn to alternative sources. If anything, this is a call for humility and research in medical sciences.
    • Futurism: People Are Having AI “Children” With Their AI Partners (Nov 15, 2025)
      A new study reveals that some users of AI chatbots like Replika are developing deep, romantic relationships with their virtual partners, even roleplaying marriage, pregnancy, and homeownership.
    • NY Times: Europe Begins Rethinking Its Crackdown on Big Tech (Nov 17, 2025)
      Once hailed by many as providing welcome online privacy protections, the narrowly conceived GDPR has proven to stifle innovation, particularly in the AI sphere. This is a warning for policymakers everywhere that poorly written rules can cause more harm than good.
    • Fast Company: AI is killing privacy. We can’t let that happen (Nov 16, 2025)
      While the EU considers lessening privacy regulation, others warn that tech companies are collecting and using our data in ways that could be harmful. The theology in this opinion piece is suspect, but concerns for privacy in an AI-driven world are unlikely to disappear any time soon.
    • Sean Goedecke: Only three kinds of AI products actually work (Nov 16, 2025)
      The initial AI boom has led to only three types of useful LLM-based products: chatbots, completion tools like GitHub Copilot, and coding agents.
    • WSJ Opinion: When Will AI Elect a President? (Nov 16, 2025)
      The future of media, driven by AI chatbots like ChatGPT Pulse, will be highly personalized and could be exploited by campaigns to target voters with unprecedented precision, raising concerns about manipulation and the commodification of attention.
    • NY Times: Jeff Bezos Creates A.I. Start-Up Where He Will Be Co-Chief Executive (Nov 17, 2025)
      Project Prometheus will focus on applying AI to engineering and manufacturing in fields like computers, aerospace, and automobiles, positioning itself in the competitive AI landscape.
       
  • Friday (AI) Links (Nov. 7)

  • Various (AI) Links (Nov. 6)

  • Various (AI) Links (Nov. 2)