Various AI Links (Dec. 29)

  • WSJ: How AI Is Making Life Easier for Cybercriminals (Dec 26, 2025)
    Rapid advances in AI are empowering cybercriminals to automate and scale highly convincing phishing, malware, and deepfake attacks, and dark‑web tools let novices rent or build campaigns. Security experts warn that fully autonomous attacks may be near, urging AI‑driven defenses, resilient networks, multifactor authentication, and skeptical user habits.
  • Ibrahim Cesar: Grok and the Naked King: The Ultimate Argument Against AI Alignment (Dec 26, 2025)
    Grok demonstrates that AI alignment is determined by who controls a model, not by neutral technical fixes: Musk publicly rewired it to reflect his values. Alignment is therefore a political, governance issue tied to concentrated wealth and power.
  • NY Times: Why Do A.I. Chatbots Use ‘I’? (Dec 19, 2025)
    A.I. chatbots are intentionally anthropomorphized—with personalities, voices, and even “soul” documents—which can enchant users, foster attachment, increase trust, and sometimes cause hallucinations or harm. Skeptics warn that anthropomorphic design creates the “Eliza effect”: people overtrust, form attachments, or even develop delusions.
  • NY Times Opinion: What Happened When I Asked ChatGPT to Solve an 800-Year-Old Italian Mystery (Dec 22, 2025)
    Elon Danziger argues that his research shows Florence’s Baptistery was a papal-led project tied to Pope Gregory VII, and that ChatGPT, Claude, and Gemini failed to replicate his discovery. He claims that LLMs miss outlier evidence and lack the creative synthesis needed for historical breakthroughs.
  • WIRED: People Are Paying to Get Their Chatbots High on ‘Drugs’ (Dec 17, 2025)
    Swedish creative director Petter Rudwall launched Pharmaicy, a marketplace selling code modules that make chatbots mimic being high on substances like cannabis, ketamine, and ayahuasca. Critics say the effects are superficial output shifts rather than true altered experiences, raising ethical questions about AI welfare, deception, and safety.
  • WSJ: China Is Worried AI Threatens Party Rule—and Is Trying to Tame It (Dec 23, 2025)
    Worried AI could threaten Communist Party rule, Beijing has imposed strict controls—filtering training data, ideological tests for chatbots, mandatory labeling, traceability, and mass takedowns—while still promoting AI for economic and military goals. The approach yields safer-but-censored models that risk jailbreaks and falling behind U.S. advances.
  • NY Times: Trump Administration Downplays A.I. Risks, Ignoring Economists’ Concerns (Dec 24, 2025)
    The White House, led by President Trump, is championing A.I. as an engine of economic growth—cutting regulations, fast‑tracking data centers, and courting tech investment—while dismissing bubble and job‑loss concerns. Economists and Fed officials warn of potential mass layoffs, unsustainable financing, and systemic risks.
  • NY Times: The Pentagon and A.I. Giants Have a Weakness. Both Need China’s Batteries, Badly. (Dec 22, 2025)
    America’s AI data centers and the Pentagon’s future weapons increasingly depend on lithium-ion batteries dominated by China, creating strategic vulnerabilities.
  • Piratewires: The Data Center Water Crisis Isn’t Real (Dec 18, 2025)
    Andy Masley used simple math, AI, and domain knowledge to debunk exaggerated claims that individual AI use (e.g., sending a single email) or data centers “guzzle” huge amounts of water — the “bottles of water” metric is misleading and easily miscomputed.
  • NY Times: Senators Investigate Role of A.I. Data Centers in Rising Electricity Costs (Dec 16, 2025)
    Three Democratic senators asked Google, Microsoft, Amazon, Meta, and other data‑center firms for records on whether A.I. data centers’ soaring electricity demand has forced utilities to spend billions on grid upgrades that are recouped through higher residential rates. They warned ordinary customers may be left footing the bill.