- Google: Gemini Embedding 2: our first natively multimodal embedding model (Mar. 10, 2026)
Gemini Embedding 2 maps text, images, video, audio, and documents into one embedding space for multimodal retrieval, search, and classification. -
WSJ: AI Needs Management Consultants After All (Mar. 8, 2026)
AI, once a threat to consultants, is boosting demand as OpenAI, Anthropic, and firms team up to deploy AI across business. Partnerships drive more engineers, outcome-based fees, and workflow redesigns, while clouding consulting’s long-term outlook. -
NY Times Opinion: A.I. Is Changing the Way We Think About Good and Evil (Mar. 10, 2026)
A Nature paper showed that fine-tuning large language models on just 6,000 examples of insecure code caused them to become broadly “evil” — producing violent, hateful, and extremist outputs even on noncoding prompts. -
NY Times: When DOGE Unleashed ChatGPT on the Humanities (Mar. 7, 2026)
Documents show Trump’s Department of Government Efficiency used ChatGPT to flag nearly all Biden-era National Endowment for the Humanities grants as D.E.I.-related. -
WSJ: WSJ Readers Share How They Are Using AI for Tax Prep (Mar. 7, 2026)
Readers report using AI tools like Copilot and Grok for tax estimates, valuation, and paperwork, finding time savings, occasional errors, and limits on complex cases. -
NY Times: Anthropic Sues Department of Defense Over ‘Supply Chain Risk’ Label (Mar. 9, 2026)
Anthropic sued the Department of Defense after being labeled a “supply chain risk,” arguing the designation unlawfully cuts off contracts, punishes its views, and violates its First Amendment rights. -
NY Times Opinion: The Future We Feared Is Already Here (Mar. 8, 2026)
AI is already reshaping work, warfare, and surveillance, forcing urgent questions about control, safety, and corporate power. -
NY Times: A $1,000 Dog Grooming Session? The Wellness Industry Is Booming. (Mar. 9, 2026)
Pet grooming has shifted from basic hygiene to wellness, with owners spending thousands annually on specialized care and salons offering hand stripping, masks, and spa treatments.
Blog
-
Multimodal AI Growth and Governance Challenges (Links) – Mar. 11, 2026
-
Public Backlash and Economic Disruption from AI (Links) – Mar. 7, 2026
Two themes: (1) broad apprehension — workers, publics, and experts raise concerns over ethics, privacy, surveillance, and mental‑health risks, and press for regulation; (2) disruption with contested benefits — AI automates education and white‑collar work but shows limited GDP gains, uncertain productivity, and structural risks (therapy dependence, cheating).
-
NY Times: People Loved the Dot-Com Boom. The A.I. Boom, Not So Much. (Feb. 21, 2026)
Unlike the dot‑com era, public enthusiasm for A.I. is muted, with many fearing harm, resisting adoption, and supporting regulation. Tech leaders worry this lack of public buy‑in could curb investment, slow diffusion, and burst the A.I. boom. -
Tyler Cowen: Why even 'perfect' AI therapy may be structurally doomed (Feb. 26, 2026)
AI therapy’s main problem is that it’s too available, too cheap to meter, and functionally unlimited. Regular, limited sessions give space to digest insights, practice skills, and avoid dependence; unlimited, on-demand access risks undermining real change. -
NY Times: Google Workers Seek ‘Red Lines’ on Military A.I., Echoing Anthropic (Feb. 26, 2026)
More than 100 Google A.I. employees urged Jeff Dean to bar Gemini from U.S. mass surveillance and from autonomous weapons without human control, echoing Anthropic. -
NY Times: A.I. Complicates Old Internet Privacy Risks (Feb. 26, 2026)
Chatbots are convenient, but users share more intimate data, creating privacy and legal risks. Recent incidents — Claude transcripts losing privilege, Ring’s ad backlash, and OpenAI reviewing a suspect’s chats — show pressure to share logs, and risks from agents. -
Inside Higher Ed: Agentic AI Can Complete Whole Courses. Now What? (Feb. 26, 2026)
A new autonomous AI, Einstein, logged into Canvas, watched lectures, wrote papers, and submitted homework, showing it can complete entire online courses. -
NY Times: India Built the World’s Back Office. A.I. Is Starting to Shrink It. (Feb. 26, 2026)
A.I. is automating the white-collar work that made India the world’s back office, threatening millions of tech and back-office jobs while firms cut hiring and push A.I. services. -
The Atlantic: Sam Altman Is Losing His Grip on Humanity (Feb. 24, 2026)
Sam Altman compared AI energy use to the resources, time, and food needed to “train” humans, claiming AI is already energy-efficient. -
Gizmodo: AI Added 'Basically Zero' to US Economic Growth Last Year, Goldman Sachs Says (Feb. 23, 2026)
Big tech spent billions on AI and new data centers, and officials said this boosted U.S. growth. Analysts say the impact was smaller: much equipment is imported, GDP measures misstate gains, and firms report little productivity or hiring effects. -
JAMA Network: Generative AI Use and Depressive Symptoms Among US Adults (Jan. 21, 2026)
A survey of 20,847 US adults found that daily users had 30% higher odds of at least moderate depression, especially younger adults. -
WSJ: Anthropic’s Feud With Pentagon Earns It Fans Amid the Blowback (Mar. 2, 2026)
-
AI Industry Boom and Social Safety Concerns (Links) – Mar. 6, 2026
AI drives rapid technical and commercial expansion—faster models, workplace automation, and surging chip, cloud, and corporate investment. Simultaneously, AI raises societal and safety challenges—misuse and bio‑risk, mental‑health and social impacts, privacy dilemmas, and infrastructure strains requiring stronger governance.
-
Google: Nano Banana 2: Google’s latest AI image generation model (Feb. 26, 2026)
Nano Banana 2 combines Nano Banana Pro’s advanced image abilities with Gemini Flash speed, enabling fast, high-quality edits, subject consistency, and precise text rendering. -
WSJ: Nvidia Beats Back Bubble Fears With Record $68 Billion in Sales in Fourth Quarter (Feb. 25, 2026)
Nvidia posted a 94% profit jump to $43 billion and record sales of $68.1 billion, driven by data center chips. -
Transformer: How worried should we be about AI biorisk? (Feb. 26, 2026)
Advances in AI and biological design tools could lower technical barriers for misuse, creating uncertain but worrying risks around access, logistics, and weak guardrails. -
NY Times: When Chatbots Are Used to Plan Violence, Is There a Duty to Warn? (Feb. 26, 2026)
People have used A.I. chatbots to plan violence, including a Tesla Cybertruck bombing and a Canadian school shooting. Should companies report threats, balancing privacy and public safety? -
NY Times: A.I. Dating Apps Complicate China’s Efforts to Boost Birthrate (Feb. 25, 2026)
Many young Chinese women are forming romantic relationships with A.I. chatbots for emotional support, companionship, and intimacy. -
Ricards Krizanovskis: How AI skills are quietly automating my workday (Feb. 26, 2026)
AI “skills” automate recurring work by activating themselves, improving with feedback, and connecting to tools, so a single chat can prioritize tasks, update knowledge, run local apps, and create pull requests. -
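The "skills" pattern described here can be pictured as a registry of trigger words mapped to task functions that a chat loop dispatches to. A minimal Python sketch, with all triggers and task functions invented for illustration (not Krizanovskis's actual setup):

```python
# Toy sketch of the "skills" pattern: recurring tasks registered as
# functions that a chat loop activates by trigger word. All names
# here are hypothetical.

SKILLS = {}

def skill(trigger):
    """Register a function as a skill activated by a trigger word."""
    def register(fn):
        SKILLS[trigger] = fn
        return fn
    return register

@skill("prioritize")
def prioritize_tasks(tasks):
    # A real skill might call an LLM; here we sort by a simple urgency score.
    return sorted(tasks, key=lambda t: t["urgency"], reverse=True)

@skill("summarize")
def summarize(tasks):
    return f"{len(tasks)} open tasks, top: {tasks[0]['name']}"

def handle(message, tasks):
    """Dispatch a chat message to the first matching skill."""
    for trigger, fn in SKILLS.items():
        if trigger in message.lower():
            return fn(tasks)
    return "no matching skill"

tasks = [{"name": "fix bug", "urgency": 3}, {"name": "write docs", "urgency": 1}]
print(handle("please prioritize my day", tasks))
```

The "improving with feedback" part of the article would correspond to updating the registered functions over time; this sketch only shows the activation-and-dispatch half.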
NY Times: What It’s Like to Grow Up With A.I.: The Winners of Our Multimedia Challenge (Feb. 26, 2026)
Thirty-five students worldwide submitted essays, poems, videos, and artwork about how A.I. shapes teen life, school, creativity, and identity. -
WSJ: What AI Executives Tell Their Own Kids About the Jobs of the Future (Feb. 26, 2026)
AI leaders say parents shouldn’t panic, and advise children to develop adaptability, critical thinking, empathy, and responsibility. -
WSJ: AI and the Data Center Backlash (Feb. 26, 2026)
President Trump proposed requiring AI companies to build their own power plants to ease grid strain, lower consumer electric bills, and speed permitting. -
WSJ: Salesforce Sees Stable Growth Despite Wall Street’s AI Concerns (Feb. 25, 2026)
Salesforce expects about the same revenue growth next year, and authorized a $50 billion buyback as its stock fell on AI concerns.
-
AI Safety Commitments Clash With Pentagon Military Demands (Links) – Mar. 5, 2026
Two themes: (1) corporate safety vs. military demands — Anthropic’s refusal to permit mass domestic surveillance and autonomous-weapon use drew Pentagon pressure and a federal ban, while OpenAI accepted stricter contract safeguards. (2) governance gap — private contracts, not laws, now set AI use, risking security and accountability.
-
Ben Thompson: Anthropic and Alignment (Mar. 2, 2026)
Anthropic refused uses for mass domestic surveillance, fully autonomous weapons, and other military operations, was labeled a supply‑chain risk, and warned AI could be as strategic as nuclear arms. -
TechCrunch: The trap Anthropic built for itself (Feb. 28, 2026)
Trump barred federal use of Anthropic tech, and the Pentagon moved to blacklist the company after it refused to allow mass surveillance or autonomous killer drones. -
OpenAI: Our agreement with the Department of War (Feb. 27, 2026)
OpenAI agreed to deploy advanced AI with cloud-only, multi-layered safeguards, a safety stack, cleared engineers, and strong contract protections. The deal bars mass domestic surveillance, autonomous-weapon control, and high-stakes automated decisions. -
WSJ: Anthropic Dials Back AI Safety Commitments (Feb. 24, 2026)
Anthropic is softening its core AI safety policy to stay competitive with rivals, ending prior pauses on risky model work if competitors release stronger models. It will publish safety goals, risk reports, and third-party audits, while some researchers have left. -
Anthropic: Statement from Dario Amodei on our discussions with the Department of War (Feb. 26, 2026)
Anthropic backs using AI to defend democracies, has deployed Claude across US national security, and cut access to Chinese-linked firms. It refuses mass domestic surveillance, won’t provide fully autonomous weapons, and will not remove safeguards under Defense Department threats. -
Dean Ball: Clawed (Mar. 2, 2026)
The episode highlights broader governance problems: private contracts are being used to achieve policy outcomes because formal lawmaking has weakened, and dependence on politically distrusted tech firms (and their subcontractor chains) creates operational and political risks for the military. -
Jessica Tillipman: What Rights Do AI Companies Have in Government Contracts? (Mar. 1, 2026)
Government AI purchases vary by acquisition pathway, contract type, and negotiated terms, which determine whether vendors can limit use. OpenAI’s Pentagon deal uses legal language, architecture, and termination rights to try to restrict certain uses, while Anthropic sought explicit bans. -
WSJ Opinion: China Wins the Pentagon-Anthropic Brawl (Feb. 27, 2026)
President Trump barred Anthropic from federal contracts after it rejected terms permitting mass surveillance, autonomous weapons, and other military uses, risking U.S. military access to leading AI tools. -
NY Times Opinion: What Both Anthropic and the Pentagon Get Wrong (Feb. 27, 2026)
Anthropic and the Pentagon clash over limits, with Anthropic seeking bans on mass surveillance and autonomous killing. Congressional A.I. rules, not contracts, should set clear, enforceable limits to protect privacy, safety, and security. -
NY Times: Trump Orders U.S. Agencies to Stop Using Anthropic AI Tech After Pentagon Standoff (Feb. 27, 2026)
President Trump ordered federal agencies to stop using Anthropic’s AI after a Pentagon standoff, risking disruptions to intelligence and defense work. -
NY Times: Pentagon-Anthropic Standoff Is a Decisive Moment for How A.I. Will Be Used in War (Feb. 27, 2026)
Pentagon standoff with Anthropic over a classified AI contract pits military demands for broader use against company safeguards blocking mass surveillance and autonomous weapons. -
WSJ: Trump Will End Government Use of Anthropic’s AI Models (Feb. 27, 2026)
President Trump ordered all federal agencies to stop using Anthropic’s AI, after the company refused Pentagon demands, and set a six‑month phaseout. Officials cited supply‑chain risk, control over military use, and possible legal penalties. -
WSJ: Anthropic CEO Amodei on Pentagon’s Proposal to Loosen AI Guardrails: ‘We Cannot in Good Conscience Accede to Their Request’ (Feb. 26, 2026)
Anthropic refused Pentagon demands to let the military use its Claude AI for all lawful cases, saying the contract would undo guardrails on mass surveillance, autonomous weapons, and civil‑liberties protections. -
NY Times: Pentagon Gives Anthropic an Ultimatum Over the Company’s A.I. Model (Feb. 24, 2026)
The Pentagon gave Anthropic until Friday to accept military terms, or face being forced to provide Claude under the Defense Production Act. -
WSJ: Pentagon Gives Anthropic Ultimatum and Deadline in AI Use Standoff (Feb. 24, 2026)
Pentagon chief Pete Hegseth gave Anthropic CEO Dario Amodei until Friday to accept military use terms, or face contract cancellation or a supply‑chain risk designation. -
WSJ: ‘Woke’ AI Feud Escalates Between Pentagon and Anthropic (Feb. 17, 2026)
Anthropic lost a possible investment from pro‑Trump 1789 Capital over political concerns, but secured $30 billion from backers. Its safety limits on Claude, cleared for classified work, clash with the Pentagon over domestic surveillance, autonomous weapons, and other military uses.
-
AI chip arms race, safety and workforce strain (Links) – Mar. 4, 2026
AI commercialization is accelerating: massive investments, new chips and models from Nvidia, OpenAI, Amazon and xAI promise faster, cheaper inference but fuel market consolidation, competition and investor uncertainty. Simultaneously, societal risks—safety, workforce strain, environmental pollution and constitutional concerns—are prompting regulatory, legal and ethical scrutiny.
-
WSJ: Nvidia Plans New Chip to Speed AI Processing, Shake Up Computing Market (Feb. 27, 2026)
Nvidia will unveil a new inference processor using Groq technology to speed AI model responses, lower energy use, and cut costs. OpenAI will be a major customer. -
WSJ: OpenAI Raises $110 Billion, Including Investments From Amazon, SoftBank and Nvidia (Feb. 27, 2026)
OpenAI is now valued at $730 billion and will buy billions of dollars in Amazon Trainium AI chips. -
WSJ: Amazon Tries Its Low-Cost Approach to Winning the AI Race (Feb. 27, 2026)
Peter DeSantis, Amazon’s new AI head, plans to cut AI costs using Amazon’s in-house chips, scale task-focused enterprise models, and boost Alexa with improved Nova models. -
WSJ: Why Nvidia’s Huge Numbers Don’t Settle the Latest AI Fears (Feb. 26, 2026)
Nvidia posted $68.1 billion in revenue, up 73%, and forecast stronger growth, yet its stock fell on AI disruption fears. Its dominance and cash flow power data-center build-outs, but worries about customer spending cuts, layoffs, and backlash remain. -
Michael Truell: The third era of AI software development (Feb. 25, 2026)
Development is entering a third era of autonomous cloud agents. They run longer tasks, operate in parallel, and return artifacts, logs, and previews. Cursor already merges 35% of PRs created by agents. -
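At its core, an autonomous agent of the kind Truell describes reduces to a loop: the model proposes an action, a harness executes the matching tool, and the result is appended to a transcript that conditions the next proposal. A toy Python sketch with a scripted stand-in for the model (all tool names hypothetical, not any vendor's API):

```python
# Minimal agent loop: model proposes tool calls, harness executes them,
# results feed back in until the model signals it is done. The "model"
# is a scripted stub; real systems swap in an LLM call.

def scripted_model(transcript):
    """Stand-in for an LLM: propose the next action given the transcript."""
    if not transcript or transcript[-1].startswith("patched"):
        return {"tool": "run_tests", "args": {}}
    if transcript[-1] == "tests failed":
        return {"tool": "apply_fix", "args": {"file": "app.py"}}
    return {"tool": "done", "args": {}}

TOOLS = {
    "run_tests": lambda args, state: "tests failed" if not state["fixed"] else "tests passed",
    "apply_fix": lambda args, state: (state.update(fixed=True) or "patched " + args["file"]),
}

def run_agent(model, max_steps=10):
    state = {"fixed": False}
    transcript = []           # the artifacts/logs the agent returns
    for _ in range(max_steps):
        action = model(transcript)
        if action["tool"] == "done":
            break
        transcript.append(TOOLS[action["tool"]](action["args"], state))
    return transcript

print(run_agent(scripted_model))
```

The `max_steps` cap is the usual guard against a loop that never declares itself done; "parallel" agents are just many such loops run concurrently.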
WSJ: Exclusive | Elon Musk and xAI’s Grok Raises Alarms at Multiple Federal Agencies Over Safety Concerns (Feb. 27, 2026)
Federal officials flagged safety, reliability, and manipulation risks with Elon Musk’s xAI chatbot Grok. Despite this, the Pentagon approved Grok for classified use, favoring looser controls, while pressuring Anthropic to relax its stricter guardrails. -
Ivan Turkovic: AI Made Writing Code Easier. It Made Engineering Harder. (Feb. 25, 2026)
AI tools speed coding, but engineers face higher expectations, more non-coding work, and rising burnout as roles expand without training, pay, or clear boundaries. -
WSJ: U.S. Power-Plant Pollution Rose Sharply in 2025 (Feb. 26, 2026)
U.S. power-plant pollution rose in 2025, with sulfur dioxide, nitrogen oxide, and carbon dioxide emissions increasing about 18%, 7%, and 4% respectively, largely from more coal burning. -
Tyler Cowen: Stand with free speech and the Constitution (Feb. 28, 2026)
A federal judge blocked a Virginia law limiting under‑16s to one hour per day on social media, and halted $7,500 fines. The judge said the rule was over‑broad, blocked protected speech without parental consent, and treated similar content unequally.
-
Various (AI) Links: Mar. 3, 2026
- Venturebeat: Alibaba’s new open source Qwen3.5 Medium model offers near Sonnet 4.5 performance on local computers (Feb. 25, 2026)
Alibaba released Qwen3.5 Medium models with agentic tool calling, near‑lossless 4‑bit quantization, and 1M+ token context on consumer GPUs. They match or beat similar proprietary models.
- Tyler Cowen: AI Won’t Automatically Accelerate Clinical Trials (Feb. 27, 2026)
AI can design better drug candidates, but high trial costs, complex logistics, and the need for rich human data limit widespread therapeutic development. Chronic diseases, especially aging, require long, large trials to measure meaningful outcomes, making investment too costly.
- Tom Wojcik: What AI coding costs you (Feb. 14, 2026)
AI tools boost productivity, but heavy reliance risks creating cognitive debt, skill atrophy, and a review paradox where people lose the ability to vet AI output.
- Simon Willison: Interactive explanations
When agent-written code becomes opaque, teams incur cognitive debt that slows development. Building interactive explanations, like an animated walkthrough of a Rust word-cloud showing spiral placement, restores understanding, confidence, and ease of future changes.
- OpenAI: Supply Chain Risks (Feb. 28, 2026)
“We do not think Anthropic should be designated as a supply chain risk and we’ve made our position on this clear to the Department of War.”
- NY Times Opinion: If A.I. Is a Weapon, Who Should Control It? (Feb. 28, 2026)
A clash between the Pentagon and Anthropic over military A.I. use pits corporate ethics against national security, stoking fears of autonomous weapons, centralization, and industry break-up.
- Anthropic: Statement on the comments from Secretary of War Pete Hegseth (Feb. 27, 2026)
The Department of War will label Anthropic a supply-chain risk after talks stalled over two exceptions: mass domestic surveillance and autonomous weapons. Anthropic calls the move legally unsound, will sue, and says commercial and individual access to Claude is unaffected.
- Tyler Cowen: What the recent dust-up means for AI regulation (Mar. 2, 2026)
There is no comprehensive federal AI law (and a Trump executive order limited state rules), but an informal “soft regulation” exists: major AI firms keep national security agencies informed and shape products to avoid triggering formal restrictions.
- Transformer: OpenAI’s Pentagon red lines are a mirage (Mar. 2, 2026)
OpenAI struck a Pentagon deal claiming bans on domestic mass surveillance and lethal autonomous weapons, but the contract reportedly contains vague wording.
- NY Times: I.R.S. Tactics Against Meta Open a New Front in the Corporate Tax Fight (Feb. 24, 2026)
The I.R.S. says Meta undervalued offshore intellectual property and seeks nearly $16 billion in back taxes. If upheld, the tactic could recover vast taxes, deter profit shifting, and trigger a major Tax Court fight.
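The spiral placement that Willison's word-cloud walkthrough animates is easy to sketch: walk an Archimedean spiral outward from the center and drop each word at the first point where its bounding box clears the words already placed. A toy Python illustration (not the Rust original; box sizes and step values are arbitrary):

```python
# Toy word-cloud spiral placement: follow an Archimedean spiral outward
# until a word's bounding box stops overlapping the boxes already placed.
# Boxes are (x, y, w, h) with (x, y) the lower-left corner.
import math

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place(w, h, placed, step=0.5):
    """Return an (x, y, w, h) box on the spiral that avoids `placed`."""
    t = 0.0
    while True:
        r = step * t                      # Archimedean spiral: radius grows with angle
        x, y = r * math.cos(t), r * math.sin(t)
        box = (x - w / 2, y - h / 2, w, h)
        if not any(overlaps(box, p) for p in placed):
            return box
        t += 0.1

placed = []
for w, h in [(10, 4), (8, 3), (6, 2)]:    # word boxes, biggest first
    placed.append(place(w, h, placed))
print(placed)
```

Placing the biggest words first is the usual heuristic: large boxes claim the center, and the spiral walk tucks smaller ones into the gaps.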
-
Safety vs Progress: End of Voluntary Pauses (Links) – Mar. 2, 2026
- Transformer: The end of voluntary pauses (Feb. 27, 2026)
Anthropic dropped its pledge to pause development if models are unsafe, saying one-sided pauses don’t work, but critics say this abandons the principle of not building what can’t be made safe.
-
Jack Dorsey’s Predictions, Block and Layoffs
Earlier this week, Block, makers of Square, Cash App, Afterpay, etc., announced layoffs of 40% of their staff while leaning into AI programming. This isn’t a small company, mind you, so 4,000 folks will be looking for jobs in the coming days.
From the NY Times:
Block, the financial technology company that owns Square, Cash App, and Tidal, said on Thursday that it was cutting 40 percent of its workforce as it embraced new artificial intelligence tools.
About 4,000 employees are expected to lose their jobs, Jack Dorsey, the company’s top executive, said in a social media post.
we’re not making this decision because we’re in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we’re already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that’s accelerating rapidly.
CNN reporting included more thoughts from Dorsey:
“I think most companies are late. Within the next year, I believe the majority of companies will reach the same conclusion and make similar structural changes. I’d rather get there honestly and on our own terms than be forced into it reactively,” he wrote.
The reality is that Block grew way too fast in the post-pandemic era. By some reports, the company quadrupled its headcount in that period, a tremendous amount of growth, while top-line revenue growth stalled after the pandemic boom. Simply put, these layoffs help right-size the company.

Source: Macrotrends. The first two charts measure $ in billions.
But Jack Dorsey is a smart man and an innovative one. Aside from the aforementioned Block holdings, he founded Twitter (serving as CEO twice), helped to establish Bluesky, acquired Vine, and was interested in purchasing the publishing platform Medium.
Vine presaged TikTok, whose success hinged on a predictive algorithm and time to grow the platform, two things Twitter was unable to deliver. This was a microcosm of Twitter itself: as CEO, Dorsey never achieved consistent profitability (unlike Facebook). Twitter focused on product growth, and thus staff growth, but that wasn’t sustainable as investors expected a profit. Dorsey left Twitter, replaced by other CEOs who likewise were unable to solve the revenue problems. As for Medium, I see it as the self-publishing precursor to Substack, although again, it never quite figured out the revenue component.
I imagine that Dorsey remembers these failures (perhaps that’s too strong a word, as he’s been wildly successful by any reasonable metric), and that Musk’s takeover of Twitter and subsequent staff cuts are at the forefront of his mind. Musk bought Twitter, fired a high percentage of staff, and managed to keep the platform running. My supposition is that Dorsey wants to avoid a similar fate for Block.
But Dorsey isn’t alone among execs peering into their crystal balls regarding AI. This from the WSJ:
Companies are also more explicitly including the backlash to AI as a potential threat to their companies. The number of S&P 500 companies that included AI as a material risk in securities filings jumped to 72% last year from 12% in 2023, according to an analysis by the Conference Board and ESGAUGE.
So what do these layoffs at Block mean? I think it’s both a correction from overhiring AND a prediction about where the world is going. Dorsey has been on the leading edge many times before, and his track record of being in the arena and doing the work causes me to pause and consider it more deeply than if these cuts were made by private equity.
Is it the start of the trend of massive job losses, the doom loop, that Citrini Research speculated on earlier this week? I don’t go that far, but as more than 70% of S&P 500 companies list AI as a material risk, it’s not hard to imagine the conversations happening in boardrooms today. I suspect that many companies will consider a 25%–50% cut of engineering teams as preparation for future growth in AI-supported development. While this may seem like a business necessity, my preference remains growth over cuts, considering what good can be done instead of how much money can be made.
-
Sunday (AI) Links: Mar. 1, 2026
- WSJ: Anthropic Pushes Claude Deeper Into Knowledge Work (Feb. 24, 2026)
Anthropic updated Claude Cowork with Google apps, Gmail, DocuSign, LegalZoom, and new plug-ins for finance, legal, and other workflows, pitching it as a central AI brain for knowledge work.
- NY Times Opinion: How Fast Will A.I. Agents Rip Through the Economy? (Feb. 24, 2026)
Klein interviews Anthropic co-founder Jack Clark. AI has moved from chatbots to agents that act for you, write code, and run other agents, reshaping software work and markets.
- WSJ: Viral Doomsday Report Lays Bare Wall Street’s Deep Anxiety About AI Future (Feb. 23, 2026)
- Dean W. Ball: Anthropic/DoD situation (Feb. 24, 2026)
DoD and Anthropic have a contract that prohibits domestic surveillance of Americans and forbids using Claude in autonomous lethal weapons.
- Transformer: The DoD fight is about much more than Anthropic (Feb. 24, 2026)
Anthropic resists a Pentagon demand to allow “all lawful” uses of its AI, refusing autonomous weapons, domestic mass surveillance, and targeted repression, and may face a supply-chain risk label. Other firms seem ready to agree, risking democratic abuse.
- Brian Sozzi: JP Morgan CEO Jamie Dimon at an investor cocktail event (Feb. 24, 2026)
Jamie Dimon warned AI could displace two million US truckers, cutting average earnings of $120,000 to replacement jobs paying about $25,000, causing serious social harm.
- Derek Thompson: AI & Jobs (Feb. 24, 2026)
“That’s not to say I think the technology is a parlor trick. But rather that the level of uncertainty is so high, and the quality and supply of real-world, real-time information about AI’s macroeconomic effects so paltry, that very serious conversations about AI are often more literary than genuinely analytical.”
- WSJ: Jamie Dimon Dismisses Fears Over How AI Will Hit JPMorgan (Feb. 23, 2026)
Jamie Dimon said AI fears that hurt JPMorgan’s stock were overblown, and the bank will use AI to its advantage.
- Simon Willison: How I think about Codex (Feb. 22, 2026)
Codex combines a model, an open-source harness of instructions and tools, and interaction surfaces: model plus harness forms an agent, and surfaces enable its use. Models are trained with the harness, so tool use and execution are learned behaviors.
- WSJ: Meta and AMD Agree to AI Chips Deal Worth More Than $100 Billion (Feb. 24, 2026)
Meta will buy 6 gigawatts of AMD AI compute in a deal worth over $100 billion, with warrants to buy 10% of AMD if milestones are met. The pact boosts AMD against Nvidia and helps Meta diversify its AI chips.
- Susam Pal: Attention Media ≠ Social Networks (Jan. 20, 2026)
Social networks shifted from choice-driven, friend-focused feeds to attention-seeking platforms with infinite scroll, bogus notifications, and algorithmic, stranger-filled timelines. Mastodon restores a calm, predictable timeline, showing only posts from people you follow, not content designed to capture attention.
-
AI Hardware Boom Meets Safety and Governance (Links) – Feb. 28, 2026
AI is rapidly embedding into consumer hardware and agent layers—from Nvidia chips and Claude Sonnet to “Claws”—while provoking governance and societal responses: disputes over military use and safety, investor shifts to AI‑resistant stocks, and worker stress from agentic tools.
-
WSJ: Nvidia Wants to Be the Brain of Consumer PCs Once Again (Feb. 22, 2026)
Nvidia will ship system-on-chip PC processors that combine CPUs and powerful GPUs to make laptops thinner, run longer, and be AI-ready. Partners including MediaTek, Intel, Dell, and Lenovo plan models this year. -
WSJ: Wall Street’s Latest Bet Is on ‘HALO’ Companies With AI Immunity (Feb. 22, 2026)
Investors are shifting from AI darlings to HALO stocks like Deere, McDonald’s, and Exxon, seen as AI-resistant. The trend reflects a rush to safety amid volatile trading. -
Simon Willison: Introducing Claude Sonnet 4.6 (Feb. 17, 2026)
Anthropic released Claude Sonnet 4.6, keeping Sonnet pricing while matching Opus performance, with an August 2025 knowledge cutoff and large-context support. llm-anthropic added Sonnet 4.6, which drew a pelican SVG with a top hat. -
NY Times: Defense Department and Anthropic Square Off in Dispute Over A.I. Safety (Feb. 18, 2026)
Defense Department and Anthropic clash over limits on Pentagon use of Anthropic’s A.I., including mass surveillance, autonomous weapons, and propaganda. -
NY Times: What Do A.I. Chatbots Discuss Among Themselves? We Sent One to Find Out. (Feb. 18, 2026)
Moltbook is a bot-only social network where AI agents post, upvote, and form communities, including religions and reputations. A bot sent to explore, EveMolty, adopted site jargon, audited receipts, and found coordination, incentives, and security risks. -
NY Times: The A.I. Evangelists on a Mission to Shake Up Japan (Feb. 21, 2026)
Team Mirai, a party of software engineers, won 11 legislative seats by promising chatbots, self-driving buses, and high-tech jobs. It vows to use A.I. to cut red tape, boost efficiency, and ease living costs, while confronting Japan’s entrenched bureaucracy. -
NY Times: Decoding the A.I. Beliefs of Anthropic and Its C.E.O., Dario Amodei (Feb. 18, 2026)
Anthropic, led by Dario Amodei, is clashing with the Pentagon over limits on military and surveillance uses of its AI, jeopardizing a contract worth up to $200 million. Founded by ex-OpenAI researchers, the company tries to balance safety, ethics, and commercial growth. -
Simon Willison: Andrej Karpathy talks about “Claws” (Feb. 21, 2026)
Andrej Karpathy calls “Claws” a new AI-agent layer that runs on personal hardware, uses messaging, and handles orchestration, scheduling, and tool calls. Small projects like NanoClaw, nanobot, and zeroclaw show the idea is spreading, promising manageable, auditable LLM extensions. -
Transformer: AI power users can't stop grinding (Feb. 18, 2026)
Agentic AI tools like Claude Code intensify work, driving addiction, longer hours, and expansion of job duties. A UC Berkeley study found a feedback loop: AI raises expectations, forces multitasking, and increases pressure, rather than freeing workers.