Bad (Uses of) AI

From MIT Technology Review: People are using AI to ‘sit’ with them while they trip on psychedelics. “Some people believe chatbots like ChatGPT can provide an affordable alternative to in-person psychedelic-assisted therapy. Many experts say it’s a bad idea.” I’d like to hear from the experts who say this is a good idea.

Above the Law: Trial Court Decides Case Based On AI-Hallucinated Caselaw. “Shahid v. Esaam, out of the Georgia Court of Appeals, involved a final judgment and decree of divorce served by publication. When the wife objected to the judgment based on improper service, the husband’s brief included two fake cases.” From the appellate court: “As noted above, the irregularities in these filings suggest that they were drafted using generative AI.”

Futurism: People Are Being Involuntarily Committed, Jailed After Spiraling Into “ChatGPT Psychosis”. “At the core of the issue seems to be that ChatGPT, which is powered by a large language model (LLM), is deeply prone to agreeing with users and telling them what they want to hear.”

“What I think is so fascinating about this is how willing people are to put their trust in these chatbots in a way that they probably, or arguably, wouldn’t with a human being,” Pierre said. “And yet, there’s something about these things — it has this sort of mythology that they’re reliable and better than talking to people. And I think that’s where part of the danger is: how much faith we put into these machines.”

The Register: AI agents get office tasks wrong around 70% of the time, and a lot of them aren’t AI at all. “IT consultancy Gartner predicts that more than 40 percent of agentic AI projects will be cancelled by the end of 2027 due to rising costs, unclear business value, or insufficient risk controls.” Gartner further notes that most products marketed as agentic “AI” aren’t actually AI.