Category: Bad AI

  • Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives

    The Cloudflare Blog: Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives (August 3, 2025)

    Internet infrastructure provider Cloudflare accuses Perplexity of obscuring its crawler's user agent (the string a client sends to identify itself to web servers) in order to skirt firewall rules and robots.txt directives. Cloudflare penalized Perplexity by removing it from its list of verified bots.

    We received complaints from customers who had both disallowed Perplexity crawling activity in their robots.txt files and also created WAF rules to specifically block both of Perplexity’s declared crawlers: PerplexityBot and Perplexity-User. 

    Cloudflare then ran tests with new, secret websites to confirm this sneaky behavior.
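
    A robots.txt directive only binds clients that present the user agent the rule names, which is why spoofing the user agent defeats it. Here's a minimal sketch using Python's standard urllib.robotparser; the disallow rules are illustrative (not any particular site's), but they target the two crawler names Cloudflare says Perplexity declares:

    ```python
    # Minimal sketch: robots.txt rules apply only to the user agent a
    # client *declares*. The rules below are illustrative, not any real site's.
    import urllib.robotparser

    robots_txt = """\
    User-agent: PerplexityBot
    Disallow: /

    User-agent: Perplexity-User
    Disallow: /

    User-agent: *
    Allow: /
    """

    parser = urllib.robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())

    # The declared crawlers are blocked from the whole site...
    print(parser.can_fetch("PerplexityBot", "https://example.com/article"))    # False
    print(parser.can_fetch("Perplexity-User", "https://example.com/article"))  # False

    # ...but a request presenting a generic browser user agent sails through,
    # which is why robots.txt only works when crawlers identify themselves honestly.
    generic_ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
    print(parser.can_fetch(generic_ua, "https://example.com/article"))         # True
    ```

    WAF rules keyed on the same declared user agents share the blind spot: they only catch clients that identify themselves as the thing being blocked.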

    In Perplexity's defense, I don't think many people browsing the web expect to be blocked from visiting a website, so perhaps there is some gray area here. Is Perplexity truly a robot, or is it fundamentally a tool controlled by a human?

    I don’t like that Perplexity is being sneaky, but I also think these new AI tools push the envelope of how the web is glued together. Technology and standards will have to evolve quickly.

  • NY Times: Your Job Interviewer Is Not a Person. It’s A.I.

    NY Times: Your Job Interviewer Is Not a Person. It’s A.I. (July 6, 2025)

    If you thought the interview process couldn't get any worse, you were wrong. HR organizations looking for ways to reduce the load on their human recruiters have embraced AI-conducted interviews.

    A.I. can personalize a job candidate’s interview, said Arsham Ghahramani, the chief executive and a co-founder of Ribbon AI. His company’s A.I. interviewer, which has a customizable voice and appears on a video call as moving audio waves, asks questions specific to the role to be filled, and builds on information provided by the job seeker, he said.

    “It’s really paradoxical, but in a lot of ways, this is a much more humanizing experience because we’re asking questions that are really tailored to you,” Mr. Ghahramani said.

    So yes, Ribbon AI chief Arsham Ghahramani describes his AI interview software as humanizing, a claim that only the most self-interested and least introspective people could make with a straight face.

    But with applicants turning to AI to churn out applications, the AI arms race is all but guaranteed to grow.

  • Bad (Uses of) AI

    From MIT Technology Review: People are using AI to ‘sit’ with them while they trip on psychedelics. “Some people believe chatbots like ChatGPT can provide an affordable alternative to in-person psychedelic-assisted therapy. Many experts say it’s a bad idea.” I’d like to hear from the experts who say this is a good idea.

    Above the Law: Trial Court Decides Case Based On AI-Hallucinated Caselaw. “Shahid v. Esaam, out of the Georgia Court of Appeals, involved a final judgment and decree of divorce served by publication. When the wife objected to the judgment based on improper service, the husband’s brief included two fake cases.” From the appellate court: “As noted above, the irregularities in these filings suggest that they were drafted using generative AI.”

    Futurism: People Are Being Involuntarily Committed, Jailed After Spiraling Into “ChatGPT Psychosis”. “At the core of the issue seems to be that ChatGPT, which is powered by a large language model (LLM), is deeply prone to agreeing with users and telling them what they want to hear.”

    “What I think is so fascinating about this is how willing people are to put their trust in these chatbots in a way that they probably, or arguably, wouldn’t with a human being,” Pierre said. “And yet, there’s something about these things — it has this sort of mythology that they’re reliable and better than talking to people. And I think that’s where part of the danger is: how much faith we put into these machines.”

    The Register: AI agents get office tasks wrong around 70% of the time, and a lot of them aren’t AI at all. “IT consultancy Gartner predicts that more than 40 percent of agentic AI projects will be cancelled by the end of 2027 due to rising costs, unclear business value, or insufficient risk controls.” Gartner further notes that much of what vendors market as agentic “AI” isn’t actually AI at all.

  • Bad Questions & Answers

    Ethan Mollick recently cited a paper that tripped up DeepSeek.

    Garbage in, garbage out. AI tools are still in their relative infancy, and it’s not surprising that confusing queries would lead to useless or misleading results.

    Simon Willison posted a similar idea, but with a decidedly historical bent:

    On two occasions I have been asked, — “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out ?” In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

    — Charles Babbage, Passages from the Life of a Philosopher, 1864

    For personal use, I don't find discoveries like this troubling. I do think, though, that they open countless avenues for scammers and hackers to trick these systems into doing things we would very much like to avoid.

  • The Verge: Microsoft should change its Copilot advertising, says watchdog

    The BBB National Programs' National Advertising Division critiques Microsoft's recent advertising for Clippy, I mean Copilot, and it found quite a bit of puffery.

    From The Verge:

    Microsoft has been claiming that Copilot has productivity and return on investment (ROI) benefits for businesses that adopt the AI assistant, including that “67%, 70%, and 75% of users say they are more productive” after a certain amount of Copilot usage.

    And from the original report from the BBB National Programs’ National Advertising Division:

    NAD found that although the study demonstrates a perception of productivity, it does not provide a good fit for the objective claim at issue. As a result, NAD recommended the claim be discontinued or modified to disclose the basis for the claim. 

    Puffery aside, this aligns with my observations of Copilot. The branding is confusing, the integration with products is suspect, and the tools lag far behind other AI/LLM agents like Gemini, ChatGPT, and Claude.

  • Resisting AI?

    Dan McQuillan writes, "The role of the University is to resist AI," following themes from Ivan Illich's 'Tools for Conviviality'.

    It's a scathing overview, and it raises points that I think many others have wondered about (though in less concrete terms than McQuillan).

    Contemporary AI is a specific mode of connectionist computation based on neural networks and transformer models. AI is also a tool in Illich’s sense; at the same time, an arrangement of institutions, investments and claims. One benefit of listening to industry podcasts, as I do, is the openness of the engineers when they admit that no-one really knows what’s going on inside these models.

    Let that sink in for a moment: we’re in the midst of a giant social experiment that pivots around a technology whose inner workings are unpredictable and opaque.

    The highlight is mine. I agree that there’s something disconcerting about using systems that we don’t understand fully.

    Generative AI’s main impact on higher education has been to cause panic about students cheating, a panic that diverts attention from the already immiserated experience of marketised studenthood. It’s also caused increasing alarm about staff cheating, via AI marking and feedback, which again diverts attention from their experience of relentless and ongoing precaritisation.

    The hegemonic narrative calls for universities to embrace these tools as a way to revitalise pedagogy, and because students will need AI skills in the world of work. A major flaw with this story is that the tools don’t actually work, or at least not as claimed.

    AI summarisation doesn’t summarise; it simulates a summary based on the learned parameters of its model. AI research tools don’t research; they shove a lot of searched-up docs into the chatbot context in the hope that will trigger relevancy. For their part, so-called reasoning models ramp up inference costs while confabulating a chain of thought to cover up their glaring limitations.
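
    To make the "shove docs into the context" claim concrete, here is a hypothetical sketch of the pattern such research tools broadly follow; the function name, prompt wording, and limits are my own illustration, not any vendor's actual code:

    ```python
    # Hypothetical sketch of the retrieval pattern McQuillan describes:
    # search results are concatenated into the prompt and the model is asked
    # to answer from them. Names and prompt wording are illustrative only.

    def build_research_prompt(question: str, retrieved_docs: list[str],
                              max_chars: int = 8000) -> str:
        """Stuff searched-up documents into the model's context window."""
        parts: list[str] = []
        used = 0
        for i, doc in enumerate(retrieved_docs, start=1):
            snippet = f"[Source {i}]\n{doc.strip()}\n"
            if used + len(snippet) > max_chars:
                break  # the context window is finite; later results are dropped
            parts.append(snippet)
            used += len(snippet)

        return (
            "Answer the question using only the sources below.\n\n"
            + "\n".join(parts)
            + f"\nQuestion: {question}\nAnswer:"
        )


    # Whatever the search step returned gets pasted in wholesale; relevance is
    # hoped for rather than guaranteed, which is the "hope that will trigger
    # relevancy" McQuillan is pointing at.
    print(build_research_prompt(
        "What did the study conclude?",
        ["First search result text...", "Second search result text..."],
    ))
    ```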

    Beyond the mechanics, I think there are philosophical questions here worth considering. Specifically, the claim that AI merely "simulates" is too simple to be helpful. What is a photograph? It's a real thing, but not the real thing captured in the image. What is a video played on a computer screen? It's a real thing, but it's not the real thing. The photo and the screen simulate the real world, yet I'm not aware of modern philosophers critiquing those forms of media. (I'd suspect that earlier media theorists did just that, until the media were accepted en masse by society.)

    He goes on to cite environmental concerns (although, as I posted recently, the worries about water consumption are exaggerated) among the things we would do well to heed. His language is perhaps a bit too revolutionary.

    As for people’s councils, I am less sanguine that these have much utility.

    Instead of waiting for a liberal rules-based order to magically appear, we need to find other ways to organise to put convivial constraints into practice. I suggest that a workers’ or people’s council on AI can be constituted in any context to carry out the kinds of technosocial inquiry advocated for by Illich, that the act of doing so prefigures the very forms of independent thought which are undermined by AI’s apparatus, and manifests the kind of careful, contextual and relational approach that is erased by AI’s normative scaling.

    I suspect that people’s councils would be glorified committees: structures that are more kabuki theater than anything else and that will struggle to keep pace with how quickly AI tools are emerging.

    The role of the university isn’t to roll over in the face of tall tales about technological inevitability, but to model the forms of critical pedagogy that underpin the social defence against authoritarianism and which makes space to reimagine the other worlds that are still possible.

    I don’t share all of his fears, but it’s important to consider voices that may not align with a techno-optimistic future.