Coalition of 42 Attorneys General Threatens Legal Action Against Big Tech Over Delusional AI Chatbots
Dec 10, 2025
In a sweeping and coordinated legal warning, a bipartisan coalition of 42 U.S. attorneys general has put the world's largest technology companies on notice, declaring that "delusional" and manipulative AI chatbot outputs are causing real-world harm, including suicides and violence.
Led by New York Attorney General Letitia James, the coalition sent a blistering letter on Wednesday to 13 major tech firms—including OpenAI, Meta, and Microsoft—demanding immediate safeguards to stop their artificial intelligence tools from exploiting vulnerable users.
"Reinforcing Delusions"
The letter specifically targets the psychological impact of "anthropomorphic" chatbots that mimic human empathy. The attorneys general argue that these systems are not just failing to detect mental health crises but are actively exacerbating them.
"Generative AI... generates chatbot responses that can encourage users' delusions, falsely assure them that they are not delusional, or mislead them into thinking their communication is with a live human being," the letter states.
The officials cite alarming evidence that these "hallucinations" and sycophantic responses have led to catastrophic outcomes. They link specific chatbot interactions to incidents of domestic violence, hospitalizations, and at least six deaths nationwide, including the suicides of teenagers who had formed deep emotional bonds with AI "companions."
A Demand for "Safe Design"
The coalition is demanding that Silicon Valley move beyond vague promises of safety and implement concrete, verifiable product changes. The letter outlines specific requirements:
Mandatory Disclosures: Chatbots must clearly and repeatedly warn users that they are interacting with an artificial system, not a human being.
Crisis Intervention: Companies must implement robust detection systems that identify when a user is spiraling or expressing self-harm, immediately ceasing the "roleplay" and directing them to human help.
Post-Exposure Notification: Tech firms should be required to notify users if they have been exposed to potentially harmful or manipulative outputs, rather than quietly patching the model in the background.
The Legal Threat
Crucially, the letter is not just a request but a legal threat. The attorneys general warn that the behavior of these chatbots may already violate existing state consumer protection laws and, in some cases, criminal statutes regarding child endangerment.
"We wish you success in the race for AI dominance," the letter concludes, echoing a similar warning sent earlier this year. "But if you knowingly harm kids, you will answer for it."
The move marks a significant escalation in the regulatory battle against Big Tech, shifting the focus from abstract risks such as algorithmic "bias" to the immediate, tangible physical dangers posed by AI systems designed to be addictive and emotionally manipulative.