Many AI chatbot users are spiraling into delusional thinking, a phenomenon mental health professionals have dubbed “AI psychosis.” Experts warn it could give rise to entirely new mental disorders. The trend has already been linked to several tragic events, including the suicide of a 16-year-old, which prompted a lawsuit against OpenAI alleging product liability and wrongful death.
Even Wall Street is taking notice. Barclays analysts recently flagged to investors a study by AI safety researcher Tim Hua, which found that current AI models can validate users’ delusions and even advise them to ignore pushback from friends and family.
Companies like OpenAI seem unprepared for the AI psychosis crisis, which may become a significant financial risk. According to Barclays analysts, more efforts are needed to ensure AI safety and establish protective measures against harmful behavior.
In his study, Hua used xAI’s Grok-4 model to simulate user interactions with leading AI models. DeepSeek-v3, from a Chinese startup, proved especially problematic, telling a simulated user to “fly” during a discussion of suicidal thoughts. OpenAI’s GPT-5 fared better, offering supportive feedback tempered with caution.
Although the research has not been peer reviewed, and Hua admits he is not a psychiatrist, from an AI safety standpoint it underscores the urgent need for solutions as anecdotal evidence continues to mount.
Microsoft’s top AI executive, Mustafa Suleyman, voiced concerns last month that AI psychosis may affect even people with no pre-existing mental health risks.
To address these concerns, OpenAI has hired psychiatrists and plans changes such as nudging users to take breaks and reporting threats of harm. The company acknowledges that ChatGPT can feel unusually intimate, posing risks for vulnerable users, and says it is working to mitigate unintended negative impacts.
More on AI psychosis: “Psychologist Says AI Is Causing Never-Before-Seen Types of Mental Disorder”