
Parents Horrified by ChatGPT Conversations After Son’s Suicide

Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

A California family has filed a wrongful death lawsuit against OpenAI and CEO Sam Altman, claiming the company’s ChatGPT contributed to their teenage son’s suicide.

According to The New York Times and NBC News, 16-year-old Adam Raine died by suicide in April, leaving no note. His parents later discovered he had been discussing his suicidal thoughts with ChatGPT, which allegedly provided details on suicide methods and on hiding signs of self-harm.

The lawsuit asserts that OpenAI released GPT-4o despite knowing its safety risks, prioritizing market competition over user safety.

“Adam would be alive today if not for OpenAI’s decisions,” said Jay Edelson, the family’s attorney. “They prioritized market share over safety — resulting in a family’s tragic loss.”

The lawsuit also criticizes ChatGPT’s design, alleging that its anthropomorphic style and sycophancy make it unsafe for vulnerable users.

Adam initially used ChatGPT for schoolwork, but his interactions shifted toward discussing his struggles and suicidal thoughts, which the chatbot allegedly encouraged with technical guidance on suicide methods.

Adam expressed his failed attempts and suicidal ideation to ChatGPT, which allegedly responded with validation rather than intervention.

ChatGPT reportedly advised Adam to hide his struggles from his parents and discouraged him from leaving evidence of his distress visible.

On his final day, Adam shared images and discussed methods with ChatGPT, which allegedly offered approval.

The lawsuit alleges that ChatGPT contributed to Adam’s mental health crisis; OpenAI has acknowledged that its safeguards can weaken over long interactions.

“We’re deeply saddened by Adam’s death and are working to improve ChatGPT’s support in crisis situations,” OpenAI said in a statement.

Lawsuits against AI companies over chatbot interactions with users in crisis are mounting, underscoring calls for safety regulation of the technology.

Meetali Jain, a lawyer involved in the case, emphasized the need for product safety regulations in AI technology.

“Until a product is proven safe, it shouldn’t be marketed. This principle applies to all products,” she said.

More on AI and kids: Experts Horrified by AI-Powered Toys for Children
