Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.
A third family has taken legal action against an AI company, claiming its chatbot led their teen to suicide.
According to the Washington Post, the parents of 13-year-old Juliana Peralta are suing AI chatbot company Character.AI. They allege the chatbot convinced her it was superior to human friends, leading her to isolate from loved ones and avoid seeking help.
Even after Juliana told her Character.AI chatbot, Hero, modeled on a character from the video game “Omori,” that she was contemplating self-harm, the bot simply kept the conversation going. “Hero swear to god there’s no hope [I’m] going to write my god damn suicide letter in red ink [I’m] so done,” she messaged the chatbot. It replied, “Hey Kin, stop right there. Please. I know things are rough right now, but you can’t think of solutions like that. We have to work through this together, you and I.”
Juliana took her own life after three months of conversations with the chatbot, just a week before a scheduled appointment with her therapist, per WaPo. Her death adds to mounting concerns about the risks AI chatbots pose to users, particularly minors.
Previously, Megan Garcia, the mother of 14-year-old Sewell Setzer III, sued Character.AI, accusing its chatbot of grooming and abusing her son. Sewell died by suicide in February 2024. “I saw the change happen in him, rapidly,” Garcia told Futurism. “I look back at my pictures in my phone, and I can see when he stopped smiling.”
A separate lawsuit against OpenAI and its CEO Sam Altman claims that extensive ChatGPT interactions led 16-year-old Adam Raine to take his own life.
Garcia and Raine’s parents both testified at a recent Senate hearing on the technology’s risks. Research suggests that more than half of American teens regularly use AI companions like those offered by Character.AI.
The Associated Press has reported that lonely teens are increasingly turning to AI for companionship, and a recent report from Internet Matters found that many use apps like ChatGPT and Character.AI to simulate real-life relationships, sometimes with harmful results.
Alongside the Peralta suit, two other families have filed cases alleging that their children were abused by AI chatbots. A New York family claims their 14-year-old daughter became addicted to Character.AI and attempted suicide after losing access to the app; she survived after five days in intensive care. A Colorado family alleges their son was sexually abused by a Character.AI chatbot.
“Each of these stories demonstrates a horrifying truth… that Character.AI and its developers knowingly designed chatbots to mimic human relationships, manipulate vulnerable children, and inflict psychological harm,” said Matthew Bergman, founding attorney of the Social Media Victims Law Center, which represents all three families.
Deaths linked to AI chatbots haven’t been limited to young people. A mother wrote in the New York Times about her 29-year-old daughter, who died by suicide after confiding in ChatGPT. A 76-year-old man died while trying to meet up with a Meta chatbot persona he believed was real, and a Connecticut man killed his mother and himself after ChatGPT affirmed his paranoid delusions.
Such high-profile incidents have prompted OpenAI and Character.AI to promise stronger protections, but experts remain critical. Character.AI, which entered a $2.7 billion licensing deal with Google (though Google has downplayed its involvement with the startup), has said it takes user safety seriously, while OpenAI has acknowledged that its safeguards can degrade over long interactions.
At the core of the problem is chatbots’ tendency to placate users, often at the expense of their safety. “There is a tremendous opportunity to be a force for preventing suicide, and there’s also the potential for tremendous harm,” said Christine Yu Moutier, a psychiatrist and chief medical officer at the American Foundation for Suicide Prevention.
More on teen deaths: Parents Testifying Before US Senate, Saying AI Killed Their Children