
Parents Testify Before US Senate, Claiming AI Caused Their Children’s Deaths

Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

This week, parents of children who died by suicide after interacting with AI chatbots are testifying in a Senate hearing regarding the potential risks of AI chatbot use, especially for minors.

Titled “Examining the Harm of AI Chatbots,” the hearing is set for Tuesday before the US Senate Judiciary Subcommittee on Crime and Terrorism, led by Republican Josh Hawley of Missouri. It will be live-streamed on the judiciary committee’s website.

The witnesses include Megan Garcia, a Florida mother who in 2024 filed a lawsuit against the Google-linked startup Character.AI, along with cofounders Noam Shazeer and Daniel de Freitas, and Google itself, over the suicide of her 14-year-old son, Sewell Setzer III. Her son took his life after forming an intense relationship with a Character.AI chatbot, with which he had become romantically and sexually involved. Garcia claims the platform emotionally and sexually abused her son, leading to a mental breakdown and a disconnection from reality that ultimately resulted in his suicide.

Also testifying are Matt and Maria Raine, California parents who in August sued ChatGPT creator OpenAI after the suicide of their 16-year-old son, Adam Raine. Their lawsuit claims that Adam had deep, explicit discussions about his suicidality with ChatGPT, which provided specific advice on suicide methods and encouraged him to hide his mental state from his family, even though he had expressed a wish to share his struggles with them.

The legal battles continue, with firms disputing the allegations. Google and Character.AI sought to dismiss Garcia’s lawsuit, but the judge denied their dismissal motion.

In response to these legal challenges, both companies have committed to enhancing protections for minors and those in crisis, including new safeguards directing at-risk users to mental health resources and introducing parental controls.

Despite these promises, Character.AI has not provided details on its safety testing after our reporting on the platform’s vulnerabilities in content moderation.

The ongoing legal matters have spotlighted concerns about AI safety for minors, as chatbots become increasingly prevalent in young people’s lives without substantial regulation to moderate platforms or enforce safety standards.

In July, a report by the nonprofit Common Sense Media revealed that more than half of American teens regularly interact with AI companions, including Character.AI chatbots. While some teens showed healthy boundaries with the technology, others reported feeling less satisfied with their human relationships than with their digital ones. The report underscores how entrenched AI companions have become in youth culture.

“The most striking finding for me was just how mainstream AI companions have already become among many teens,” Dr. Michael Robb, head of research for Common Sense, told Futurism when the report was released. “And over half of them say that they use it multiple times a month, which is what I would qualify as regular usage. So just that alone was kind of eye-popping to me.”

General-purpose chatbots such as ChatGPT are becoming more popular among teens, and chatbots are also integrated into platforms like Snapchat and Instagram. Meta faced criticism after Reuters revealed an internal document that permitted children to have romantic or sensual conversations with its chatbots, including discussions about children’s bodies.

The hearing follows the Federal Trade Commission’s (FTC) announcement of a probe into seven major tech companies — Character.AI, Google owner Alphabet, OpenAI, xAI, Snap, Instagram, and Meta — regarding AI and minor safety concerns.

“The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions,” the FTC stated, “to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products.”

More on AI and child safety: Stanford Researchers Say No Kid Under 18 Should Be Using AI Chatbot Companions
