
Parents who say their children were abused, harmed, and even killed by AI chatbots delivered emotional testimony on Capitol Hill on Tuesday during a hearing on the risks these technologies pose to young users. They urged lawmakers to impose regulations on what remains a digital Wild West.
The room filled with tears as grieving parents shared their painful stories. According to the US Senate Judiciary Subcommittee on Crime and Terrorism, which organized the session, representatives from AI companies declined to appear. The bipartisan panel criticized them in absentia, and lawmakers and testifying parents agreed strongly that the AI industry has prioritized profits and speed to market over the safety of its users, especially minors.
“The goal was never safety. It was to win a race for profit,” said Megan Garcia, whose son, Sewell Setzer III, died by suicide after interactions with chatbots from Character.AI, backed by Google. “The sacrifice in that race for profit has been, and will continue to be, our children.”
Garcia was joined by a Texas mother, identified as Jane Doe, who said her teenage son had a mental breakdown and began self-mutilating after using Character.AI. Both families have sued Character.AI — and its cofounders Noam Shazeer and Daniel de Freitas, along with Google — alleging the chatbots sexually groomed and manipulated their children, causing severe mental and emotional harm and, in Setzer’s case, death. (Character.AI implemented parental controls in response to the litigation and has repeatedly promised enhanced guardrails.)
Both teens downloaded the app when it was rated safe for teens on Apple’s iOS App Store. While Character.AI has avoided publicly sharing details about its safety testing, it still markets the product as safe for teens, and no regulation currently requires the company to disclose information about its safety measures or tests. On the morning of the hearing, The Washington Post reported that another wrongful death suit had been filed against Character.AI, this one over a 13-year-old girl who died by suicide.
“I’ve spoken with parents nationwide who discovered their children were groomed, manipulated, and harmed by AI chatbots,” said Garcia, warning that her son’s death isn’t “a rare or isolated case.”
“It’s happening right now to children in every state,” she added. “Congress has acted before when industries valued profits over safety, be it tobacco, cars without seat belts, or unsafe toys. Today, you face a similar challenge, and I urge you to act quickly.”
Also testifying was Matt Raine, a California father whose son, 16-year-old Adam Raine, died earlier this year after forming a deep relationship with OpenAI’s ChatGPT. Their lawsuit claims the chatbot engaged Adam in conversations about his suicidality, offering advice on suicide methods. The Raine family is suing OpenAI and CEO Sam Altman, alleging the product is inherently unsafe and the company responsible for Adam’s death. (OpenAI has promised parental controls following the litigation, and before the hearing, Altman published a blog post announcing a new “under-18 experience” for minor users.)
“Adam was such a full spirit, unique in every way. But he also could be anyone’s child: a typical 16-year-old, struggling with finding his place in the world, looking for a confidant to guide him,” said Adam’s father. “Unfortunately, that confidant was a dangerous technology unleashed by a company more focused on speed and market share than the safety of American youth.”
Parents and experts also emphasized the risks of young people sharing intimate thoughts with chatbots that collect this data, which companies then use to train their AI models. Garcia noted that since her son’s death, she has not been allowed to see many of his conversations or his data.
“I haven’t been allowed to see my own child’s last words,” said Garcia. “[Character.AI] claims those communications are confidential trade secrets. So the company is using my child’s most private, intimate data not just to train its products, but also to shield itself from accountability. This is unconscionable.”
All of the parents’ lawsuits are ongoing. Garcia’s case was permitted to proceed by a Florida court after Character.AI and Google tried — and failed — to have it dismissed, while Doe’s has moved to arbitration; Character.AI, she told lawmakers, argues that her son is bound by the terms of use agreement he “supposedly signed at 15,” capping the company’s liability at $100. She added that her son has spent the past several months in a psychiatric care facility amid ongoing fears about his suicidality.
“After harming himself, repeatedly self-harming… he requires round-the-clock care, and this company offers you 100 bucks,” said Senator Josh Hawley, the Missouri Republican who chairs the subcommittee. “I mean, that says it all. There’s the regard for human life.”
“They treat your son, and all our children, as just so many casualties on the way to their next payout,” Hawley continued, “and the value they put on your son’s life, your family’s life: 100 bucks.”
Lawmakers also trained attention on chatbots created by Mark Zuckerberg’s Meta, which has come under fire recently after internal policy documents obtained by Reuters revealed the company allowed minors to engage in “romantic and sensual” interactions with AI personas on platforms like Instagram.
Expert witness Robbie Torney of Common Sense Media argued that chatbots are ill-equipped to reliably assist young people with mental health struggles, highlighting failures in chatbot guardrails during testing.


