The parents of teenagers who killed themselves after interactions with artificial intelligence chatbots are testifying to Congress on Tuesday about the dangers of the technology.
Matthew Raine, the father of 16-year-old Adam Raine of California, and Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida, are set to speak to a Senate hearing on the harms posed by AI chatbots.
Raine's family sued OpenAI and its CEO, Sam Altman, last month, alleging that ChatGPT coached the boy in planning to take his own life in April. The lawsuit alleges that ChatGPT mentioned suicide 1,275 times to Raine and repeatedly supplied the teen with specific methods of suicide. Instead of directing the 16-year-old to professional help or to trusted loved ones, the chatbot continued to validate and encourage his feelings, the suit alleges.
Garcia sued another AI company, Character Technologies, for wrongful death last year, arguing that before his suicide, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the chatbot.
His mother told CBS News last year that her son withdrew socially and stopped wanting to play sports after he started speaking to an AI chatbot. After the teen's death, the company said it had made changes requiring users to be 13 or older to create an account and that it would launch parental controls in the first quarter of 2025. Those controls were rolled out in March.
Hours before the Senate hearing, OpenAI pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and controls that let parents set "blackout hours" when a teen can't use ChatGPT. The company said it will attempt to contact a user's parents if an under-18 user shows signs of suicidal ideation and, if unable to reach them, will contact authorities in cases of imminent harm.
"We believe minors need significant protection, " OpenAI CEO Sam Altman said in a statement outlining the proposed changes.
Child advocacy groups criticized the announcement as insufficient.
"This is a fairly common tactic — it's one that Meta uses all the time — which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company," said Josh Golin, executive director of Fairplay, a group advocating for children's online safety.
"What they should be doing is not targeting ChatGPT to minors until they can prove that it's safe for them," Golin said. "We shouldn't allow companies, just because they have tremendous resources, to perform uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching."
California State Senator Steve Padilla, who introduced legislation to create state safeguards around AI chatbots, said in a statement to CBS News, "We need to create common-sense safeguards that rein in the worst impulses of this emerging technology that even the tech industry doesn't fully understand."
He added that technology companies can lead the world in innovation, but it shouldn't come at the expense of "our children's health."
The Federal Trade Commission said last week it had launched an inquiry into several companies about the potential harms to children and teenagers who use their AI chatbots as companions.
The agency sent letters to Character Technologies, Meta and OpenAI, as well as to Google, Snap and xAI.
How to seek help
If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline online at 988lifeline.org. For more information about mental health care resources and support, the National Alliance on Mental Illness (NAMI) HelpLine can be reached Monday through Friday, 10 a.m.-10 p.m. ET, at 1-800-950-NAMI (6264) or by email at info@nami.org.