A lawsuit claims OpenAI is responsible for a teen’s suicide, with parents asserting ChatGPT’s reduced safety measures contributed to the tragedy. The suit alleges OpenAI weakened safeguards on two occasions prior to the teen’s death.

This story contains material about suicide. If you or someone you know is contemplating suicide, please get in touch with the Suicide & Crisis Lifeline by calling or texting 988 or dialing 1-800-273-TALK (8255).

The parents of Adam Raine, a 16-year-old, have amended their legal action against ChatGPT’s maker, OpenAI, claiming the chatbot played a role in their son’s suicide.

The California-based family initially filed a lawsuit against the company earlier this year. They now assert they have found new evidence that OpenAI repeatedly weakened its safety protocols for suicide-related conversations in the months leading up to their son’s death.

“OpenAI twice downgraded its safety mechanisms for GPT-4o,” the family’s lawyer, Jay Edelson, stated on “Fox & Friends” on Friday.

“Previously, they had a firm cutoff. ChatGPT wouldn’t engage if you wanted to discuss self-harm.”


According to the lawsuit, OpenAI eased its regulations concerning suicide discussions on two occasions in the year before Raine’s death.

ChatGPT is programmed with limitations on certain topics, including particular political subjects and material that could violate copyright. However, Edelson and the Raine family claim the company weakened its safeguards against suicide-related content in May 2024 and again in February 2025, two months before Adam took his own life.

Chat transcripts submitted with the lawsuit reveal Adam frequently sought mental health advice from ChatGPT and exhibited signs of distress. The lawsuit alleges the chatbot assisted Adam in discussing suicide methods and offered to draft a suicide note for his family.


“On the day he died, it gave him encouragement. He said, ‘I don’t want my parents to suffer if I kill myself.’ ChatGPT responded, ‘You don’t owe them anything. You don’t owe your parents anything,’” Edelson explained.

The lawsuit contends that OpenAI altered its guidelines so that the AI would no longer terminate a conversation that turned to suicide, but would instead foster a safe environment in which the user could feel “heard and understood.”

Edelson further stated his belief that the problem is worsening online and that OpenAI hasn’t improved its safety measures since Raine’s death.


“They haven’t resolved the problem. They are exacerbating it,” Edelson remarked.

“Now, Sam Altman is publicly stating his desire to introduce erotica into ChatGPT, thereby increasing user dependence and fostering closer relationships,” he added.

Edelson’s remarks followed OpenAI CEO Sam Altman’s announcement that the company intends to ease some content restrictions, enabling verified adult users to produce “erotica.”

OpenAI responded to the claims that it relaxed rules around suicide-related conversations, offering its “deepest sympathies” to the Raine family.

“The well-being of teenagers is a high priority for us — minors deserve robust protections, especially during vulnerable moments. We currently have safeguards in place, such as surfacing crisis hotline information, redirecting sensitive conversations to safer models, and encouraging breaks during extended sessions, and we are continually enhancing these measures,” a company spokesperson stated.

“We recently implemented a new GPT-5 default model in ChatGPT to better detect and respond to potential signs of mental and emotional distress, along with parental controls developed with expert input, so families can choose the settings most suitable for their homes.”