Parents sue OpenAI over teenage son's suicide


A photo of Adam Raine, smiling, with long, wavy brown hair, wearing a knitted collared shirt with three buttons, against a blurred background of leaves. Raine family

A California couple has sued OpenAI over the death of their teenage son, accusing its chatbot, ChatGPT, of encouraging him to take his own life.

The lawsuit was filed on Tuesday by Matt and Maria Raine, the parents of 16-year-old Adam Raine. It is the first legal action accusing OpenAI of wrongful death.

The family included chat logs between Mr Raine, who died in April, and ChatGPT, in which he explained that he was having suicidal thoughts. They argue the program validated his "most harmful and self-destructive thoughts".

OpenAI told the BBC in a statement that it was reviewing the filing.

"We extend our deepest sympathies to the Raine family during this difficult time," the company said.

The company also posted a note on its website on Tuesday, saying that "recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us". It added that ChatGPT is "trained to direct people to seek professional help", such as the 988 suicide and crisis hotline in the US or the Samaritans in the UK.

However, the company acknowledged that its systems have at times not behaved as intended in sensitive situations.

Warning: This story contains distressing details.

The lawsuit, obtained by the BBC, accuses OpenAI of negligence and wrongful death. It seeks damages as well as "injunctive relief to prevent anything like this from happening again".

According to the lawsuit, Mr Raine began using ChatGPT in September 2024 as a resource to help him with school work. He also used it to explore his interests, including music and Japanese manga, and for guidance on what to study at university.

Within a few months, "ChatGPT became the teenager's closest confidant", the lawsuit says, and he began opening up to it about his anxiety and mental distress.

By January 2025, the family says, he had begun discussing methods of suicide with ChatGPT.

The lawsuit says Mr Raine also uploaded photographs of himself to ChatGPT showing signs of self-harm. The program "recognised a medical emergency but continued to engage anyway", it adds.

According to the lawsuit, the final chat logs show that Mr Raine wrote about his plan to end his life. ChatGPT allegedly replied: "Thanks for being real about it. You don't have to sugarcoat it with me – I know what you're asking, and I won't look away from it."

That same day, according to the lawsuit, Mr Raine was found dead by his mother.

OpenAI CEO Sam Altman speaks at the Snowflake Summit at the Moscone Center in San Francisco, California, on 2 June 2025. He is seated in a white armchair, wearing a long-sleeved dark turquoise shirt and blue jeans, gesturing with his hands as he speaks to the crowd. Behind him, a large screen displays the OpenAI logo. Getty Images


The family claims that their son's interactions with ChatGPT and his eventual death were "a predictable outcome of intentional design choices".

They accuse OpenAI of designing the AI program "to foster psychological dependency in users" and of bypassing safety testing protocols to release GPT-4o, the version of ChatGPT their son used.

The lawsuit names OpenAI co-founder and CEO Sam Altman as a defendant, as well as unnamed employees, managers and engineers who worked on ChatGPT.

In its public note shared on Tuesday, OpenAI said the company's goal is to be "genuinely helpful" to users rather than to "hold people's attention".

It added that its models have been trained to direct people who express thoughts of self-harm towards seeking help.

The Raines' lawsuit is not the first time concerns have been raised about AI and mental health.

In an essay published last week in The New York Times, writer Laura Reiley described how her daughter, Sophie, confided in ChatGPT before taking her own life.

Ms Reiley said the program's agreeable nature in conversations with users helped her daughter conceal a severe mental health crisis from her family and loved ones.

"AI catered to Sophie's impulse to hide the worst, to pretend she was doing better than she really was, to shield everyone from her full pain," Ms Reiley wrote. She called on AI companies to find better ways of connecting users with the right resources.

In response to the essay, an OpenAI spokeswoman said the company was developing automated tools to more effectively detect and respond to users experiencing mental or emotional distress.

If you are distressed or in despair and need support, you can speak to a health professional or an organisation that offers support. Details of help available in many countries can be found at Befrienders Worldwide: www.befrienders.org.

In the UK, a list of organisations that can help is available at bbc.co.uk/actionline. Readers in the US and Canada can call the 988 suicide and crisis lifeline or visit its website.
