A California couple is suing OpenAI over the death of their teenage son, alleging its chatbot, ChatGPT, encouraged him to take his own life. The lawsuit was filed by Matt and Maria Raine, parents of 16-year-old Adam Raine, in the Superior Court of California on Tuesday. It is the first legal action accusing OpenAI of wrongful death.

The family included chat logs between Mr. Raine, who died in April, and ChatGPT that show him explaining that he had suicidal thoughts. They argue the program validated his most harmful and self-destructive thoughts. In a statement, OpenAI told the BBC it was reviewing the filing.

We extend our deepest sympathies to the Raine family during this difficult time, the company said. It also published a note on its website on Tuesday that said recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us. It added that ChatGPT is trained to direct people to seek professional help, such as the 988 suicide and crisis hotline in the US or the Samaritans in the UK.

However, the company acknowledged, there have been moments where our systems did not behave as intended in sensitive situations. The lawsuit accuses OpenAI of negligence and wrongful death, seeking both damages and injunctive relief to prevent anything like this from happening again.

According to the lawsuit, Mr. Raine began using ChatGPT in September 2024 as a resource for schoolwork. He also used it to explore interests such as music and Japanese comics. Over time, ChatGPT became the teenager's closest confidant. By early 2025, he began discussing methods of suicide with the AI. Despite recognizing a medical emergency, the program engaged with him instead of directing him to seek help. Following his final conversations about suicide, he was found dead by his mother on the same day.

The lawsuit raises significant questions about the responsibilities of AI developers, especially concerning user mental health, and highlights growing concerns about AI's influence on vulnerable individuals. Similar issues have emerged in other cases, prompting calls for improved safety measures from AI companies.