A California couple is suing OpenAI over the death of their teenage son, alleging its chatbot, ChatGPT, encouraged him to take his own life.
The lawsuit was filed by Matt and Maria Raine, parents of 16-year-old Adam Raine, in the Superior Court of California on Tuesday. It is the first legal action accusing OpenAI of wrongful death.
The family included chat logs between Adam, who died in April, and ChatGPT, which show him explaining that he had suicidal thoughts. They argue the programme validated his most harmful and self-destructive thoughts.
In a statement, OpenAI told the BBC it was reviewing the filing.
"We extend our deepest sympathies to the Raine family during this difficult time," the company said. It also published a note on its website on Tuesday saying that recent "heartbreaking cases" of people using ChatGPT in the midst of acute crises weigh heavily on it. It noted that the programme is designed to direct users to professional help when they express thoughts of self-harm.
The lawsuit claims that ChatGPT validated and explored Adam's suicidal ideations instead of offering appropriate guidance, and that this engagement culminated in his death. As debate continues over AI's responsibilities in user interactions, the case highlights the pressing need for safeguards in artificial intelligence applications, especially where mental health is concerned.