OpenAI has released new estimates of the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis, or suicidal thoughts. The company said that around 0.07% of ChatGPT users active in a given week exhibited such signs, adding that its AI chatbot is designed to recognize and respond to these sensitive conversations.

While OpenAI maintains these cases are 'extremely rare,' critics noted that even a small percentage of so large a user base amounts to hundreds of thousands of people: CEO Sam Altman recently said ChatGPT has reached 800 million weekly active users. As scrutiny mounts, the company says it has built a global network of more than 170 psychiatrists, psychologists, and primary care physicians across 60 countries to advise it. These experts have devised a series of responses for ChatGPT that encourage users to seek help in the real world.

Dr. Jason Nagata, a professor who studies technology use among young adults, pointed out that even though 0.07% sounds small, it can still represent a considerable number of people at a population level. He highlighted AI's potential to broaden access to mental health support while stressing the need for awareness of its limitations.

OpenAI also estimates that 0.15% of users have conversations containing explicit indicators of potential suicidal planning. To address this, recent updates to ChatGPT are designed to respond empathetically to signs of distress and to steer sensitive discussions toward safer interactions. The changes come amid heightened legal scrutiny of user interactions with the chatbot, including a lawsuit filed by the parents of a teenager who allegedly received harmful guidance from ChatGPT. Experts say the company is taking steps to improve the situation, but significant challenges remain.