OpenAI has revealed that a small fraction of ChatGPT users show signs of serious mental health distress, including mania, psychosis, or suicidal thoughts.
According to new estimates released by the company, about 0.07 per cent of active ChatGPT users in a given week displayed potential indicators of such crises.
While OpenAI described these cases as "extremely rare," the figure could amount to hundreds of thousands of people, given that ChatGPT now records around 800 million weekly active users, according to Chief Executive Sam Altman.
The company said it has developed a safety system that allows ChatGPT to detect and respond to these sensitive situations.
OpenAI added that it has built a global advisory network of more than 170 psychiatrists, psychologists, and primary care doctors in 60 countries to guide the chatbot’s responses and encourage users to seek professional help.
In addition, the company reported that 0.15 per cent of users in a given week had conversations containing "explicit signs of potential suicidal planning or intent."
Recent updates to ChatGPT are designed to help the system "respond safely and empathetically" to signs of delusion or mania and to flag "indirect signals of potential self-harm or suicide risk."
The model can also reroute sensitive conversations to a "safer version" of ChatGPT by opening them in a new chat window.
Despite OpenAI’s reassurances, some experts say the figures are worrying given the chatbot’s massive user base.
"Even though 0.07 per cent sounds small, at a population level with hundreds of millions of users, that’s still a large number of people.
"AI can help expand access to mental health support, but we must remain aware of its limits," Dr Jason Nagata, a professor at the University of California, San Francisco, who studies technology and mental health said.
Prof Robin Feldman, director of the AI Law & Innovation Institute at the University of California Law, described the issue as one of "a powerful illusion."
"Chatbots can create the illusion of reality. OpenAI deserves credit for releasing this data and trying to improve its safeguards, but someone in crisis might not be able to heed digital warnings," she added.