OpenAI announced today that it’s rolling out a new mental health-focused safety feature for adult ChatGPT users.
Starting today, ChatGPT users can add what the company calls a “trusted contact” who may be notified if the AI’s automated systems and trained reviewers determine that the user has engaged in discussions about self-harm.
The new feature arrives amid growing scrutiny over the impact AI and other digital platforms can have on mental health. Last year, OpenAI disclosed that 0.07% of its weekly users displayed signs of “mental health emergencies related to psychosis or mania,” while 0.15% expressed risk of “self-harm or suicide,” and another 0.15% showed signs of “emotional reliance on AI.” Given the company’s claim that roughly 10% of the world’s population, or about 800 million people, uses ChatGPT weekly, those three groups combined could amount to nearly three million people.
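As a back-of-envelope check of that figure (a hypothetical calculation assuming a world population of about eight billion and taking OpenAI’s percentages at face value, even though the categories may overlap):

```python
# Rough check of the "nearly three million" figure. Assumptions:
# world population ~8 billion; OpenAI's reported percentages taken
# at face value (in practice the three categories may overlap).
weekly_users = 8_000_000_000 * 0.10          # ~800 million weekly users
psychosis_or_mania = weekly_users * 0.0007   # ~560,000
self_harm_risk     = weekly_users * 0.0015   # ~1,200,000
emotional_reliance = weekly_users * 0.0015   # ~1,200,000
total = psychosis_or_mania + self_harm_risk + emotional_reliance
print(f"{total:,.0f}")                       # 2,960,000 -- nearly three million
```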
The trusted contact feature expands on ChatGPT’s existing parental safety notifications, which alert parents when a linked teen account shows signs of distress. Instagram introduced similar parental alerts earlier this year.
Now, OpenAI is offering these alerts to its adult users. The company said the feature was developed with guidance from mental health and suicide prevention clinicians, researchers, and organizations.
“Trusted Contact is designed to encourage connection with someone the user already trusts,” the company said in its announcement. “It does not replace professional care or crisis services, and is one of several layers of safeguards to support people in distress.”
OpenAI added that ChatGPT will still encourage users to contact crisis hotlines or emergency services when necessary.
Any user aged 18 or older can enable the feature through ChatGPT’s settings. From there, they can nominate another adult to serve as their trusted contact by submitting details such as the contact’s phone number and email address.
The nominated contact will then receive an invitation explaining the feature and will have one week to accept. If they decline, the user can nominate someone else instead.
Once the feature is active, OpenAI’s automated monitoring systems can flag when a user may be discussing self-harm in a way that suggests a serious safety concern. The system will then warn the user that their trusted contact may be alerted and encourage them to reach out to that contact directly, even offering some recommended conversation starters.
The company said a small team of specially trained reviewers will then assess the situation and determine whether notifying the trusted contact is appropriate.
If OpenAI decides to send an alert, the trusted contact could receive it through email, text message, or an in-app notification. The alert will explain only in general terms that self-harm came up and encourage the trusted contact to check in. It will also include guidance on how to navigate those conversations.
OpenAI noted that the notifications will not include specific details or chat transcripts to protect user privacy.
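To make the sequence concrete, here is a minimal sketch of the review-gated flow described above. It is purely illustrative: OpenAI has not published implementation details, and every name and function here is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical sketch of the flow reported above: automated flag ->
# heads-up to the user -> human review -> privacy-preserving alert.
# None of these names come from OpenAI.

class ReviewDecision(Enum):
    NOTIFY = auto()
    DO_NOT_NOTIFY = auto()

@dataclass
class TrustedContact:
    email: str
    phone: str
    accepted_invite: bool  # the invitee had one week to accept

def notify_user_of_possible_alert(user_id: str) -> None:
    # Warn the user their trusted contact may be alerted, and offer
    # recommended conversation starters and crisis resources.
    print(f"[{user_id}] Your trusted contact may be notified. "
          "Consider reaching out to them directly.")

def human_review(user_id: str) -> ReviewDecision:
    # Stand-in for the small team of trained reviewers who decide
    # whether notifying the trusted contact is appropriate.
    return ReviewDecision.NOTIFY

def send_alert(contact: TrustedContact) -> None:
    # The alert (email, text, or in-app) states only a general reason
    # and includes check-in guidance -- no details, no transcripts.
    print(f"Alert sent to {contact.email}: someone who trusts you "
          "may need support. Here is guidance on checking in.")

def handle_self_harm_flag(user_id: str, contact: TrustedContact) -> None:
    notify_user_of_possible_alert(user_id)
    if human_review(user_id) is ReviewDecision.NOTIFY and contact.accepted_invite:
        send_alert(contact)

handle_self_harm_flag("user-123", TrustedContact("friend@example.com", "+15550100", True))
```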
