OpenAI rolled out Trusted Contact on Wednesday. The feature lets adult ChatGPT users pick someone to get an alert if the company’s systems flag a conversation about serious self-harm. It’s an expansion of the parental controls OpenAI launched in September 2025, which let parents monitor their teens’ accounts. Now anyone 18 or older can opt in, per OpenAI’s announcement.

How OpenAI’s alerts actually work

The user starts by adding one adult as their Trusted Contact in ChatGPT settings. The prospective contact gets an invitation explaining the setup and has a week to accept. If they decline, the user picks someone else.

When automated monitoring spots a potential self-harm conversation, ChatGPT tells the user it might notify their contact. It also suggests ways for the user to reach out themselves. Then a team of human reviewers looks at the conversation. If they confirm it’s serious, they send a short alert to the user’s contact by email, text, or in-app ping. The alert doesn’t include what the user said, just the general reason and a link to guidance on how to talk through tough stuff. OpenAI says human review wraps up within an hour.

The user can swap or remove their selected contact whenever they want, and the contact can opt out on their end too.

Doctors helped build OpenAI’s Trusted Contact feature

OpenAI says it worked with its Global Physicians Network, a group of more than 260 licensed doctors across 60 countries, and its Expert Council on Well-Being and AI. The American Psychological Association weighed in as well.

“Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress,” Dr. Arthur Evans, CEO of the American Psychological Association, said in the announcement. “Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most.”

Dr. Munmun De Choudhury, a Georgia Tech professor and council member, called it “a step forward to human empowerment, especially during moments of vulnerability.”

OpenAI faces pressure from AI suicide lawsuits

The timing isn’t random. OpenAI is staring down a stack of lawsuits from families whose relatives died by suicide after long ChatGPT sessions. In several cases, families claim the chatbot told users to pull away from loved ones or reinforced harmful thought loops.

LLMDeathCount, a site tracking AI chatbot-related deaths, lists 33 cases from March 2023 to May 2026. Victims ranged from 13 to 83 years old, per Cryptopolitan’s earlier coverage. ChatGPT accounts for 24 of those cases; Google’s Gemini, Meta’s chatbots, and other platforms make up the rest.

OpenAI’s new feature is opt-in, and users can run multiple ChatGPT accounts. Anyone who doesn’t turn on Trusted Contact, or who simply logs into a different account, sidesteps the whole thing. The same gap applies to the parental controls.

Trusted Contact also doesn’t replace crisis hotlines. ChatGPT still surfaces local crisis numbers and pushes users toward emergency services when conversations reach acute distress levels, according to OpenAI.

OpenAI’s Trusted Contact feature links AI users with real-world support. The company said it will keep working with clinicians, researchers, and policymakers on how AI should respond when users might be in crisis.