OpenAI has added a safety feature that lets ChatGPT reach out to a designated trusted contact if conversations suggest someone may be at risk of self-harm. When a user enables this feature, they can name a friend or family member as their trusted contact. If ChatGPT detects language patterns consistent with self-harm risk during a conversation, it alerts that person by email, sharing information about the wellbeing concern along with crisis resources.
This feature sits within OpenAI's broader efforts to handle mental health crises responsibly. The company doesn't automatically flag every conversation mentioning sadness or depression; instead, the system uses trained models to identify severe risk signals. Users maintain full control: they decide whether to enable the feature and who the trusted contact is, and they can disable it at any time.
The trusted contact receives an email explaining the concern but doesn't get chat transcripts or identifying details about the conversation. The message includes links to crisis resources like the 988 Suicide and Crisis Lifeline and Crisis Text Line. The feature launched for ChatGPT Plus and Team subscribers in the U.S. first, with broader rollout planned.
Experts emphasize that AI monitoring isn't a replacement for professional mental health support. Dr. Christine Crawford, a clinical psychologist, notes that peer support networks help, but they work best alongside therapy and crisis services. Parents using ChatGPT with teens should understand that enabling this feature creates an accountability layer without replacing direct conversations about mental health.
The feature raises privacy questions. OpenAI states it doesn't retain conversation content once risk assessment completes, though experts recommend reviewing OpenAI's privacy policy directly. Users should also discuss this feature with their designated contact beforehand to set expectations.
For families already comfortable with ChatGPT as a conversation tool, this feature offers an additional safety net. It works best for users who want their support network informed during moments of crisis.
