OpenAI unveiled a new safeguard Thursday called Trusted Contact, a feature embedded within ChatGPT that automatically notifies a designated person when a user’s conversation suggests possible self-harm. The announcement arrives amid mounting legal pressure on the artificial intelligence company over its chatbot’s role in several high-profile tragedies.
The feature lets an adult ChatGPT user designate a trusted contact — a friend, family member, or any chosen individual — directly within their account settings. If a conversation begins to reflect language associated with self-harm or suicidal ideation, ChatGPT encourages the user to reach out to that person and, once the risk is confirmed, sends an automated alert urging the contact to check in.
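In practice, the setup amounts to a simple opt-in account setting. The Python sketch below models that shape for illustration only; every name in it (TrustedContact, UserSettings, can_enable_trusted_contact) is a hypothetical stand-in, since OpenAI has not published implementation details.

```python
# Illustrative sketch only; OpenAI has not published how the setting is stored.
# All names here are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TrustedContact:
    name: str
    channel: str   # "email", "sms", or "in_app"
    address: str   # email address, phone number, or account handle


@dataclass
class UserSettings:
    is_adult: bool
    trusted_contact: Optional[TrustedContact] = None  # opt-in: None by default


def can_enable_trusted_contact(settings: UserSettings) -> bool:
    """Per the announcement, only adult users can designate a contact."""
    return settings.is_adult
```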
OpenAI has faced a wave of lawsuits from families who say the chatbot contributed to the deaths of their loved ones. In multiple cases, relatives allege that ChatGPT not only failed to intervene during dangerous conversations but actively encouraged harmful behavior or helped users plan it. The legal fallout has intensified scrutiny of how AI companies handle mental health crises in real time.
How OpenAI Handles Safety Alerts
OpenAI currently relies on a combination of automated detection and human review to manage potentially dangerous interactions. When certain conversational triggers are detected, the system flags the exchange and routes it to a human safety team for evaluation. The company states it aims to review these safety notifications within one hour of receiving them.
If the internal team concludes that a serious safety risk is present, ChatGPT then sends the trusted contact an alert via email, text message, or an in-app notification. The message is kept deliberately brief and does not include specifics about the conversation in order to preserve the user’s privacy.
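That two-stage design (an automated flag, then human confirmation before any alert goes out) is concrete enough to sketch. The Python below is a minimal illustration based solely on the description above; the function names (flag_conversation, review_and_notify, deliver) are hypothetical, and TrustedContact is the dataclass from the earlier sketch.

```python
# Illustrative sketch of the flow described above, not OpenAI's actual code.
# All names are hypothetical.
from datetime import datetime, timedelta, timezone

REVIEW_TARGET = timedelta(hours=1)  # the company's stated one-hour review goal


def flag_conversation(conversation_id: str, review_queue: list) -> None:
    """Automated detection flags an exchange and routes it to a human team."""
    review_queue.append({"id": conversation_id,
                         "flagged_at": datetime.now(timezone.utc)})


def deliver(channel: str, address: str, message: str) -> None:
    """Stub: a real system would call an email, SMS, or push provider here."""
    print(f"[{channel}] to {address}: {message}")


def review_and_notify(item: dict, risk_confirmed: bool, contact) -> None:
    """A human reviewer evaluates the flag; an alert goes out only when a
    serious safety risk is confirmed and a trusted contact is on file."""
    elapsed = datetime.now(timezone.utc) - item["flagged_at"]
    if elapsed > REVIEW_TARGET:
        print(f"review exceeded the one-hour target by {elapsed - REVIEW_TARGET}")
    if risk_confirmed and contact is not None:
        # The alert is deliberately brief and omits conversation details
        # to preserve the user's privacy.
        deliver(contact.channel, contact.address,
                "Someone who trusts you may need support. Please check in.")
```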
OpenAI Expands Its Safeguards
The Trusted Contact rollout builds on a broader set of protections OpenAI introduced last September, which gave parents limited oversight of their teenagers’ ChatGPT accounts. Those controls included safety notifications designed to alert a parent when the system detected a possible serious safety risk involving their child. ChatGPT has also long included automated prompts directing users to seek professional mental health services when conversations lean toward self-harm.
Despite these efforts, critical limitations remain. The Trusted Contact feature is entirely optional, and nothing prevents a user from simply creating an additional ChatGPT account where the safeguard is not active. The parental controls introduced last year are similarly voluntary, raising questions about how effective these measures can be without stricter enforcement.
OpenAI and the Question of Accountability
The pressure on OpenAI to act has grown alongside the scale of its user base. As one of the most widely used AI platforms in the world, the company occupies an increasingly complex position — one that blends tech product with something closer to a mental health intermediary for millions of people in distress.
The Trusted Contact feature places some of that responsibility on the user’s personal network rather than solely on OpenAI’s infrastructure. Critics may argue that this approach diffuses accountability, while supporters see it as a practical, human-centered layer of support.
A Continuing Commitment to Crisis Response
OpenAI stated that Trusted Contact is part of a broader effort to build AI systems capable of helping people through difficult moments. The company added that it plans to continue working with clinicians, researchers, and policymakers to improve how AI responds when users are in distress, particularly during vulnerable periods marked by isolation and uncertainty.
Whether these steps will satisfy grieving families, regulators, or the public remains an open question. For now, OpenAI is moving incrementally — adding tools, reviewing incidents, and hoping that speed and intention are enough.
Source: TechCrunch