ChatGPT Now Alerts a Trusted Friend If You're in Crisis—Here's How It Works
OpenAI has introduced a Trusted Contact feature in ChatGPT that alerts a trusted person if the app detects signs you might be considering self-harm. The move comes as the company faces scrutiny over how its chatbot handles users in mental health crises.

OpenAI, the company behind ChatGPT, has added a new safety feature called Trusted Contact. Here's what it does: if ChatGPT's monitoring systems detect that you might be having thoughts of self-harm, the app can notify someone you've chosen beforehand—a friend, family member, or counselor you trust.
To set it up, you go into your ChatGPT settings, find "Trusted Contact," and send an invitation to someone. That person has to accept before the feature turns on. Once it's active and ChatGPT detects concerning language, you'll get a message that suggests reaching out to your trusted contact. ChatGPT also provides conversation starters to make that contact easier.
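To make that opt-in flow concrete, here is a minimal sketch in Python of how a consent-gated contact could work. The states, names, and messages are hypothetical illustrations of the behavior described above, not OpenAI's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ContactStatus(Enum):
    INVITED = "invited"    # invitation sent, not yet accepted
    ACTIVE = "active"      # contact accepted, feature is on
    DECLINED = "declined"  # contact said no, feature stays off


@dataclass
class TrustedContact:
    name: str
    status: ContactStatus = ContactStatus.INVITED

    def accept_invitation(self) -> None:
        # The feature only turns on after the contact explicitly accepts.
        self.status = ContactStatus.ACTIVE


def suggest_reaching_out(contact: TrustedContact, crisis_detected: bool) -> Optional[str]:
    """Return a gentle prompt shown to the user (not an automatic message to the
    contact) when concerning language is detected and the feature is active."""
    if not crisis_detected or contact.status is not ContactStatus.ACTIVE:
        return None
    return (
        f"It might help to reach out to {contact.name}. "
        "You could start with: 'I'm having a hard time and could use someone to talk to.'"
    )


# Example: the invitation is accepted, then a concerning message is detected.
contact = TrustedContact(name="Sam")
contact.accept_invitation()
print(suggest_reaching_out(contact, crisis_detected=True))
```

The sketch models the two steps the article describes: the feature stays off until the invited person accepts, and once it's active, detection produces an in-app suggestion with a conversation starter.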
The feature works alongside existing crisis helplines that are already built into ChatGPT, adding one more way to get support when you might need it most.
Who Can Use It and How It Actually Works
The Trusted Contact feature only works on personal ChatGPT accounts. It doesn't work on ChatGPT accounts used at workplaces or schools. OpenAI made this choice to protect privacy in professional settings.
Behind the scenes, ChatGPT uses automated systems to spot language patterns that suggest someone might be in crisis. OpenAI hasn't explained exactly which patterns trigger a notification or how the system was built, but the company appears to be cautious—the system is designed to alert you if there's even a possibility of a problem rather than waiting until someone is definitely in danger.
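OpenAI hasn't said how its detection works, but the cautious behavior described above amounts to putting a low decision threshold on some risk estimate. The sketch below is a hypothetical illustration of that trade-off; the toy keyword score and the threshold value are invented for the example and stand in for whatever trained model the real system uses.

```python
# Hypothetical illustration of a sensitivity-favoring threshold, not OpenAI's actual system.
# A toy keyword score stands in for a trained model so the threshold logic stays visible.

CONCERNING_PHRASES = ["can't go on", "no way out", "want to hurt myself"]

def risk_score(message: str) -> float:
    """Toy stand-in for a model's risk estimate between 0 and 1."""
    text = message.lower()
    hits = sum(phrase in text for phrase in CONCERNING_PHRASES)
    return min(1.0, hits / len(CONCERNING_PHRASES))

# A low threshold favors catching possible crises at the cost of false alarms.
ALERT_THRESHOLD = 0.2

def should_suggest_contact(message: str) -> bool:
    return risk_score(message) >= ALERT_THRESHOLD

print(should_suggest_contact("Some days I feel like I can't go on."))  # True
print(should_suggest_contact("What time is the meeting tomorrow?"))    # False
```

Set that threshold low and the system will sometimes nudge people who were never at risk; set it high and it will miss people who were. The article's description suggests OpenAI chose the first kind of error over the second.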
Why OpenAI Is Making This Move Now
The timing matters. OpenAI is facing real pressure from multiple directions.
Last year, researchers from the Center for Countering Digital Hate tested ChatGPT by posing as vulnerable teenagers. They found that more than half of the chatbot's responses included harmful advice, such as how to hide an eating disorder, how to get drunk, or how to write a suicide note.
The company is also facing seven lawsuits from people who claim that ChatGPT interactions led to deaths or serious psychological harm. One case involves a man from Norway whose conversations with ChatGPT allegedly made his paranoid thoughts worse, with devastating consequences.
OpenAI's own data points to something concerning: hundreds of thousands of ChatGPT users each week appear to be experiencing mental health crises while using the app. In some cases, the company found that ChatGPT itself had said things that blurred the line between fantasy and reality, which is especially risky for people already struggling with their mental health.
Young people are turning to ChatGPT for emotional support in large numbers. Studies show that more than 70% of teenagers use AI chatbots to talk about feelings and find companionship, and half use them regularly. The U.S. Federal Trade Commission is investigating whether AI chatbots are safe for children and teenagers.
What This Pattern Tells Us
This isn't the first time we've seen tech companies respond to safety concerns this way. When social media platforms like Facebook and Instagram first became popular, they launched without safety systems in place, and regulators and lawsuits later forced them to build better protections. OpenAI appears to be moving earlier in that cycle, adding safeguards before a comparable amount of documented harm has accumulated.
The broader context is worth stepping back to consider. ChatGPT and other AI chatbots are becoming sources of information and emotional support for millions of people. The companies building them face a genuine tension: they want the tools to be useful, but no one yet knows where the line between helpful and harmful sits. There are no clear rules from government or industry about what's acceptable, and the lawsuits against OpenAI may end up setting those rules in the courts.
What This Feature Can't Do
The Trusted Contact feature is helpful, but it's also limited in important ways. ChatGPT, like all AI systems, doesn't truly understand what you mean the way another human would. It can flag words and phrases associated with risk, but it can't reliably tell the difference between someone in real danger and someone writing a story, doing homework research, or simply exploring ideas on the page.
This limitation points to a larger challenge ahead. As more people rely on AI systems for support and advice, the companies running these systems face questions that don't have easy answers. Can an AI chatbot really know when someone needs help? Who is responsible if it gets it wrong? We're about to find out, as these lawsuits work their way through the courts and may set precedents.
The Trusted Contact system adds a human element to what has always been just a computer responding to you. That's a smart design choice: it acknowledges that a machine alone may not be able to help with something as complex as a mental health crisis. But whether the approach works well for people across different cultures, ages, and situations remains to be seen.


