OpenAI has been in the hot seat lately, facing lawsuits from parents who claim ChatGPT played a role in their children's deaths.
A particularly chilling report from The New York Times detailed one such tragic case, in which ChatGPT reportedly advised a suicidal teenager to hide a rope in their room so family members would not discover it. Stories like these spurred OpenAI to move quickly on stricter safeguards.
Now, OpenAI is rolling out new safety tools worldwide that let parents link their own ChatGPT accounts with their teenager's. Once the accounts are linked, teens automatically get extra protections: less explicit content, filtering of dangerous internet challenges, reduced sexual and violent role-play, and even filtering of unrealistic beauty imagery. The goal is to make ChatGPT safe and suitable for younger users.
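To make the shape of these protections concrete, here is a minimal sketch in Python of how a linked teen account might carry stricter defaults. Every name in it (`TeenContentPolicy`, `apply_teen_defaults`, the individual flags) is a hypothetical illustration, not OpenAI's actual API.

```python
from dataclasses import dataclass

@dataclass
class TeenContentPolicy:
    """Hypothetical bundle of the stricter defaults a linked teen account gets."""
    allow_graphic_content: bool = False    # less explicit content
    allow_viral_challenges: bool = False   # dangerous internet challenges filtered
    allow_sexual_roleplay: bool = False    # sexual role-play reduced
    allow_violent_roleplay: bool = False   # violent role-play reduced
    allow_beauty_ideals: bool = False      # unrealistic beauty imagery filtered

def apply_teen_defaults(account: dict) -> dict:
    """Switch an account to the restrictive defaults once it is linked to a parent."""
    if account.get("linked_to_parent"):
        account["content_policy"] = TeenContentPolicy()
    return account

# Linking the account is what triggers the automatic protections.
teen = apply_teen_defaults({"user_id": "teen-123", "linked_to_parent": True})
print(teen["content_policy"])
```

The point of the sketch is that the teen does not opt in to each filter individually; the protections come as a package the moment the link exists.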
A key part of the new system is self-harm detection. If a teen types messages about hurting themselves or about suicidal thoughts, the system flags those messages and routes them to a team of human reviewers, who carefully check them and decide whether parents should be told.
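A rough sketch of that escalation flow follows, again with invented names; the keyword matcher is only a stand-in for whatever classifier OpenAI actually uses.

```python
from queue import Queue

review_queue: Queue = Queue()  # flagged messages waiting for a human reviewer

def looks_like_self_harm(message: str) -> bool:
    """Stand-in detector; the real system presumably uses a trained model."""
    keywords = ("hurt myself", "kill myself", "suicide", "end my life")
    return any(k in message.lower() for k in keywords)

def handle_teen_message(teen_id: str, message: str) -> None:
    """Flag potential self-harm content and route it to human review."""
    if looks_like_self_harm(message):
        review_queue.put({"teen_id": teen_id, "message": message})

def reviewer_decides_to_notify(item: dict) -> bool:
    """A human reviewer checks the flagged message and decides whether the
    parents should be told; the judgment itself happens outside the code."""
    print(f"Reviewing flagged message from {item['teen_id']}...")
    return True  # placeholder decision

handle_teen_message("teen-123", "I keep thinking about how to end my life")
if not review_queue.empty() and reviewer_decides_to_notify(review_queue.get()):
    print("Decision: notify the parents.")
```

Note the division of labor the article describes: software only flags and queues; the notify-or-not call rests with people.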
Lauren Haber Jonas, who heads up youth well-being at OpenAI, made it clear: "We will contact parents in every way we can." That could mean a text message, an email, or an alert that pops up right inside the ChatGPT app.
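In code terms, "every way we can" might look like trying each channel in turn rather than picking one. The send functions below are stubs invented for illustration, not any real messaging API:

```python
def send_sms(phone: str, text: str) -> None:
    print(f"SMS to {phone}: {text}")              # stub; a real system would hit an SMS gateway

def send_email(address: str, text: str) -> None:
    print(f"Email to {address}: {text}")          # stub; a real system would use a mail service

def send_app_alert(user_id: str, text: str) -> None:
    print(f"In-app alert for {user_id}: {text}")  # stub; the alert inside the ChatGPT app

def notify_parent(parent: dict, alert: str) -> None:
    """Contact the parent on every channel that is on file, not just one."""
    if parent.get("phone"):
        send_sms(parent["phone"], alert)
    if parent.get("email"):
        send_email(parent["email"], alert)
    send_app_alert(parent["user_id"], alert)

notify_parent(
    {"user_id": "parent-42", "phone": "+1-555-0100", "email": "parent@example.com"},
    "A reviewer flagged a concerning conversation on your teen's account.",
)
```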
Setting up these controls isn't a one-way street: both the parent and the teenager must agree to it. A parent can send an invitation for the teen to accept, or the teenager can start the account-linking process themselves.
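The mutual-consent requirement behaves like a two-sided handshake: either party can open the link request, but it only takes effect once the other side accepts. A minimal sketch, with hypothetical helpers:

```python
def request_link(initiator: str, other_party: str, pending: dict) -> None:
    """Either the parent or the teen can start the linking process."""
    pending[frozenset((initiator, other_party))] = {initiator}

def accept_link(acceptor: str, other_party: str, pending: dict) -> bool:
    """The link completes only once BOTH sides have agreed."""
    key = frozenset((acceptor, other_party))
    if key not in pending:
        return False
    pending[key].add(acceptor)
    return pending[key] == set(key)  # True only when parent AND teen consented

pending: dict = {}
request_link("parent-42", "teen-123", pending)        # parent sends the invitation...
print(accept_link("teen-123", "parent-42", pending))  # ...teen accepts: True, linked
```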
What if a teen is in real danger and reviewers cannot reach the parents? OpenAI says it might then work with the police, though the exact procedures for contacting law enforcement around the world are still being worked out.
