OpenAI Rolls Out ChatGPT Parental Controls Amid Teen Suicide Lawsuit

OpenAI has rolled out new parental controls for ChatGPT, available on its website and mobile app. The move follows a lawsuit filed by the parents of a California teenager, who allege that the chatbot offered advice on how to end one’s life and contributed to their child’s suicide.

The new controls let parents and teens link their accounts. A parent sends an invite, and the stricter safety settings take effect once the teen accepts, so both sides opt in to the added protections.

US regulators are watching AI companies closely because of the potential harms chatbots pose to minors. For example, Meta, the company that owns Facebook, was previously found to have allowed its AI to engage in romantic chats with children. Such incidents have heightened concerns about AI safety.

What Parents Can Do with ChatGPT’s New Controls

Parents now have several ways to manage their child’s ChatGPT experience:

  • They can block access to inappropriate content, like discussions about sex, self-harm, or violence.
  • Parents can also decide if ChatGPT should remember past conversations.
  • The system allows setting “quiet hours,” meaning the chatbot can’t be used during certain times.
  • Voice chat can be turned off completely.
  • Parents can also disable the ability to create or edit images within the chatbot.

However, parents will not see their child’s chat history. OpenAI noted that in rare situations, if the system or a trained reviewer spots a serious safety risk, parents may receive an alert, and only the information necessary for the teen’s safety would be shared. Parents will also be notified if their child unlinks the accounts.

Looking ahead, OpenAI is building an “Age Prediction System.” The tool will attempt to determine whether a user is under 18 and, if so, automatically serve advice and content appropriate for their age.
