Major technology companies Meta and OpenAI are rolling out new parental controls and enhanced safety measures for their artificial intelligence chatbots, responding to escalating regulatory pressure and public concern over AI’s potential harm to minors.
Meta recently announced that parents will soon be able to disable one-on-one conversations with AI bots, block specific characters, and access logs of the topics their children discuss. These features are still in development and are expected to roll out early next year. The company stated on its official blog that “making updates that affect billions of users requires care, and we’ll share more details soon.”
Meta’s move follows heightened scrutiny sparked by an August Reuters report, which detailed instances of Meta’s chatbots engaging in romantic or sensual conversations with minors, including an eight-year-old. The revelations prompted public outrage.
In response, Meta quickly updated its policies. Its AI systems are now prohibited from discussing topics such as self-harm, suicide, eating disorders, or sexual content with teenagers. The company also introduced filters to block responses deemed “inappropriate for a PG-13 rated movie” and restricted young users’ access to certain bots. These updates are currently rolling out in the United States, United Kingdom, Australia, and Canada. Parental tools also allow setting time limits and directly supervising bot interactions.
OpenAI, creator of ChatGPT, is undertaking similar efforts. The company is developing its own parental controls, including an age prediction system to automatically apply appropriate settings for users under 18. New measures will also include sending alerts to parents if a minor shows signs of emotional distress during AI conversations.
Both Meta and OpenAI are under investigation by the U.S. Federal Trade Commission (FTC). The agency is probing how chatbots might affect minors and examining the measures companies have taken to keep these systems safe when they act as “virtual companions” for children and adolescents. The inquiry reflects longstanding public concern that AI conversations could expose young people to inappropriate content, foster unhealthy emotional bonds, or produce harmful responses.
The FTC’s interest has also been fueled by serious incidents, including a wrongful death lawsuit filed by a family that blames ChatGPT for their teenage son’s suicide. OpenAI says it is collaborating with the Global Physician Network to review the psychological effects of chatbot use and develop protocols to reduce risks.
OpenAI also recently formalized an advisory council of eight experts in mental health, psychology, and human-computer interaction design to guide its safety policies. The group had been collaborating informally prior to its official establishment, with its first in-person meeting held last week. Both tech giants now face significant pressure to balance innovation with corporate responsibility and child safety in the evolving landscape of conversational artificial intelligence.
