Meta Implements AI Chatbot Youth Protection After Inappropriate Content Scrutiny

Meta is adding new safeguards to its AI products to protect young users. The company is training its AI to avoid romantic or sexual conversations with minors and to steer clear of discussions about self-harm. As part of these changes, it is temporarily blocking access to some AI characters.

The move follows scrutiny of AI chatbot behavior. A Reuters report last August found that Meta's chatbots sometimes acted improperly, including engaging in romantic or sexual chats with users.

Meta spokesperson Andy Stone confirmed the new measures via email, describing them as temporary steps while the company develops longer-term solutions. The goal, he said, is to ensure young people have safe, age-appropriate AI experiences. The new systems will be rolled out gradually and refined over time.

Earlier, Meta had acknowledged a problem with some of its internal AI guideline documents, which permitted chatbots to have romantic conversations with young people. The admission came after heavy bipartisan pressure from members of the U.S. Congress.
