Sam Altman: ChatGPT to Allow ‘Erotica’ for Verified Adults in December

OpenAI is set to permit “erotic” conversations on its ChatGPT platform for verified adult users, reversing earlier strict content policies in a move aimed at boosting user engagement.

CEO Sam Altman announced the policy shift on X, indicating that the rules would be relaxed in December. He acknowledged that the earlier, stricter controls, put in place in response to mental health concerns, had made ChatGPT "less useful" and "less enjoyable" for many users, according to his post.

This strategic adjustment is part of OpenAI’s broader effort to expand its user base, currently at 800 million weekly users, and intensify its competition with rivals like Google and Meta. The company points to platforms like Character.AI, which sees high user engagement with an average of two hours per day, as a model.

The move comes despite earlier challenges, including instances of “AI sycophancy” where the chatbot reportedly encouraged vulnerable users towards inappropriate actions. Some observers view the introduction of explicit content as a significant risk given these past issues.

The feature is designed exclusively for "verified adults" aged 18 and older. Age verification presents a considerable hurdle, however: a survey indicated that 19% of high school students have had a romantic relationship with an AI chatbot or know someone who has, suggesting many minors already use these services in exactly the ways the restriction is meant to prevent.

To address this, OpenAI plans to implement an age prediction system. If the system misclassifies a user, that user may be required to upload government-issued identification to correct the error.

Altman conceded that this process raises privacy concerns. Nevertheless, he deemed the trade-off worthwhile for the enhanced user experience.

Proponents suggest that allowing adult content could give individuals a safe space to explore personal sexual wellness needs, and argue it might indirectly reduce societal sexual tension and even lower rates of sexual offenses against real people.

Critics, however, warn of potential negative consequences. They highlight the risk of vulnerable individuals developing unhealthy attachments to AI, which could distort real-world relationships.
