OpenAI recently demonstrated its tough stance on platform misuse, announcing that it had banned several ChatGPT accounts suspected of ties to organizations within the Chinese government.
The users behind those accounts reportedly asked ChatGPT for tools and ideas for social media surveillance, activity that violated OpenAI’s national security policies. The company detailed these cases in its latest public threat report.
Other Chinese-speaking accounts were also banned after being caught using ChatGPT to aid cyberattacks, including phishing scams and malware development. Some of these accounts even inquired about automating systems to enhance DeepSeek.
The crackdown wasn’t limited to China-linked groups: OpenAI also removed accounts believed to be connected to Russian-speaking criminal organizations, which were found using ChatGPT to help create various types of malware.
OpenAI launched its threat reporting program in February of last year and has since identified and disrupted more than 40 threat networks, underscoring the company’s commitment to keeping its AI tools safe and secure.
