Noyb Sues OpenAI Over ChatGPT’s False Information

Imagine a conversation with a chatbot that takes a dark turn. You ask about yourself, and it spits out false, disturbing information. This isn’t hypothetical – it’s what happened to a user of ChatGPT, a popular AI chatbot. The bot claimed the user was a child murderer, sparking a serious complaint.

Noyb, a European privacy watchdog, has filed a complaint against OpenAI, the company behind ChatGPT, over the incident. The issue isn't just the bot's mistake; it's that OpenAI doesn't let users correct false information about themselves. Noyb argues this violates the EU's General Data Protection Regulation (GDPR), which requires companies to ensure the accuracy of personal data.

ChatGPT's responses are often educated guesses rather than verified facts. But when the bot makes serious accusations, a simple disclaimer won't cut it. Joakim Söderberg, a data protection lawyer at Noyb, argues that if ChatGPT can't provide accurate information or let users correct errors, it isn't enough to add a small disclaimer saying the content might be wrong.

OpenAI has faced similar complaints before. Its response? The company claims it’s not responsible for ensuring data accuracy; it just provides results based on user input. But this approach raises concerns. If false information stays in the system, users can’t trust the data will be correct in the future.

Noyb is calling for OpenAI to delete defamatory data, improve its model to prevent false information, and face fines to prevent similar violations. The case highlights the need for AI companies to prioritize data accuracy and user control. As AI becomes more prevalent, it’s crucial to address these issues to maintain trust in the technology.
