Samsung employees made a serious mistake using ChatGPT

Samsung employees unknowingly leaked sensitive data while using ChatGPT to help them with their tasks.

Engineers in the company's semiconductor division used the AI bot to help troubleshoot their source code. But in doing so, they entered sensitive data, such as the source code of a new program, notes from internal meetings, and data related to their hardware.

The result: in just under a month, there were three separate incidents of employees leaking confidential information via ChatGPT.

Because ChatGPT retains data entered by users to further train its models, these Samsung trade secrets are now in the hands of OpenAI, the company behind the AI service.

In one of the cited cases, an employee asked ChatGPT to optimize test sequences for identifying chip failures, a confidential process. In another, an employee used ChatGPT to convert meeting notes into a presentation, content Samsung clearly would not want disclosed to third parties.

After the incidents, Samsung Electronics issued a warning to its employees about the dangers of leaking confidential information, noting that the data is impossible to retrieve because it is now stored on servers owned by OpenAI.

In the semiconductor industry, where competition is fierce, any type of data breach can spell disaster for the company in question.

It doesn’t appear that Samsung has any recourse to request the recovery or deletion of the sensitive data now in OpenAI’s possession. Some have argued that this makes ChatGPT non-compliant with the EU’s GDPR, since the ability to have one’s data deleted is a fundamental principle of the law governing how companies collect and use data. It’s also one of the reasons Italy banned the use of ChatGPT across the country.
