Meta is using public content from Facebook, Instagram, and Threads to train its AI models, including posts, comments, and captions. The training data will not include private messages or content from users under 18. The European Data Protection Board (EDPB), the panel of EU privacy regulators, has signed off on the practice, and Meta says it is complying with legal requirements. The company claims its goal is to make its AI more representative of diverse cultures, languages, and social norms.
How Meta’s AI Training Works
Other companies, like Google and OpenAI, also use user-generated content to train their AI models. Meta says its approach is more transparent, and users can opt out. European users will receive emails and notifications with a link to a form to reject the use of their content.
To opt out, users need to:
- Access the form via the link in Meta’s notice or through its privacy policy
- Indicate they don’t want their public info used for AI training
- Provide a reason (any reason is accepted)
- Submit the form
Users must submit the form before the deadline; Meta does not guarantee that data already used for training will be removed afterwards.
Concerns and Implications
While Meta says this will make its AI better suited to European diversity, many users are worried about their privacy. The move has sparked a broader debate about how big tech companies use personal data and how much control users actually have. The EU's stricter regulatory context forced Meta to adjust its plans, and now, with the European Data Protection Board's approval, the company is moving forward. Attention will turn to how Meta implements the change and safeguards users' rights.
The European Center for Digital Rights (NOYB) has filed complaints in 11 EU countries, arguing that Meta uses "dark patterns" to make opting out difficult, such as hard-to-find forms and the requirement to state a reason.