The European Union faces accusations of undermining its long-held privacy standards by proposing data protection reforms that could allow technology companies to use personal data for artificial intelligence training without explicit consent.
This potential shift marks the first significant proposed alteration to the General Data Protection Regulation (GDPR) since its inception. Critics contend the reforms would prioritize economic competitiveness over citizen privacy rights.
Activists warn that the changes represent a “massive degradation” of digital rights across the continent. Organizations like NOYB, founded by privacy advocate Max Schrems, describe the proposals as “a death by a thousand cuts” for the GDPR.
The core of the controversy lies in enabling tech firms to process personal information for AI development under the GDPR's “legitimate interest” legal basis. This bypasses the need for users’ direct, explicit permission, according to a report.
Such modifications could allow platforms to collect sensitive data, including browsing habits and personal preferences, without explicit authorization. Critics argue this opens the door to indiscriminate data usage.
The European Commission defends the initiative. It states the goal is to “reduce bureaucracy” and “adapt legislation to the pace of technological innovation” while upholding GDPR’s fundamental principles.
Further reforms target rules governing cookies and device access. Companies could gain access to information on personal devices like phones and computers, citing reasons such as fraud detection or audience measurement.
Itxaso Domínguez de Olazabal from the European Digital Rights network warned that these changes would “transform how Europe protects private spaces within personal devices.” This would dilute citizens’ control over their digital information.
If approved, the reforms could also fold cookie rules into the GDPR itself. This would effectively remove many of the current legal structures that mandate explicit consent through pop-up windows.
This policy pivot is influenced by external pressures. The United States government has previously labeled EU regulations as “anti-innovation,” particularly in response to privacy violation penalties imposed on companies like Meta.
Major technology companies have intensified their lobbying efforts in Brussels. They argue that EU regulations hinder AI development compared to the faster pace of innovation in Silicon Valley and China.
The proposed new approach includes a one-year moratorium on penalties related to deepfakes and other synthetic content. This grace period would give companies more time to label machine-generated creations.
Additionally, AI systems used internally, for limited functions, or in non-public applications would not require registration in the EU’s high-risk AI database.
The European Commission is set to formally present these amendments on November 19, though the final text may still undergo modifications.
This debate reflects a broader trend within European policy. The Commission has recently relaxed environmental regulations, citing the need to prevent investment flight and maintain global competitiveness.
Sources close to AI legislation admitted that “Europe is nervous about the possibility of losing investments.” The current priority is to project an image of flexibility and economic pragmatism.
However, many legislators are concerned about the symbolic cost of this shift. For a decade, the GDPR stood as a global benchmark and a “beacon of European values” in the digital realm.
Analysts warn that if the EU compromises its privacy principles, it could jeopardize its moral authority in technological governance. This discussion is seen as a crucial test for the European model of digital regulation.
Once considered a “global privacy standard,” the GDPR now risks becoming a diluted version, shaped by commercial and geopolitical pressures. If the reforms proceed, Brussels could move from resisting big tech to demonstrating how economic interests can override citizen rights.
