Ireland’s Data Protection Commission investigates X’s alleged use of EU user data for training Grok AI, testing GDPR’s limits on AI development and data minimization principles.
The Irish Data Protection Commission (DPC) confirmed on 20 June 2024 that it is examining whether X processed EU users’ private messages, deleted tweets, and biometric data without a proper legal basis under Article 6 of the GDPR. The investigation follows whistleblower claims about data scraping practices and coincides with the European Data Protection Board’s new AI enforcement priorities.
Investigation Scope Reveals Systemic Challenges
The DPC’s probe focuses on three data types:
1) Direct messages between users
2) Deleted posts archived on X’s servers
3) Biometric data derived from profile photos
According to its 21 June press release, X failed to disclose location metadata processing during initial GDPR audits.
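The three categories above are exactly the kind of data a training pipeline would need to screen out to satisfy GDPR data-minimisation and lawful-basis requirements. The sketch below illustrates one way such a pre-ingestion filter might look; the field names, record shapes, and consent flag are hypothetical and do not reflect X’s actual systems.

```python
# Hypothetical sketch: exclude the data categories under DPC scrutiny
# (direct messages, deleted posts, biometric-bearing profile photos)
# before records reach an AI-training dataset. Schema is illustrative only.

EXCLUDED_KINDS = {"direct_message", "deleted_post", "profile_photo"}

def eligible_for_training(record: dict) -> bool:
    """Allow a record only if it is outside the excluded categories
    and carries explicit, purpose-specific consent for AI training."""
    if record.get("kind") in EXCLUDED_KINDS:
        return False
    # Article 6(1)(a)-style check: a blanket terms-of-service acceptance
    # would not set this purpose-specific flag.
    return record.get("consent_ai_training") is True

records = [
    {"kind": "public_post", "consent_ai_training": True},
    {"kind": "direct_message", "consent_ai_training": True},
    {"kind": "public_post", "consent_ai_training": False},
]
print([r["kind"] for r in records if eligible_for_training(r)])  # ['public_post']
```

The design point mirrors the regulators’ position: exclusion by data category is necessary but not sufficient, since each remaining record still needs its own purpose-specific legal basis.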
GDPR’s Article 6 Under Microscope
Legal experts say the case tests GDPR’s “lawful basis” requirements under Article 6. “Consent for AI training must be explicit and specific – broad terms of service won’t suffice,” stated EDPB spokesperson Marta Silva during a 19 June task force announcement.
Industry-Wide Implications Emerge
The investigation follows France’s CNIL fining Mistral €385,000 on 18 June for similar transparency failures. A 17 June Stanford study found that 68% of EU AI training datasets contain undocumented personal data, complicating compliance with the incoming AI Act’s data-provenance rules.
Historical Context: EU’s Evolving Tech Regulation
The probe continues Europe’s pattern of stringent tech oversight, building on Microsoft’s 2021 €525 million GDPR penalty over LinkedIn data practices. The current actions mirror the 2018–2020 GDPR enforcement waves that reshaped adtech, now extended to generative AI systems.
Previous digital transformations offer cautionary parallels. The 2010s mobile payment revolution in China saw regulators initially struggle with Alipay’s data practices before implementing strict controls. Today’s AI governance efforts aim to preempt similar scaling issues through proactive enforcement.