X, the social media platform owned by Elon Musk, has been targeted with a series of privacy complaints after it helped itself to the data of users in the European Union for training AI models without asking people's consent.
Late last month an eagle-eyed social media user spotted a setting indicating that X had quietly begun processing the post data of regional users to train its Grok AI chatbot. The revelation led to an expression of "surprise" from the Irish Data Protection Commission (DPC), the watchdog that leads on oversight of X's compliance with the bloc's General Data Protection Regulation (GDPR).
The GDPR, which can sanction confirmed infringements with fines of up to 4% of global annual turnover, requires all uses of personal data to have a valid legal basis. The nine complaints against X, which have been filed with data protection authorities in Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Poland and Spain, accuse it of failing this step by processing Europeans' posts to train AI without obtaining their consent.
Commenting in a statement, Max Schrems, chairman of privacy rights nonprofit noyb, which is supporting the complaints, said: "We have seen countless instances of inefficient and partial enforcement by the DPC in the past years. We want to ensure that Twitter fully complies with EU law, which – at a bare minimum – requires to ask users for consent in this case."
The DPC has already taken some action over X's processing for AI model training, instigating legal action in the Irish High Court seeking an injunction to force it to stop using the data. But noyb contends that the DPC's actions thus far are insufficient, pointing out that there's no way for X users to get the company to delete "already ingested data." In response, noyb has filed GDPR complaints in Ireland and eight other countries.
The complaints argue that X lacks a valid legal basis for using the data of some 60 million people in the EU to train AI models without obtaining their consent. The platform appears to be relying on a legal basis known as "legitimate interest" for the AI-related processing, but privacy experts say it needs to obtain people's consent.
"Companies that interact directly with users simply need to show them a yes/no prompt before using their data. They do this regularly for lots of other things, so it would definitely be possible for AI training as well," suggested Schrems.
In June, Meta paused a similar plan to process user data for training AIs after noyb backed some GDPR complaints and regulators stepped in.
But X's approach of quietly helping itself to user data for AI training without even notifying people appears to have allowed it to fly under the radar for several weeks.
According to the DPC, X was processing Europeans' data for AI model training between May 7 and August 1.
Users of X did gain the ability to opt out of the processing via a setting added to the web version of the platform, seemingly in late July. But there was no way to block the processing prior to that. And of course it's tricky to opt out of your data being used for AI training if you don't even know it's happening in the first place.
This is important because the GDPR is explicitly intended to protect Europeans from unexpected uses of their information that could have ramifications for their rights and freedoms.
In arguing the case against X's choice of legal basis, noyb points to a judgment by Europe's top court last summer, in a case stemming from a competition complaint over Meta's use of people's data for ad targeting, in which the judges ruled that legitimate interest was not a valid legal basis for that use case and that user consent should be obtained.
noyb also points out that providers of generative AI systems typically claim they're unable to comply with other core GDPR requirements, such as the right to be forgotten or the right to obtain a copy of your personal data. Such concerns feature in other outstanding GDPR complaints against OpenAI's ChatGPT.