Meta scored a major victory in its fight to use European users’ data for AI training, despite fierce opposition from privacy groups and regulators. By claiming a “legitimate interest” under European privacy law, the social media giant bypasses the need to get users’ explicit permission before their data is used.
Meta sidesteps explicit consent requirements by claiming legitimate interest to train AI on European users’ data.
This move affects over 400 million European Facebook and Instagram users. Under Meta’s new policy, people’s posts, photos, and other content will automatically be included in AI training unless they actively opt out. That’s different from traditional privacy rules that require companies to ask permission first.
Privacy watchdog NOYB sent Meta a cease-and-desist letter demanding that the company stop these practices. German consumer protection authorities also issued legal warnings. Meta gave European users until May 27, 2025, to opt out, while NOYB demanded answers by May 21.
The controversy deepened when some users who had already opted out were asked to do it again. Critics say this confuses people and makes it harder for them to protect their privacy. Meta’s approach turns privacy protection upside down: instead of asking “Can we use your data?”, the company is effectively saying “We’ll use your data unless you tell us not to.”
Meta plans to use this data to train its AI tools, including its open-source Llama models. The company’s privacy policy explicitly states that it uses shared information from users, including posts, photos, and captions. Once these AI systems learn from user data and are released publicly, it is nearly impossible to remove that information later. Even if someone asks Meta to delete their data, it may already be baked into AI models that anyone can download.
Legal experts warn that Meta could face massive fines and lawsuits. Privacy groups are threatening class-action cases that could cost billions of euros, arguing that making money is not a valid “legitimate interest” that overrides people’s privacy rights. In August 2023, following regulatory pressure, Meta had already switched its legal basis for targeted advertising from legitimate interest to consent.
The battle highlights growing tensions between tech companies hungry for data to train AI and European laws designed to protect privacy. Regular ethical audits could help ensure fairness and transparency in how Meta and other companies use this massive trove of personal data. Authors and content creators have joined the legal fight, worried that their work is being used without permission.
As AI development accelerates, this conflict between innovation and privacy protection will likely intensify.
References
- https://thehackernews.com/2025/05/meta-to-train-ai-on-eu-user-data-from.html
- https://www.ksby.com/science-and-tech/data-privacy-and-cybersecurity/meta-faces-new-lawsuit-for-making-eu-users-repeatedly-opt-out-of-ai-data-training
- https://authorsguild.org/news/meta-libgen-ai-training-book-heist-what-authors-need-to-know/
- https://noyb.eu/en/noyb-sends-meta-cease-and-desist-letter-over-ai-training-european-class-action-potential-next-step
- https://www.fingerlakes1.com/2025/05/14/meta-ai-data-europe-privacy-lawsuit-2025/