Meta's AI Ambitions Face Privacy Challenges in Europe
Meta plans to use public data from European users to train its AI models, despite stringent EU data privacy laws and concerns raised by privacy activists. The company aims to better reflect European languages and cultures in its AI, while offering users an opt-out form.

Meta is set to use public data from its European users to train its artificial intelligence models, the social media conglomerate announced on Monday. As Meta strives to stay competitive with AI giants like OpenAI and Google, the company faces significant hurdles due to strict EU data protection regulations.
To ensure that its AI models reflect Europe's diverse languages and cultures, Meta has emphasized the need to incorporate public content from these users. However, Vienna-based privacy group NOYB, led by activist Max Schrems, has urged national privacy watchdogs to halt Meta's AI training plans, warning that the company could breach privacy laws.
Meta's AI models, including the newly developed Llama, are trained on extensive datasets to enhance their predictive capabilities. Although Meta already offers AI features in the U.S. and 13 other countries, it has yet to launch them in Europe. The company clarified that it will not use private messages or data from users under 18 in its AI training. European users have been notified of the plans and given the option to opt out, with the updated privacy policy taking effect on June 26.