Artificial Intelligence to Analyze and Process Your Future Facebook Posts
Meta, the parent company of Facebook, Instagram, WhatsApp, and Threads, has updated its privacy policies to allow the use of user data for AI training purposes. This includes conversations with AI chatbots across these platforms [1][5].
The policy change involves hiring external contractors who may access and review user interactions with Meta’s AI systems, sometimes exposing personal data such as names, phone numbers, emails, selfies, and more [1][5]. Meta claims to have "processes and guardrails" in place to limit what personal data contractors see and instructs them on handling such data responsibly. However, contractors have reported frequently encountering personally identifiable information in the datasets they assess for AI training [1][5].
Privacy concerns have been raised, particularly in Europe, where privacy advocates like Max Schrems and his group NOYB have criticized Meta’s approach. They argue that many users are unaware their data is used this way, and that only a small minority consent to training AI models on their social media data [2].
For users who wish to object or opt out, the options vary by region due to legal requirements. European users are protected under the GDPR, which mandates user consent and grants rights to limit data processing, including uses related to AI training. In practice, however, users may not be explicitly asked to consent to AI training, or may not be fully informed about it [2].
Meta provides a form through which users can report AI-generated responses that include their personal data. To use the form, users must submit evidence of the prompts or messages they sent to the AI, along with the AI's responses containing that personal data [1]. Users can also attach further information to help Meta assess their objection.
It is important to note that Meta may still process information about a user to develop and improve its AI, even if the user objects or does not use Meta's products and services [1]. These changes are reflected in the company's privacy policy dated June 26, 2024 [1].
Facebook's privacy policy links to a form for objecting to this data collection; the same form can be reached on Instagram via a dedicated link [1]. Meta clarifies that it does not use the content of private messages to train its AI models [1].
As for what comes next, the European Union may yet respond to this latest intrusion into its citizens' privacy [1]. Users concerned about privacy are advised to avoid sharing personal information in AI chatbot conversations, review and adjust their privacy settings on Facebook, Instagram, and WhatsApp, exercise their regulatory rights where applicable, and monitor official Meta communications for privacy policy updates and new data control options [2].
For a test of Meta's AI, readers can refer to the article titled "A LLaMA that spits out posts: Our test of Meta's AI" [1]. It remains to be seen how this development will unfold in the coming days and weeks.
[1] Source: [Link to the article or news site]
[2] Source: [Link to the article or news site]
[3] Source: [Link to the article or news site]
[4] Source: [Link to the article or news site]
[5] Source: [Link to the article or news site]