Meta has announced that it will begin training its artificial intelligence using public content shared on Facebook and Instagram, including content from European users. The decision has generated concern and controversy, especially among those who want to protect their personal data.
According to the announcement, only public content from adult users will be used, excluding private messages, posts with limited visibility, and content published by minors. However, this does not eliminate the risks: even if a user objects, their data could still end up in the training dataset, for example if it is published by third parties or is visible in comments on public posts.
To object to the use of their content, Meta has prepared an online form, accessible through the page dedicated to the privacy of its AI at this link. After submitting the form, the user receives a confirmation email. A single request covers all linked Meta accounts, so if you use both Facebook and Instagram it is not necessary to repeat the procedure.
Timing matters: Meta's training will begin at the end of May, and after that date it will no longer be possible to exclude content that has already been acquired. This makes it essential to act quickly if you want to prevent your data from feeding the artificial intelligence.
Objecting does not protect against what other users publish
A significant problem is that the objection does not cover what other users publish. For example, if someone posts a photo in which we appear, or mentions us, that information could still enter the training. And the risk is greater for people without an account, since they have no way of accessing the opt-out tools at all.
If Meta AI generates content containing personal information, it is possible to file a dispute: the user must provide the prompt that generated the response and a screenshot of the result. It should be noted, however, that discovering you have been mentioned requires active monitoring that few people will actually carry out.
The biggest fear is that Meta AI will become a conversational search engine similar to Perplexity, capable of offering unverified personal information. That prospect raises doubts not only about privacy, but also about the credibility and reliability of the generated responses. Will injecting disinformation into the data fed to learning models become the new frontier of hacking?