Thu Oct 03 2024

Meta confirms it may train its AI on any image you ask Ray-Ban Meta AI to analyze.

We recently asked Meta whether it trains its artificial intelligence on the images and videos users capture with the Ray-Ban Meta smart glasses. Initially, the company offered few details, but Emil Vazquez, Meta's policy communications manager, later clarified the matter: any image shared with Meta AI can be used to train its artificial intelligence, at least in regions such as the United States and Canada, where multimodal AI is available.

A company spokesperson had previously indicated that photos and videos taken with the Ray-Ban Meta would not be used to train Meta's AI, as long as the user did not submit them for analysis. However, once a user asks Meta AI to examine an image, it falls under a different set of policies. This means the company can accumulate a vast volume of data from its first consumer AI device, which in turn could help it build more powerful AI models. The only way for users to keep their data out of training is to avoid using Meta's multimodal AI features altogether.

The implications are concerning: Ray-Ban Meta users may not realize they are handing over large numbers of images, potentially including the interiors of their homes or personal documents, to train the company's AI models. While Meta representatives claim this is made clear in the Ray-Ban Meta user interface, company executives did not provide details when first asked.

Meta was already training its Llama models using any content that users publicly shared on Instagram and Facebook, but it has now broadened this definition to include everything users see through their smart glasses and then send to its AI chatbot for analysis. This becomes particularly relevant with the recent launch of new AI features that make it easier for users to invoke Meta AI more naturally, which could encourage more data sharing for training.

A new real-time video analysis feature was also announced, allowing a continuous stream of images to be sent to Meta's multimodal AI models. During a recent conference, the company promoted the ability to analyze wardrobe content and select an outfit; however, it was not mentioned that those images would also be used for training its models.

Meta also directed inquiries to its privacy policy, which states that "your interactions with AI features may be used to train AI models," but offered no further clarity beyond that. For its part, the Meta AI terms of service state that by sharing images, users agree to let the company analyze them, including facial features.

It is worth mentioning that Meta recently paid $1.4 billion to Texas to settle a legal case over its use of facial recognition technology. That case concerned a Facebook feature launched in 2011 called "Tag Suggestions," which was made opt-in in 2021. Furthermore, some of Meta AI's image features will not be available in Texas.

Finally, the company's privacy policies indicate that transcripts of voice conversations with Ray-Ban Meta are also stored by default to train future AI models, although users can opt out of voice recording when they first log in to the Ray-Ban Meta app. It is clear that companies like Meta and Snap are pushing smart glasses as a new form of computing, reigniting privacy concerns first raised in the era of Google Glass. Recently, it was reported that university students managed to hack the Ray-Ban Meta glasses to reveal personal information about the people the wearer looks at.