Meta glasses face privacy fight over intimate footage review by contractors

March 5, 2026 · Case Studies
#AI in Law
2 min read

Meta’s Ray-Ban Meta smart glasses are facing a fresh privacy fight after a Swedish investigation published on 27 February alleged that Kenya-based contractors working on Meta’s AI systems had reviewed sensitive material from users, including clips showing people undressing, using the toilet and exposing bank cards. The reporting has now spilled into the US, where a proposed class action filed on 5 March accuses Meta and Luxottica of selling the glasses with privacy promises the product may not fully support.

According to The Verge’s account of the Swedish reporting, the contractors were reviewing clips and transcriptions to help Meta check whether its assistant had correctly understood and answered spoken questions. That makes the nature of the “access” concrete: parts of what people recorded or asked through the glasses could move into a human review process tied to product testing and AI improvement.

Also read: Google is facing a wrongful-death lawsuit over Gemini and its safety defenses may now be tested in court.

Meta’s defence is likely to rest on consent and on where the product draws the line. The company has said photos and videos taken on the glasses stay on a user’s device unless that person chooses to share them with cloud features or third-party services. But that is also where the legal fight is likely to land.

The complaint says buyers were shown strong privacy claims and were not clearly told that content shared into Meta’s AI features could end up being reviewed by workers overseas.

The privacy policy says wake-word voice recordings can be stored in the cloud for up to a year to improve Meta’s products, while last year’s privacy changes made some camera-linked AI features effectively active unless users switched them off.

The class action is likely to argue that Meta’s privacy promises were misleading in light of how review worked in practice, while regulators are more likely to focus on the old problems this episode revives: weak notice for bystanders, data-retention defaults and whether users were given a real understanding of what happens once AI features are turned on. 

Y. Anush Reddy

Y. Anush Reddy is a contributor to this blog.