When Technology Sees Everything: Why Meta’s AI Glasses Scandal Demands a Reset in Trust, Ethics, and Governance


Having spent decades championing security, privacy, and robust governance, I’ve seen the pattern play out enough times to recognise it instantly: innovation races ahead, controls lag behind, and society ends up dealing with the fallout. The recent revelations about Meta’s Ray-Ban smart glasses should worry anyone who values ethics and public trust, and they should alarm every organisation handling personal data.

Investigations by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten revealed that contractors in Nairobi were reviewing footage captured by Meta’s AI-powered glasses. The footage included people undressing, using the toilet, discussing personal matters, or inadvertently exposing financial details such as bank cards. All of this occurred without the subjects’ knowledge or consent. Workers on the project reportedly described the experience bluntly: “We see everything.” The UK Information Commissioner’s Office (ICO) has since reached out to Meta, demanding answers on how such deeply intrusive content ended up in review queues and whether Meta’s practices meet UK data protection standards. For anyone in security and governance, the red flags are glaring.

Meta marketed the glasses as “designed for privacy, controlled by you,” but the reality seems to tell a different story. Reports indicate that the company promised footage would be anonymised through facial blurring, yet sources say the technology often failed, leaving identifiable individuals exposed to offshore contractors. In my world, that’s not just a privacy failure; it’s a governance failure, and a predictable one at that. Whenever a product promises always-on intelligence, the critical questions are: what data fuels that intelligence, where does it go, and who sees it? Convenience without governance is simply risk dressed up as progress.

The ethical concerns are just as severe as the security ones. Footage of private activities (intimate, vulnerable, deeply personal moments) was seen by people with no relationship to the recorded individuals, many of whom didn’t even know a camera was present. The echoes of the Google Glass controversy are clear: when technology creates an asymmetry of surveillance, where the wearer gains power and the public loses privacy, the social licence to operate collapses. We are witnessing that collapse all over again.

Regulators are paying attention. The ICO has described the situation as “concerning” and is seeking details from Meta regarding compliance with UK law, particularly cross-border data transfers and transparency obligations. Longstanding guidance emphasises transparency (individuals must know when and how they are being recorded), data minimisation (only the data necessary should be collected), and appropriate safeguards for international processing, all of which appear to have been severely lacking here. UK policing bodies have previously raised concerns about body-worn and public-facing cameras, particularly the risks of capturing unintended third-party data and the chain-of-custody issues inherent in offshoring footage analysis. When the capture device sits on someone’s face and is powered by an AI that “sees everything” in its environment, these risks escalate exponentially. Meta’s seven million smart glasses sold in 2025 only amplify the potential exposure.

For businesses investing in wearables, analytics, or AI-driven user augmentation, this should serve as a wake-up call. The governance failures here are striking: a lack of informed consent for wearers and bystanders, inadequate technical safeguards as anonymisation failed, misleading privacy assurances that contradicted how footage was actually handled, and poor oversight of third-party processors exposed to sensitive data under questionable ethical conditions.

The path forward requires a proactive approach. Treat every new technology as a potential risk vector until proven otherwise. For AI wearables, conduct a Data Protection Impact Assessment before deployment — not as a tick-box exercise, but as a living operational tool. Explicitly address risks to bystanders, scrutinise claims of AI-based anonymisation, demand full transparency from vendors regarding data storage and access, and implement clear internal policies for wearable use. You may not control the vendor, but you can control your environment.

Innovation cannot outpace ethics. Security and privacy are not constraints; they are enablers of sustainable innovation. When organisations cut corners, outsource accountability, or prioritise features over duty of care, incidents like this are inevitable. Meta’s AI glasses scandal is not just a misstep; it is a case study in what happens when ethics is optional and governance an afterthought. If we want a future where technology serves people rather than surveils them, this must be the moment we draw a line.

– Mike Gillespie, Advent IM
