Amazon Ring Sparks Controversy With AI Facial Recognition

If you’re like me, you’ve looked at a doorbell notification and felt that quick thrill: Who is it? That brief moment of not knowing is exactly what makers of home intelligence technology want to remove. But recent updates have introduced a catch.
On December 9, 2025, TechCrunch reported that Amazon-owned Ring is launching “Familiar Faces,” an AI-based facial recognition feature for Ring video doorbells in the United States, first shown at the company’s fall device launch in October.
The pitch is simple: fewer generic alerts and more alerts that actually help. When you tag a contact in the Ring app (family members, neighbors, or frequent visitors), Ring says that, over time, notifications may show that contact’s name instead of a generic “Person detected” alert.
The tagging system activates only after a consumer opts in, and Ring’s own help documentation says detected faces are added to a “Familiar Faces” library controlled by the account holder.
The ‘Bystander Problem’: Why Privacy Advocates Are Concerned
What happens in “Familiar Faces” isn’t limited to the homeowner who turns it on. For a face to be labeled “familiar,” the system has to evaluate every face that appears in view, including people who never chose to be scanned. The Washington Post put the issue bluntly: “Where the technology is deployed, the faces of bystanders are scanned for a familiar match.”
Ring says this is optional. The setting is off by default and comes with restrictions: Ring’s support page says, “This requires a supported Ring membership and is not available in Texas, Illinois, or Quebec or in the Portland, OR area because of state and/or provincial laws.” It also “only works with 2K and 4K-enabled Ring devices.”
For lawyers tracking this, those limits line up with biometric privacy rules like Illinois’s Biometric Information Privacy Act (BIPA) and Texas’s Capture or Use of Biometric Identifier Act (CUBI), two major laws that cover facial scans in the United States.
This point was underlined by Sen. Ed Markey in a letter to Amazon CEO Andy Jassy on October 31, 2025, asking that the plan be dropped: “Even assuming that owners can enroll in this feature, this fix does not protect those who are ‘unknowingly captured.’”
There’s also a technical footnote: Ring’s docs say Familiar Faces can’t be used with end-to-end encryption (Ring Edge).
Consumers hear versions of this tradeoff constantly: smarter notifications in exchange for more of their data. But many people also wonder where their limit is. They want more intelligent alerts, along with clear disclosure of what their data can trigger. And they want reassurance that the most sensitive information a camera can collect won’t widen the blast radius if something goes wrong.
The Police-Access Question
Ring’s baggage is hard to ignore. Ring says it “receives and responds” to legally valid government requests, such as search warrants, subpoenas, and court orders, and that when it is legally required to produce information tied to a government demand, it complies.
That matters for Familiar Faces because Ring’s own docs say the feature is not compatible with E2E encryption, and that it “encodes your profiles and biometrics and stores them in encrypted form in cloud storage in your account.”
In practical terms, when data sits in a provider’s cloud (even in encrypted form) and is not end-to-end encrypted with keys held only by the customer, the provider may retain the technical ability to produce records in response to lawful process.
This worry is even stronger because Ring has repeatedly been connected to law enforcement workflows. In 2025, Ring resumed allowing police to request video footage from consumers through an Axon integration (with user consent). Whether or not Familiar Faces is meaningfully different from “share this clip,” the direction of travel pushes smart home video deeper into law enforcement use, not away from it.
The EFF has raised this issue, arguing that front-door face recognition is very different from unlocking a phone, because the “subjects” are often non-users, so it’s hard to claim meaningful consent when the camera faces public spaces.
This marks a shift in consumer AI from passive monitoring to active sorting. Once a camera can label identities, video becomes data that instantly categorizes visitors as “known” or “unknown.” The real challenge, however, isn’t an app setting: it’s how regulators respond to the idea of “opt-in” when the people being scanned aren’t opting in at all.
Y. Anush Reddy is a contributor to this blog.



