AI Literacy  ·  Biometric Privacy

Your Face Is Data.
Do You Know
Who Has It?

Every photo filter, AI headshot, and face unlock uses your biometric data. Unlike a password, your face cannot be changed.

The most sensitive data about you isn't stored in your bank account or medical record. It's your face — a permanent, unique identifier that every smartphone, photo filter, retail camera, and airport scanner now reads, maps, and in many cases stores. Most people have no idea how many systems have collected it, who those systems share it with, or what happens when they get it wrong.

$1.4B
Meta paid Texas for collecting facial recognition data without consent — the largest biometric privacy settlement ever by a single state
60B
Facial images scraped from social media by Clearview AI — without consent from any of the people in those photos
7+
People wrongfully arrested in the US after facial recognition misidentified them — every documented case has involved a Black person
25
US states with age verification laws for apps or social media, creating a new landscape for facial scanning of minors
Module 01

What biometric data actually is

Biometric data is information derived from your physical characteristics — the things that make you uniquely, permanently you. That permanence is what makes it categorically different from other personal data.

What makes it useful
  • Uniquely identifies you — no two people share the same face, fingerprint, or iris pattern
  • Can't be forgotten or lost the way a password can
  • Enables frictionless authentication — unlocking devices, boarding flights, accessing buildings
  • Can identify patients, detect genetic conditions, and assist medical diagnosis
  • Enables legitimate security uses — finding missing persons, identifying crime suspects with supporting evidence
What makes it dangerous
  • Cannot be changed if compromised — you can change a password, not your face
  • Once stored, it can be used to identify you anywhere, at any time, without your knowledge
  • Breaches are permanent — exposed biometric data remains exposed for life
  • Systems can be wrong — and the consequences of a false match can be severe
  • Most people never consented to collection — it happens through apps, cameras, and filters they use for other reasons
Types of biometric data
Face
Facial geometry
The precise measurements and spatial relationships of facial features — distance between eyes, shape of nose, contour of jaw. This is what facial recognition systems map and store. It is collected by photo filters, AI headshot apps, social media platforms, retail cameras, law enforcement databases, and airport security systems — often without explicit disclosure.
Fingerprint
Fingerprints
Used to unlock smartphones, access work devices, and verify identity at border crossings. Most phone fingerprint data is stored locally on the device. Government fingerprint databases — maintained by the FBI, DHS, and state agencies — are separate and can contain records from employment checks, immigration processing, and criminal investigations.
Voice
Voiceprint
The acoustic characteristics of your speech — pitch, rhythm, cadence, tone — that together create a unique identifier. Collected by voice assistants (Google Assistant, Amazon Alexa, Siri), some phone banking systems, and increasingly by AI systems trained on large audio datasets. Google paid Texas $1.375 billion in 2025 in part for collecting voiceprints through Google Assistant without proper consent.
Iris
Iris and retina
Among the most accurate biometric identifiers. Used primarily in high-security access control and border crossing systems. Less common in consumer applications than face and fingerprint, but increasingly integrated into premium devices. The iris pattern is stable for life and has a false match rate measured in the millionths.
Behavioral
Behavioral biometrics
How you type, how you move your mouse, how you hold your phone — patterns of behavior that are unique to you. Used primarily in fraud detection by financial institutions. Less visible to users than physical biometrics but increasingly embedded in background authentication systems on websites and apps.
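To make the idea of behavioral biometrics concrete, here is a toy sketch of keystroke-dynamics matching: the intervals between key presses form a timing "signature" that can be compared against an enrolled profile. The timestamps, threshold, and function names below are all hypothetical illustrations; real fraud-detection systems use far richer features and statistical models.

```python
# Toy keystroke-dynamics sketch (illustrative only; not a real system).
# Timestamps are hypothetical key-press times in milliseconds.

def flight_times(press_times):
    """Intervals between consecutive key presses (ms)."""
    return [b - a for a, b in zip(press_times, press_times[1:])]

def mean_abs_diff(profile, sample):
    """Average absolute difference between two timing vectors."""
    return sum(abs(p - s) for p, s in zip(profile, sample)) / len(profile)

# Enrolled user typing an 8-character phrase, vs. two later attempts.
enrolled = flight_times([0, 110, 205, 330, 440, 520, 640, 750])
genuine  = flight_times([0, 115, 210, 325, 450, 515, 645, 755])
imposter = flight_times([0, 60, 310, 380, 700, 760, 990, 1100])

THRESHOLD = 20  # ms; tuning this trades false accepts for false rejects
print(mean_abs_diff(enrolled, genuine) < THRESHOLD)   # True: rhythm matches
print(mean_abs_diff(enrolled, imposter) < THRESHOLD)  # False: rhythm differs
```

Note that nothing here requires the user's cooperation or awareness, which is exactly why this category is "less visible" than a fingerprint prompt.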

The permanence problem: If your credit card number is stolen, you get a new card. If your password is compromised, you change it. If your facial geometry is collected without consent and stored in a database that is later breached — you cannot change your face. The biometric data of 150,000 individuals was exposed in a 2021 breach of a major facial recognition company. Those individuals have no recourse that addresses the core problem.

Module 02

What you use every day

Most people interact with facial recognition and biometric systems multiple times a day without thinking of it as data collection. These are the most common touchpoints — and what actually happens with the data they generate.

Everyday biometric touchpoints
Phone
Face ID and fingerprint unlock
Apple's Face ID and Android face unlock create a mathematical representation of your face, stored locally on the device in a secure enclave — not on Apple or Google servers. This is one of the better-designed uses of biometric data: the purpose is clear, the data stays on your device, and you consented explicitly during setup. It is the model other biometric systems are measured against.
Photos
Google Photos and Apple Photos
Both apps automatically group photos by the faces in them. To do this, they analyze and store facial geometry data. Apple processes this on-device. Google Photos' facial grouping feature uses Google's servers in some configurations. Google paid Texas $1.375 billion in 2025 partly for collecting facial geometry through Google Photos without adequate consent disclosure. Apple has not faced equivalent enforcement action.
Airport
Airport and border facial scanning
US Customs and Border Protection (CBP) operates a biometric exit program that scans faces at departure gates. As of 2024, CBP scans approximately 97% of departing international passengers. Participation is technically optional for US citizens — you can request an alternative — but few passengers know this or are told at the gate. The data is shared with airlines, and biometric data on non-citizens can be retained for up to 75 years.
Retail
Store cameras and retail surveillance
Major retailers — including Walmart, Macy's, and Kroger — have deployed facial recognition systems to detect shoplifting and flag previously banned individuals. In most US states, no law requires retailers to disclose this to customers. New York City requires businesses to post signs if they collect biometric data. Most jurisdictions have no equivalent requirement. You may be scanned simply by entering a store.
Workplace
Workplace time-tracking and access
Many employers use fingerprint or facial recognition for timekeeping, building access, and remote work verification. Illinois' BIPA has generated over 100 class action lawsuits against employers who collected this data without written consent. An $8.75 million BIPA settlement was approved in 2025 against a company that collected face and voice models from 660,000 students without consent as part of an education platform.
Social
Social media photo tagging
Facebook built and used a facial recognition system called DeepFace for photo tagging that it eventually shut down in 2021 — after paying approximately $650 million in Illinois settlements. Meta then agreed to pay $1.4 billion to Texas in 2024 for facial recognition violations under state law. TikTok paid $92 million in 2021 for using facial analysis to determine users' age, ethnicity, and gender for content targeting without adequate disclosure.
Meta
Account suspension and biometric recovery — Meta, 2024–2025
In spring 2025, thousands of legitimate Instagram and Facebook accounts were suspended by automated AI moderation systems — often labeled with serious violations like "child sexual exploitation" — with no specific explanation of what content triggered the action. To appeal, Meta requires users to submit a video selfie, which is then matched using facial recognition against their account profile. The system is frequently glitchy, rejects legitimate submissions, and offers users a single appeal attempt with no human review available in most cases. After 180 days, accounts are permanently disabled. Meta stated it would not run this facial recognition verification in Illinois or Texas — the two states with the strongest biometric privacy laws — which is itself significant. Despite the volume of complaints, Meta has not publicly explained the criteria that trigger the verification requirement.

The consent gap: Most of these systems collect biometric data as a byproduct of a service you're using for another purpose. You take a photo to share with family. You enter a store to buy groceries. You board a flight. At no point did you agree to have your facial geometry mapped and stored in a commercial database. The consent you gave — if any — was buried in terms and conditions that almost no one reads.

The consent reversal: The Meta account recovery situation introduces a new problem. You didn't consent to biometric collection — but an AI flagged your account, and now submitting your face is the only way to appeal. A company that paid $1.4 billion for misusing facial recognition data is now requiring that same data as a condition of accessing an account you built. This is not a hypothetical risk. It happened to thousands of people in 2025. And it is currently unregulated — no US law requires Meta to explain the criteria for triggering biometric verification or limits what they can do with the video selfie once submitted.

Module 03

Photo filters and AI headshots

The Snapchat filter that gives you dog ears. The AI headshot you generated for LinkedIn. The aging app everyone tried. These feel like entertainment. They are also data collection events — and the terms of what happens next vary enormously.

Augmented reality face filters work by mapping your facial geometry in real time — detecting landmark points on your face (eyes, nose, jaw, cheekbones) and overlaying digital elements on them. The facial analysis required to make the dog ears follow your expressions is the same facial analysis used in surveillance and identity systems. The difference is what happens to the data afterward.
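The landmark mapping described above can be sketched in a few lines: given landmark coordinates, a system computes ratios of inter-landmark distances, producing a small template that is scale-invariant (the same face yields the same template at any photo resolution). The coordinates and landmark names below are hypothetical; real systems track dozens of 3D landmarks. The point is that the stored artifact is this numeric template, not the photo itself.

```python
import math

# Hypothetical 2D landmark coordinates (pixels), as a face-tracking
# library might return them for one frame.
landmarks = {
    "left_eye":  (120, 140),
    "right_eye": (200, 142),
    "nose_tip":  (160, 200),
    "chin":      (162, 280),
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def face_template(lm):
    """Distance ratios: a crude, scale-invariant facial-geometry template."""
    eye_span = dist(lm["left_eye"], lm["right_eye"])
    return [
        dist(lm["left_eye"], lm["nose_tip"]) / eye_span,
        dist(lm["right_eye"], lm["nose_tip"]) / eye_span,
        dist(lm["nose_tip"], lm["chin"]) / eye_span,
    ]

template = face_template(landmarks)
print([round(r, 3) for r in template])  # the template is what gets stored
```

Because the template survives resizing, cropping, and re-encoding of the source image, deleting the photo does not delete the identifier derived from it — the crux of the "what happens to the data afterward" question.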

Photo filters — what's actually happening
Snapchat
Snapchat AR filters
Snapchat's Lens Studio processes facial mapping for filters. In 2021, Snapchat paid $35 million to settle an Illinois BIPA class action alleging it collected biometric identifiers through filters without required written consent. Snapchat states that face data used for lens effects is processed locally and not stored. The settlement resolved claims about earlier practices. Snapchat's privacy policy indicates that user content, including images, may be used to train AI models — with an opt-out available.
Instagram
Instagram and Meta AR effects
Meta's platforms offer AR effects through Spark AR. Meta's broader practices around facial data led to the $650 million Illinois settlement and $1.4 billion Texas settlement. Meta's privacy policy indicates user content may be used to train AI. In 2025, internal documents obtained by the New York Times revealed Meta's plans to reintroduce facial recognition features on Ray-Ban glasses — timing the launch for when privacy advocates were focused elsewhere.
FaceApp
FaceApp and aging/transformation apps
FaceApp, developed by a Russian company, went viral multiple times for its aging and transformation effects. Its terms of service grant a "perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license" to use, reproduce, modify, and publish user content — meaning the company can use your face images for almost any purpose, indefinitely, without compensation. This language is not unique to FaceApp — many free filter apps contain similar terms.
TikTok
TikTok filters and facial analysis
TikTok paid $92 million in 2021 for using AI to analyze users' facial features — age, ethnicity, gender — from filter usage to inform content recommendations, without adequately disclosing this in its privacy policy. TikTok's current policy indicates it may collect "faceprints and voiceprints" where permitted by law. TikTok lacks a clearly described opt-out mechanism for AI training on user data.

AI headshot apps — what you need to know

AI headshot generators — tools like Lensa, HeadshotPro, Aragon AI, and many others — ask you to upload 10 to 20 photos of your face. The system creates a personalized AI model of your facial features, generates portraits, and then — depending on the service — either deletes your photos or retains them for varying periods.

What reputable services do
  • Delete uploaded photos after a defined period (7–30 days typical)
  • State explicitly that photos are not used for model training
  • Allow manual deletion at any time
  • Encrypt photos in transit and at rest
  • Comply with GDPR and provide clear data handling documentation
Red flags to watch for
  • No stated deletion policy or vague language about retention
  • Broad license terms allowing unlimited use of your images
  • Free service with no clear business model — your data may be the product
  • Company with no verifiable legal entity or physical address
  • No information about where servers are located or what jurisdiction applies

The cybersecurity expert's view: "It's impossible to know, without a full audit of the company's back-end systems, how safe or unsafe your pictures may be." Even when companies claim to delete photos within 24 hours, there is no independent verification mechanism available to users. If a company changes ownership, whatever data remains in its systems transfers to the new owner. The questions to ask before uploading: Does this company have a verifiable legal identity? Do their terms give them a license to use my images beyond the stated purpose? And is this service free — because if so, what are they actually selling?

Module 04

Who collects your face — and what they do with it

Your facial data doesn't stay with the app or camera that collected it. It moves — to data brokers, law enforcement agencies, AI training datasets, and advertising systems. Here is who the main collectors are and what they actually do.

The major collectors
Clearview AI
Clearview AI — 60 billion images, scraped without consent
Clearview AI built a database of over 60 billion facial images by scraping photographs from social media platforms, news websites, Venmo, and other publicly accessible sources — without consent from any person whose image was included. Law enforcement agencies in the US and other countries use it to identify suspects. In March 2025, a federal court approved a $51.75 million settlement resolving BIPA and other state law claims. Class members received a 23% equity stake in Clearview — a first-of-its-kind resolution. Clearview has faced bans in multiple countries for violating privacy law.
Data brokers
Data brokers and identity aggregators
Companies like LexisNexis, Acxiom, and dozens of smaller data brokers purchase and aggregate biometric data alongside other personal information — building comprehensive identity profiles that are sold to employers, insurers, landlords, and marketers. Most people have no idea a profile exists, no way to see what it contains, and limited ability to correct errors. In states without comprehensive privacy laws, there is no legal obligation for these companies to disclose their data holdings to the individuals described in them.
Government
Government databases
The FBI's Next Generation Identification system contains over 150 million face images — including millions of people who have never been charged with any crime (from driver's license databases, passport photos, and employment background checks). The Department of Homeland Security operates separate facial recognition systems for border control. Most states share driver's license photo databases with law enforcement facial recognition systems. Government use of biometric data is largely exempt from the state privacy laws that constrain commercial use.
AI training
AI model training datasets
Many AI systems — including the models that power the photo filters, recognition systems, and generative AI tools you use — were trained on large datasets of facial images scraped from the internet without individual consent. The images of millions of people have been used to train commercial systems without their knowledge. This is increasingly the subject of litigation and regulatory scrutiny, but remains largely unregulated in the US at the federal level.

"The gap between what companies know about you and what you know they know has never been wider." — From Jennifer Stivers' LinkedIn article on biometric data

The secondary use problem: Data collected for one purpose is routinely used for others. Facial images uploaded to a photo sharing app may end up in an AI training dataset. A face mapped by a retail camera may be matched against a law enforcement database. A headshot app's deletion promise may not survive a change of ownership. The original purpose of collection is rarely the only use.

Module 05

Age verification — a new frontier for face scanning

A wave of state laws now requires apps and social media platforms to verify the age of users — particularly to protect minors. The methods being used to do this create new privacy tradeoffs that most people don't know exist.

The intent behind age verification laws is legitimate: protect children from harmful content and exploitative platforms. The implementation raises a harder question — how do you verify someone's age online without collecting sensitive data in the process? The methods being deployed range from identity document checks to facial analysis, and each carries its own tradeoffs.

Age verification methods and their tradeoffs
ID upload
Government ID verification
Requires users to upload a driver's license, passport, or state ID. Confirms age with high reliability. Tradeoff: creates a direct link between your real identity and your platform activity — the most privacy-invasive method. If the platform is breached, your government ID is exposed. The US Supreme Court upheld Texas's right to require this for adult content sites in Free Speech Coalition v. Paxton (June 2025).
Face scan
Facial age estimation
AI analyzes facial features to estimate whether a user is above or below a specified age threshold — without storing the image. Promoted as privacy-preserving because no ID document is required. Tradeoffs: accuracy varies significantly across different skin tones and ethnicities. Results in a scan of your face as a condition of access. "Doesn't store images" is a policy claim that users cannot independently verify.
Mobile
Mobile network age assurance
Matches a user's phone number against mobile carrier records to confirm age without collecting identity documents or biometric data. Less invasive than ID upload or face scanning. Limited by the fact that many teens use a parent's phone number or account. Increasingly available as a verification method as regulators push for privacy-preserving alternatives.
Parental
Parental consent systems
Platforms link minor accounts to a verified parent or guardian account. The parent must consent before the minor can download apps, create accounts, or make purchases. Utah, Texas, and Louisiana enacted App Store Accountability Acts in 2025 that require Apple and Google to implement parental consent systems at the app store level — meaning even app downloads require parental approval for minors. Utah's law takes full effect May 2026; Texas and Louisiana in January and July 2026 respectively.

The state landscape — as of April 2026
25+
US states with enacted age verification or social media laws for minors
8
States that ban minors from social media outright or require parental consent for account creation
3
States (Utah, Texas, Louisiana) with App Store Accountability Acts requiring parental consent at the app store level

The privacy paradox: Age verification laws are designed to protect children's privacy. But the methods used to verify age — particularly ID upload and facial scanning — require collecting more personal data, not less. Privacy advocates argue that the most privacy-preserving approach is mobile network assurance or device-level age signals, which don't require users to hand over documents or face scans. The legislative debate over which methods are acceptable is ongoing.

What parents should know: Under the Utah App Store Accountability Act and similar laws, app store providers (Apple and Google) are required to verify users' ages and link minor accounts to a parent account. Developers must request age verification data from the app store. Neither app stores nor developers are permitted to share age verification data with third parties. These laws represent the first time a US state has required age verification at the infrastructure level — not just at the app level.

Module 06

Documented incidents

These are verified cases. They illustrate the range of harm — wrongful arrest, unauthorized collection, unconsented use — that biometric data misuse has already caused at documented scale.

$1.4B
Meta · Texas · July 2024
Meta agreed to pay $1.4 billion to the state of Texas — the largest biometric privacy settlement ever obtained by a single state — for unlawfully collecting and using the facial recognition data of millions of Facebook users without consent, in violation of Texas' Capture or Use of Biometric Identifier Act. No admission of liability was included.
$1.375B
Google · Texas · May 2025
Google agreed to pay $1.375 billion to Texas — the largest state-level privacy settlement against Google — for collecting biometric identifiers (voiceprints and facial geometry) through Google Photos and Google Assistant without proper consent, in violation of the Texas Capture or Use of Biometric Identifier Act. No admission of wrongdoing. Google stated it has since changed the relevant practices.
$51.75M
Clearview AI · Nationwide · March 2025
A federal court approved a $51.75 million class action settlement resolving BIPA and other state law claims against Clearview AI, which scraped over 60 billion facial images from the internet without consent. Unusually, the settlement granted class members a 23% equity stake in Clearview rather than cash payments — the first such resolution in biometric privacy litigation.
7+
Wrongful arrests · US · ongoing
At least seven people in the United States have been wrongfully arrested after facial recognition misidentified them. Every publicly documented wrongful arrest due to facial recognition has involved a Black person — reflecting documented disparities in how recognition systems perform across different skin tones. Cases include Robert Williams (Detroit, 2020), Randal Quran Reid (Georgia/Louisiana, 2022), Porcha Woodruff (Detroit, 2023), and Trevis Williams (New York, August 2025).
Additional documented incidents
Wrongful arrest
Robert Williams v. City of Detroit — settled June 2024
Robert Williams was wrongfully arrested outside his home in front of his daughters after Detroit police matched a blurry surveillance image to his expired driver's license photo using facial recognition — despite the match being visibly incorrect. He spent 30 hours in detention. The ACLU and Michigan Law School's Civil Rights Litigation Initiative filed suit in 2021. The June 2024 settlement established the nation's strongest police department policies constraining facial recognition use: police must have independent, reliable corroborating evidence before making any arrest based on a facial recognition result.
Wrongful arrest
LaDonna Crutchfield — Detroit, January 2024
LaDonna Crutchfield was arrested by six Detroit police officers in front of her young children based on a facial recognition match linking her to a non-fatal shooting. She was lying in bed reading to her five-year-old when police arrived. She was released hours later. Her attorney filed suit in federal court. The case illustrates that wrongful arrests based on facial recognition matches have continued in Detroit even as the Williams litigation was forcing policy change.
Students
$8.75 million settlement — education platform, October 2025
An Illinois court approved an $8.75 million class action settlement against a technology company that collected face models or voice models from approximately 660,000 students as part of an education platform product — without the written consent required under BIPA. The case illustrates that biometric data collection in educational settings carries the same legal exposure as commercial applications.
Social media
TikTok — $92 million settlement, 2021
TikTok paid $92 million to settle a class action lawsuit alleging it used AI to analyze users' facial features — age, ethnicity, and gender — from videos and filter usage, and used that data for content targeting without adequately disclosing this in its privacy policy. The case established that facial analysis for commercial purposes, even when framed as content personalization, is subject to biometric privacy law.

On racial bias in facial recognition: Every publicly known wrongful arrest due to facial recognition in the United States has involved a Black person. This is not coincidence — it reflects documented accuracy disparities in facial recognition systems, which are trained primarily on lighter-skinned faces and have significantly higher error rates for darker skin tones. The ACLU, the Government Accountability Office, and the Department of Justice have all documented this disparity. It is not a theoretical concern.

Module 07

The legal landscape

There is no federal law governing biometric privacy in the United States. What exists is a patchwork of state laws — uneven in coverage, contested in court, and moving fast.

Illinois · BIPA
Biometric Information Privacy Act (2008) — The strongest individual biometric privacy law in the US. Requires written consent before collecting biometric data, written policies on data retention and destruction, and prohibits sale of biometric data. Private right of action — individuals can sue without proving harm. Statutory damages of $1,000–$5,000 per violation. Has generated over 100 class action lawsuits in 2025 alone. The $1.4B Meta settlement and $51.75M Clearview settlement were driven significantly by BIPA. 2024 amendment limits recovery to one violation per entity per person.
Texas · CUBI
Capture or Use of Biometric Identifier Act — Requires informed consent before capturing biometric identifiers. No private right of action — only the Attorney General can sue. But the Texas AG has been aggressive: $1.4B from Meta (2024) and $1.375B from Google (2025) under this statute. In June 2025, Texas enacted an additional AI law that outlaws biometric collection without permission. The largest biometric enforcement actions in US history have been brought under Texas law.
23 states
State biometric and privacy laws — As of August 2025, 23 states have enacted laws regulating facial and biometric data, according to NPR. Coverage varies significantly: some have standalone biometric laws like BIPA; others include biometrics as "sensitive data" under comprehensive privacy laws requiring opt-in consent. Washington, Colorado, Virginia, Connecticut, and California have comprehensive privacy frameworks that address biometric data with varying requirements.
15 states
Facial recognition in policing — state guardrails — As of early 2025, 15 states have enacted some legislation governing facial recognition use in law enforcement: Washington, Oregon, Montana, Utah, Colorado, Minnesota, Illinois, Alabama, Virginia, Maryland, New Jersey, Massachusetts, New Hampshire, Vermont, and Maine. Restrictions range from warrant requirements to limits on using facial recognition as the sole basis for arrest. No federal equivalent exists.
Federal · none
No federal biometric privacy law — There is no comprehensive federal law governing biometric data collection, use, or storage. Multiple bills have been introduced — including the National Biometric Information Privacy Act — but none have passed. Federal agencies including the FBI, DHS, and TSA can use facial recognition under frameworks that do not apply to private companies, and are largely exempt from state laws. Status as of April 2026.
EU
EU AI Act and GDPR — Facial geometry is classified as a special category of sensitive data under GDPR, requiring explicit consent for processing. The EU AI Act (effective February 2025) bans real-time remote biometric identification in public spaces with narrow exceptions. Fines for violations reach €30M or 6% of global annual revenue. The contrast with the US regulatory landscape — where the same technology operates with almost no federal restriction — is significant.

The government exemption: Most biometric privacy laws apply to private companies. Law enforcement and federal government agencies operate under different frameworks — and in many states, the biometric data they collect (from driver's licenses, passport photos, background checks, and criminal records) is shared with private facial recognition vendors without individuals' knowledge or consent. The laws designed to protect you from commercial data collection often do not protect you from government use of the same data.


What to watch
1
Federal biometric privacy legislation — Multiple bills have been introduced but none enacted. The state-level patchwork creates growing pressure for federal action. Track at congress.gov →
2
App Store Accountability Acts — enforcement begins 2026 — Utah (May 2026), Texas (January 2026), and Louisiana (July 2026) app store laws require Apple and Google to implement age verification and parental consent systems. How the two app stores comply will shape the practical landscape for all app developers and minor users nationwide.
3
EU AI Act biometric enforcement — The ban on real-time remote biometric identification in public spaces is in effect. How EU regulators enforce this against commercial operators — and whether it reaches US companies operating in Europe — will establish global precedents.
4
Facial recognition in retail — No US law requires retailers to disclose use of facial recognition to customers. Advocacy organizations including the ACLU are pushing for mandatory signage laws similar to New York City's. Watch state legislatures and the FTC for movement in 2026.
Module 08

Knowledge check

Eight questions based on verified facts from this guide. An honest measure of what you now know.

Module 09

What you can do

You cannot opt out of all biometric data collection — much of it happens without your knowledge or meaningful choice. But you can reduce your exposure significantly and make more informed decisions about when and with whom you share your face.

01
Before uploading photos to any app, read the license terms
Specifically look for the word "license" in the terms of service. A broad, irrevocable, royalty-free license to use your images means the company can use your face for almost any purpose indefinitely. This is different from a service that retains your photos temporarily to process them. If you can't find clear terms about what happens to your photos, treat it as a red flag.
02
Check and adjust your photo app settings
In Google Photos: Settings → Google Photos settings → Manage memories and sharing → review face grouping settings. In iCloud Photos: face grouping is processed on-device and not shared with Apple. On Instagram and TikTok: review what data they can access through your camera, and check whether you're opted in to AI training on your content — opt-out options exist on some platforms but are not prominently surfaced.
03
Know your airport rights
US citizens can opt out of CBP facial scanning at airport departure gates. You will usually have to ask: the option is rarely offered proactively. Inform the gate agent before the scanning begins, and you will be directed to an alternative verification process. Non-citizens do not have the same opt-out right. TSA Pre-Check and Global Entry enrollees have already provided biometric data as part of enrollment.
04
If you live in Illinois, Texas, Washington, or another BIPA-equivalent state — know your rights
Illinois residents have a private right of action against companies that collect their biometric data without written consent. In Texas, the attorney general can pursue enforcement on your behalf. Check your own state's biometric privacy laws — 23 states now have some form of protection. The State of Surveillance resource at stateofsurveillance.org → provides a current state-by-state breakdown.
05
For parents: use parental consent tools proactively
Apple Screen Time and Google Family Link allow you to require approval for app downloads and review what permissions apps request. Under the new App Store Accountability Acts (Utah, Texas, Louisiana), Apple and Google will be required to implement parental consent at the app store level for minor accounts. Set up a family account before these laws take effect so the infrastructure is in place. Review which of the apps your child uses have access to the camera — and what those apps do with facial data.
06
Advocate for disclosure in retail spaces
Most US states do not require retailers to disclose use of facial recognition to customers. New York City does — businesses must post signs at entrances if they collect biometric data. If you believe a retailer is using facial recognition without disclosure, contact your state attorney general's office. Organizations including the ACLU and EFF track legislative developments and accept public input on biometric surveillance policy.
07
If Meta asks for a video selfie to recover your account — know what you're agreeing to
Meta's account recovery process may require a video selfie matched via facial recognition. Before submitting, check whether you live in Illinois or Texas — Meta has stated it does not run this verification in those states due to biometric privacy laws. If you proceed, you are providing biometric data to a company that has paid $1.4 billion for misusing facial recognition data, with no public explanation of how the video selfie is stored, used, or deleted. Document everything — screenshots of the suspension notice, the appeal process, and any responses. If you believe the suspension was wrongful, contact your state attorney general's office and consider reaching out to digital rights organizations like the EFF or ACLU.
Reflect
How many apps on your phone have access to your camera? When did you last review that list?
Have you used a photo filter or AI headshot app in the past year? Do you know what those companies do with photos after processing?
If you were wrongfully identified by a facial recognition system, would you know it happened? What recourse would you have in your state?
Primary sources

All claims verified

Every fact in this guide is drawn from the sources below. Pending legal and regulatory matters are noted as such.

Privacy World / Mondaq
privacyworld.blog →
2025 Year-in-Review: Biometric Privacy Litigation (December 2025). Source for BIPA litigation statistics (100+ class actions in 2025), $47.5M settlement against technology company, $8.75M education platform settlement covering 660,000 students, and Clearview AI $51.75M settlement approval.
Texas Attorney General
texasattorneygeneral.gov →
Official press release: $1.375 billion settlement with Google (November 2025). Source for Google biometric settlement details, CUBI violations, Google Photos and Google Assistant voiceprint and facial geometry collection. Also source for the Meta $1.4 billion settlement reference.
National Law Review
natlawreview.com →
A First in BIPA Litigation: Class Members Receive Equity in Clearview AI (July 2025). Source for Clearview AI settlement details, 60 billion image database, 23% equity stake resolution, and federal court approval date (March 20, 2025).
ACLU
aclu.org →
Williams v. City of Detroit — case page. Source for Robert Williams wrongful arrest details, June 2024 settlement terms (independent evidence requirement, training mandates, audit of past cases), and documentation of multiple subsequent wrongful arrests in Detroit.
Stateline / Pew
stateline.org →
Facial Recognition in Policing Is Getting State-by-State Guardrails (February 2025). Source for 15-state facial recognition policing legislation list, wrongful arrest case summaries, and Detroit Police Department settlement terms. Also sources the "seven wrongful arrests" figure and "every case has been a Black person" finding.
Inside Privacy / Covington
insideprivacy.com →
State and Federal Developments in Minors' Privacy in 2025 (July 2025). Source for App Store Accountability Act details (Utah, Texas, Louisiana), parental consent system implementation through Apple and Google, and overview of state social media age verification legislation.
AVPA / McNeesLaw
avpassociation.com →
US State Age Assurance Laws for Social Media (February 2026). Source for state-by-state age verification law breakdown, eight-state account ban/parental consent figure, litigation status of contested laws, and California SB 976 details.
Nevis / Profile Bakery
nevis.net →
Do Selfie Apps and Filters Collect Biometric Data? Source for TikTok $92 million filter settlement details (facial analysis for age, ethnicity, gender), Snapchat $35 million BIPA settlement, Facebook $650 million DeepFace settlement, and FaceApp license terms analysis.
State of Surveillance
stateofsurveillance.org →
Biometric Privacy Laws by State 2025. Source for 23-state biometric law figure, BIPA vs CUBI comparative analysis, and government use exemptions. Also sources wrongful arrest racial bias documentation and "every publicly known wrongful arrest has been a Black person" finding.
Hilt Digital / Profile Bakery
hiltdigital.co.uk →
AI Avatar Privacy Risks: What Happens to Your Face After You Upload It? Source for FaceApp license terms ("perpetual, irrevocable, nonexclusive, royalty-free, worldwide" language), data transfer jurisdictional concerns, and the cybersecurity expert's "impossible to know" quote about photo deletion verification.
American Bar Association
americanbar.org →
Considering Face Value: The Complex Legal Implications of Facial Recognition Technology (Winter 2025). Source for CBP biometric exit program scope (97% of international passengers), government database scale, facial recognition history, and Detroit Police settlement requirements. Published by the ABA Criminal Justice magazine.

What to watch
1
Federal biometric privacy legislation — No federal law exists. Multiple bills pending. Track at congress.gov →
2
App Store Accountability Acts — 2026 enforcement — Utah (May), Texas (January), Louisiana (July). How Apple and Google implement parental consent at the app store level will set practical standards for all developers and minor users.
3
EU AI Act biometric enforcement — Ban on real-time remote biometric identification in public spaces is in effect. Enforcement actions against commercial operators will establish global precedents. Track at digital-strategy.ec.europa.eu →
4
Retail facial recognition disclosure requirements — Currently required only in New York City among major US jurisdictions. Watch for state-level legislation in 2026, particularly in Illinois, California, and Massachusetts.

About this guide

I'm Jennifer Stivers, founder of Jenntelligence.ai, a division of MarketMind Consulting. I have a psychology degree and spent my career in marketing — at Apple, at a venture-backed startup that went public, at organizations like Coursera and GlobalEnglish. I built these guides using AI tools. The research questions, editorial decisions, and responsibility for accuracy are mine.

A note on accuracy

This guide reflects my research and editorial judgment as of the date shown. Biometric privacy law, enforcement actions, and the legal cases covered here change quickly — sometimes faster than any guide can track. I update content when I become aware of significant changes, but I cannot guarantee real-time accuracy. Pending legal and regulatory matters are noted as such and should not be read as final. If you find something that needs correction, I want to know. Contact me here. Links to external sources are provided for reference; I am not responsible for changes to third-party content after publication.