In October 2016, Clare Garvie (@ClareAngelyn), Senior Associate at the Georgetown Center on Privacy & Technology, and a team of attorneys and researchers uncovered widespread use of facial recognition technology (“FRT”) with little to no oversight or accountability mechanisms, limited training to address bias, and the systematic build-up of databases of law-abiding citizens.[1] Clare’s work has helped shed light on several issues with facial recognition technology. We are honored to have her join us at Northeastern University School of Law on May 10, 2019 for About Face: The Changing Landscape of Facial Recognition. Clare will join Jennifer Lynch, Litigation Director at the Electronic Frontier Foundation, and Sue Glueck, Senior Director of Academic Relations at Microsoft, on the panel “Regulatory Possibilities & Problems.” This article will introduce Clare’s work, provide insight into her thoughts on principles published by corporate actors, and address recent developments in the field of FRT.

The published report, The Perpetual Lineup: Unregulated Police Face Recognition in America, revealed the sheer number of people affected by this practice. The results were astounding: “26 (potentially as many as 30) states allow law enforcement to…request searches against their databases of driver’s license and ID photos,” 16 states have agreements with the FBI, a single County Sheriff’s office ran “8,000 monthly searches on the faces of seven million Florida drivers,” only 2% of law enforcement agencies “found to use (or have used) face recognition” have a policy that “expressly prohibits” tracking constitutionally protected behavior, and over 117 million Americans appear in facial recognition databases.[2]

Nearly three years later, use continues to grow. Most recently, Amazon has come under attack for its sales of the facial recognition program “Rekognition” to law enforcement.[3] On March 25, 2019, a group of scientists and researchers published a “Concerned Researchers” letter criticizing Amazon’s response to a recent study that exposed Rekognition’s higher error rates when classifying the gender of darker-skinned women compared to lighter-skinned men. The Concerned Researchers letter is largely a product of federal and state legislators failing to implement proper oversight regimes for this burgeoning technology.[4] This failure is rather inexplicable when you consider that Clare and her colleagues gifted model legislation and an FRT use policy to lawmakers in The Perpetual Lineup.[5]

Regulatory Possibilities: A Prologue

The alignment of researchers from Facebook, Google, IBM, and Microsoft in this letter on the importance of legislation and policies to prevent the abuse of this powerful technology is significant.[6] The letter comes only a few months after Microsoft published principles to develop and deploy facial recognition technology.[7] While regulation of law enforcement use is still lacking, the Microsoft principles appear to have sparked interest in regulating commercial entities, leading Senator Roy Blunt to introduce a bill regulating commercial facial recognition.[8] The bill requires informed consent from individuals before collection and sharing can take place.[9] This is similar to the explicit written application requirements found in the model legislation proposed by Clare’s team.

However, where a written application process leaves a clear record that can be checked and used to hold government use accountable, requirements for affirmative consent are less helpful due to the information asymmetry between the company and the individual. Multiple scholars have noted the issues with notice and consent, and a technology like facial recognition, with its mostly invisible deployments, is likely to suffer the same problems.[10]

Market Movement 

The conclusions from The Perpetual Lineup appear to be influencing the market and social tolerance for facial recognition. Notably, the Microsoft Principles specifically ask that companies only deploy systems where laws are already in place that “define the parameters for the use of facial recognition…in public spaces,” upon order of a court or other judicial body for surveillance related to specific individuals, or in emergency situations “involving imminent danger or risk of death or serious physical injury…”[11] While not an exact match, the principles are in close alignment with the Model Police Face Recognition Use Policy that Clare and her team developed.[12]

Q&A with Clare Garvie

Clare was kind enough to respond to a few questions about the Microsoft Principles and what a proper system of oversight looks like. Please enjoy our brief Q&A, excerpted below:

Anthony: What was your reaction to Microsoft’s principles? Did they appear to align with your work’s proposed legislation?

Clare: “It is encouraging to see major companies acknowledge the unique risks face recognition poses to privacy, civil rights, and civil liberties, and to advocate for enhanced control. Microsoft’s commitment to ensuring fairness, transparency and consent, individual control, and non-discrimination in its development and deployment of face recognition is admirable. 

While they serve an important role, we should not view corporate principles, even if scrupulously adhered to, as a sufficient substitute for legislation. Legislators, acting on behalf of the citizens they represent, should be the ones to decide what constitutes adequate accountability and control over the use of face recognition systems.” 

Anthony: What would proper notice and consent to government use entail? Would it be preferable, given the risks of abuse and oppression in facial recognition technology, to have a centralized system for all facial recognition use that requires approval before deployment?

Clare: “In The Perpetual Line-Up, we recommend that the citizens on whom police face recognition technology will be used receive adequate notice and consent to certain aspects of its use. The former is achieved through public policies, transparency into its use, and audits. The latter is accomplished by allowing legislatures, not executive branch law enforcement agencies, [to] be the ones to decide whether driver’s license and state ID databases are opened up [to] police face recognition searches.

Legislative efforts like Community Control Over Police Surveillance (CCOPS) also seek additional levels of notice to and consent of the public over the types of surveillance technologies their police departments acquire, including the public in funding decisions before such systems are deployed.” 

About Face: The Changing Landscape of Facial Recognition

On May 10, 2019, we continue the discussion with Clare and the other panelists. The issues stemming from law enforcement and government use are closely related to the issues with commercial use. Our panelists bring diverse experiences and unparalleled expertise to these issues, and we look forward to hearing more about where legislative and regulatory efforts can succeed. We will be providing introductions to our other panelists and keynote speaker Cyrus Farivar as we move closer to the conference. Stay tuned and follow @NUSLCLIC for updates!

[1] Garvie, Clare; Bedoya, Alvaro; Frankle, Jonathan. The Perpetual Lineup: Unregulated Police Face Recognition in America (2016).

[2] Supra note 1.
[3] Alkhatib, Ali, et al. On Recent Research Auditing Commercial Facial Analysis Technology (2019).


[5] Supra note 1.

[6] Supra note 3.

[7] Microsoft Corporation. Six Principles for Developing and Deploying Facial Recognition Technology (2018).

[8] Office of Senator Roy Blunt. Blunt, Schatz Introduce Bipartisan Commercial Facial Recognition Bill (2019).

[9] Blunt, Roy; Schatz, Brian. Commercial Facial Recognition Privacy Act of 2019, S. 847 (2019).

[10] See Hartzog, Woodrow; Richards, Neil. Taking Trust Seriously in Privacy Law, 19 Stan. Tech. L. Rev. 431 (2016); Lipman, Rebecca. Online Privacy and the Invisible Market for Our Data, 120 Penn St. L. Rev. 777 (2016).

[11] Supra note 7.

[12] Supra note 1.