Facial recognition technology poses a “fatal consent problem,” according to CLIC faculty and Northeastern Law Professor Woodrow Hartzog. Last month, Hartzog published an article in the Loyola Law Review, co-authored with Rochester Institute of Technology Professor Evan Selinger, calling for an outright ban on facial recognition technology. The article expands on previous work by Nancy S. Kim, whose book Consentability: Consent and Its Limits creates a new framework for conceptualizing consent. That framework requires that two factors be met when determining whether consent is possible: (1) the individual must have the requisite knowledge to be able to intentionally manifest consent; and (2) the social benefits of the activity the individual is consenting to must outweigh its social harms. When it comes to facial recognition technology, Hartzog and Selinger argue, there can never be valid consent, and the social harms far outweigh the social benefits of this rapidly emerging technology.

Consent is often used in the law to justify myriad activities. Informed consent is a touchstone across practice areas, including medical malpractice, contract law, and privacy law. By relying on consent, the article argues, the law allows for invasions of privacy and autonomy whose harms and repercussions are too large and incomprehensible to be validly consented to. Because of this, the first prong of Kim’s consentability framework, which requires that valid consent be made with the requisite knowledge, can never be met with facial recognition technology. Individuals simply cannot comprehend the vast, sweeping societal repercussions of consenting to this surveillance. While one person may say they have “nothing to hide,” or the law may say that individuals have no right to privacy in public spaces, these arguments place individual autonomy over society’s collective autonomy. As individuals consent to more and more invasions of privacy (whether validly or not), the invasion, through the use of facial recognition technology, for example, becomes the norm: the only way of doing something that previously did not require its use. Take the iPhone. Only recently did facial recognition begin to take the place of a passcode for unlocking the device. Yet now, declining to use the facial recognition feature on a newer iPhone is seen as a rather oddball, even conspiracy-minded, way of using your phone. It is that facial recognition technology creep that leads to the sacrifice of collective autonomy (the normalization of facial recognition technology in your pocket) in the name of individual autonomy (“I don’t have anything to hide”).

Instead, Hartzog and Selinger argue that, when it comes to facial recognition technology, collective autonomy ought to be reframed as the right to obscurity. As opposed to more blanket terms like “anonymity,” the idea of “obscurity” is more nuanced and essential to individual autonomy. The right to remain obscure enables us to “share different aspects of our identity in different contexts.” Sharing information in one context does not imply consent to that information being shared generally. Importantly, obscurity allows us to participate in democracy without worrying about repercussions. By appreciating the importance of obscurity, facial recognition technology can be reframed as a threat to our right to remain obscure, and regulatory frameworks can begin to emerge that provide meaningful obscurity protection.

While facial recognition used to unlock a phone may be a relatively benign example, Hartzog and Selinger argue that it is this slow acceptance of the technology into our lives that will corrode the rights of many in the name of autonomy for a few. Those whose rights will be eroded, such as the right to due process, to be presumed innocent until proven guilty, or to be free from government surveillance, will disproportionately be people of color and other minority and vulnerable populations. The privileged few who are able to pay for and consent to surveillance, in the form of iPhones, IoT thermostats, Ring doorbells, and Apple Watches, will ultimately not have to pay the price of being watched. It is that blind privilege which, Hartzog and Selinger argue, makes consent to facial recognition impossible. The individuals clicking “I Agree” on facial recognition user agreements do not possess the requisite knowledge to validly consent, because they do not realize the collective weight of what they are consenting to.

In the end, the move toward increased surveillance will have a chilling effect on democratic society, and simply requiring that individuals consent to the use of facial recognition technology is insufficient to protect against it. Implementing procedural protections, like requiring that warrants be issued before the government can access privately collected data, will simply provide a sanctioned path for those abuses to occur. In a society where surveillance is a privilege for those who can buy it and a check on the behavior of those who are surveilled, Hartzog and Selinger argue that facial recognition cannot be consented to and, for the good of our collective autonomy, it must be banned.