One of the first great promises of the internet was disseminating information to a wider audience at incredible speed. This so-called democratization of knowledge was supposed to facilitate widespread changes in governance. However, technology also has the capability to restrict access to information. Increasingly, we see the historical divide between “haves” and “have-nots” playing out in the digital world as certain populations find their access to information restricted. The practice of “redlining” allowed the Home Owners’ Loan Corporation to create color-coded maps of the largest cities in the United States indicating who could get access to certain types of loans.[1] Fast forward to the present, and we are still reckoning with our inability to cure the ills of the past.[2] Dr. Chris Gilliard has been exploring the impact of a new ill, what he and co-author Hugh Culick refer to as “digital redlining.” Dr. Gilliard’s research has shed light on issues that are often ignored in the technology development industry.

We are honored to have Dr. Gilliard join us at Northeastern University School of Law on May 10, 2019 for About Face: The Changing Landscape of Facial Recognition. Dr. Gilliard will join Kade Crockford, Director of the Technology for Liberty Program at the ACLU of Massachusetts and an MIT Media Lab Director’s Fellow, and Brenda Leong, Senior Counsel & Director of Strategy at the Future of Privacy Forum, on the panel “Understanding the Social Impacts & Challenges of FRT.” This article will introduce Dr. Gilliard’s work, provide insight into his perspective on technology and social harms, and explore the challenges of overseeing new technology.

Digital Redlining

            The term digital redlining comes from Dr. Gilliard & Hugh Culick’s 2016 essay. Where redlining controlled access to loans, digital redlining controls access to content.[3] While you may be familiar with content filtering in China and Iran, you probably aren’t familiar with the extensive filtering that takes place on college campuses in the United States.[4]

“The more research-based the institution, the more the policies emphasized IT as an environment with a variety of stakeholders. On the other hand, institutions that emphasized job training and certification saw IT as a tool for transmitting information as determined by the school.”[5]

The problems here are familiar and somewhat obvious. The free flow of information on the internet is both a gift and a curse, but what is the justification for students at Northeastern University having access to different information on the web than students at Mass Bay Community College? This division is most alarming because it is purposeful. It is not a difference of professors or material discussed in class; “[i]t is a different thing, a set of education policies, investment decisions, and IT practices that actively create and maintain class boundaries…”[6] At their core, these decisions are about power. Who can see what? Who can use which tools? Whose privacy is more important? These questions are answered every day by implementing policies like internet filtering.

Application to FRT

            Internet filtering illustrates a common problem in technology: society does not have a good way to determine the best application of new innovations. Dr. Gilliard & David Golumbia have demonstrated several examples of this problem. But who do we trust? Technologists? Free markets? Politicians? None of the above? Part of the difficulty in answering these questions is the well-known problem of information asymmetry. FRT presents a new edition of this old problem.

            As a society, we have a collective stake in the way FRT is deployed. We know that there are real benefits to allowing law enforcement access to FRT.[7] However, society mostly rejected, albeit for a limited time, the idea of people walking around with glasses that could eventually deploy FRT.[8] Still, recent efforts show that FRT is being widely deployed and that the ability to use FRT systems is getting cheaper.[9] If “…it is unacceptable for tech alone, or tech alone using pure market forces, to decide what tools are or are not acceptable…” then what process should we have for FRT?

FRT & Friction-Free Racism

            In addition to the socioeconomic divides that are amplified by access to technology, FRT amplifies racial divisions. “[T]he practice of coding difference onto bodies is not new…” and is actually “made more real and ‘effective’ by whatever technologies are available…”[10]  Dr. Gilliard points us to Simone Browne’s concept of “digital epidermalization” or “the exercise of power cast by the disembodied gaze of certain surveillance technologies…that can be employed to do the work of alienating the subject by producing a ‘truth’ about the body and one’s identity (or identities) despite the subject’s claims.”[11] We are creeping dangerously close to a world where the way we identify ourselves matters little in comparison to what Alphabet or Amazon says about us.

            Are we pursuing the right outcomes with advances in technology? The examples of “Uber, Amazon Go, and touchscreen kiosks at fast-food joints” are evidence that Silicon Valley is not pursuing further development of human relationships but rather removing them entirely, creating what Dr. Gilliard elegantly calls “friction-free” interactions.[12] Other examples like “Ghettotracker” and “Road Buddy” indicate that technology is intentionally driving us apart.[13]

            The costs of such pervasive surveillance and the replacement of human relationships are not the same for everyone.[14] The better the technology, the more invisible these biased results become.[15] As social and cultural cues about identity are stripped away in favor of purely mathematical models, the gap in our understanding of each other widens.[16] People of color will find no safety in an environment where racist tendencies are protected by the “whims of black-boxed code.”[17]

Q&A with Dr. Gilliard

            Dr. Gilliard was kind enough to respond to a couple of questions about the Microsoft Principles and the idea of consent to surveillance in the classroom. Our questions and Dr. Gilliard’s responses appear below.

Anthony: What was your reaction to Microsoft’s 6 principles of FRT development? 

Dr. Gilliard: “I have many thoughts regarding those specific principles and about the notion of industry self-regulation. Ultimately, I would say that while it’s nice that companies might seek to develop some guidelines for themselves, companies are beholden to their shareholders, and there’s a long history of companies abandoning their ‘conscience’ for profits. What we really need are laws and regulations that put meaningful limits in place for how companies can deploy their technologies against the population, and rules that take into account the disparate impact that surveillance technologies have on marginalized communities.”

Anthony: Students are introduced to technology in the classroom at very young ages. Schools rely on parents to provide consent for use, if they get consent at all. Given your research on edtech, what information do you think is necessary for parents to have in order to provide consent to FRT use on school grounds or in classrooms?

Dr. Gilliard: “I think we need to think very hard about how we are using the word ‘consent’ when it comes to things like FRT, because as we have seen with many current technologies, it’s often difficult (if not impossible) for there to be real informed consent: not only do people not know the full extent of how their data (in this case their face) will be used, often companies don’t know (yet), because they are constantly using extracted data to improve the capabilities of their technology as well as developing new ways to monetize that data. We might consider thinking about how we avoid ‘idealising control,’ as Woodrow Hartzog argues, and ‘…because it is virtually impossible for people to be adequately informed of data risks and exert control at scale, our rules should make sure companies cannot unreasonably favour their own interests at our expense.’”[18]

About Face: The Changing Landscape of Facial Recognition

Dr. Chris Gilliard joins us at Northeastern University School of Law on May 10, 2019 for About Face: The Changing Landscape of Facial Recognition. Dr. Gilliard will join Kade Crockford, Director of the Technology for Liberty Program at the ACLU of Massachusetts and an MIT Media Lab Director’s Fellow, and Brenda Leong, Senior Counsel & Director of Strategy at the Future of Privacy Forum, on the panel “Understanding the Social Impacts & Challenges of FRT.” Admission is free, and we look forward to continuing the discussion.

[1] Madrigal, Alexis C. The Racist Housing Policy That Made Your Neighborhood. The Atlantic. May 22, 2014. Available at:

[2] See Coates, Ta-Nehisi. The Case for Reparations. The Atlantic. June 2014 Issue. Available at:

[3] Gilliard, Chris & Culick, Hugh. Digital Redlining, Access, and Privacy. May 24, 2016. Available at:




[7] Garvie, Clare, Bedoya, Alvaro & Frankle, Jonathan. The Perpetual Lineup: Unregulated Police Face Recognition in America. (2016)

[8] Gilliard, Chris & Golumbia, David. There Are No Guardrails on Our Privacy. March 9, 2018. Available at:

[9] Chinoy, Sahil. We Built a (Legal) Facial Recognition Machine for $60. New York Times Opinion. April 16, 2019. Available at:

[10] Gilliard, Chris. Friction-Free Racism. Real Life Magazine. October 15, 2018. Available at: (hereinafter Friction-Free Racism).

[11] Id., quoting Simone Browne, Dark Matters.

[12] Friction-Free Racism.






[18] Dr. Gilliard is quoting from Woodrow Hartzog’s The Case Against Idealising Control. European Data Protection Law Review. Vol. 4, Iss. 4, 423 (2018). Available at: