Privacy expert Clare Garvie explains why your face is already in a criminal lineup

Biometric surveillance is coming for you, even if you have 'nothing to hide'

Clare Garvie is a Senior Associate at Georgetown University’s Center on Privacy and Technology, where she has dedicated her work to studying law enforcement’s use of face recognition technology on the American public. Considered the foremost expert on the technology, she testified before Congress last year and writes extensively on its use in law enforcement investigations. She also brings to light the worrying ways the technology disrupts privacy, circumvents judicial norms and legal precedents, and chills free speech and civil liberties. All of this happens under a veil of secrecy, without public consent and largely outside the purview of American lawmakers. 

Garvie’s research spotlights the ways these technologies are disproportionately used on Black and Brown communities and the failures of face recognition algorithms when deployed on people of color and women. The technology’s efficacy, already cause for concern, is further undermined by law enforcement’s cavalier practices. In her seminal study “Garbage In, Garbage Out: Face Recognition on Flawed Data,” Garvie details one instance in which law enforcement submitted a photograph of Woody Harrelson to identify and arrest a man accused of stealing beer from a CVS (in fairness, the suspect bore a striking resemblance to the True Detective star). 

This year, protesters and activists across the nation have made clear their fear of face recognition technologies, which have profound effects on privacy and civil liberties. Face recognition, Orwellian in its Big Brother-like capabilities, is a proxy for a larger struggle over American rights and liberties, as Garvie tells Document: “Today it is face recognition; tomorrow it might be gait analysis or remote iris scans. But the rights at risk remain the same—privacy, free speech, due process, equal protection under the law.”

Alex: Hi, Clare. How are you? Curious what you were working on throughout the pandemic and how you’re experiencing this year?

Clare: I’m doing well, all things considered. I’m very fortunate that I can continue working on issues I care about, with much of the research I do already being online. The remote working environment has also made some things easier, such as testifying before a local assembly or participating in online meetings that would otherwise require travel, though I do miss face-to-face conversations!

Along with so much else, 2020 has brought with it heightened scrutiny of police face recognition use. Early on in the pandemic, some technology vendors floated the idea of face recognition contact tracing, an idea that fortunately has not panned out. The now-ubiquitous use of face masks raises important questions about how reliable the technology can be. And quite apart from the pandemic, widespread protests against police treatment of Black and Brown communities necessarily include a reckoning about what surveillance technology police have access to, both for everyday policing and for surveilling protests.

Alex: In addition to the pandemic, this year was marked by the George Floyd protests across the country. In the course of covering the protests in New York City, I realized a lot of protesters were really concerned about being photographed or showing their faces. Should they be concerned about law enforcement’s use of face recognition technology? Are there any measures we can take to safeguard our privacy and identity while participating in protests?

Clare: We should all be concerned. The Supreme Court has on numerous occasions recognized the crucial role that anonymity plays in safeguarding our First Amendment rights to free speech, assembly, and association—even when we assert those rights in public. Face recognition is at its core a tool for de-anonymization—when used on protests, it risks chilling our constitutionally protected right to engage in precisely that activity. Law enforcement agencies themselves have recognized this. A Privacy Impact Assessment about face recognition use, published in 2011 by a state-run police organization, cautioned that “As an instrument of surveillance, identification increases the government’s power to control individuals’ behavior. It can further inhibit one’s ability to be anonymous, which is an important right in a free society.”

Yet this has not prevented police from using face recognition on protests, as early as 2015, during demonstrations in Baltimore following the death of Freddie Gray, a 25-year-old Black man, in police custody. It is touted by companies and agencies alike as a useful resource for identifying “rioters” and other individuals within the protests sparked by the death of George Floyd and far too many others at the hands of police.

We should not have to affirmatively protect our First Amendment right to protest and the anonymity that makes it possible. It is the government’s responsibility not to breach this right. But here we are. Masks can help, as can blurring or hiding the faces of protesters you capture on camera or video at protests, since these images can become the face recognition “probe” images used by police to identify folks on the ground.
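For those who want to put that advice into practice, here is a minimal sketch of blurring faces in a photo before sharing it, using OpenCV’s bundled Haar-cascade face detector. The file names and detector settings are illustrative assumptions, not a specific tool Garvie endorses.

```python
# Minimal sketch: detect faces in a photo and blur them before sharing.
# Assumes opencv-python is installed; "protest_photo.jpg" is a stand-in name.
import cv2

image = cv2.imread("protest_photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Load the frontal-face Haar cascade that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Overwrite each detected face with a heavy Gaussian blur.
    # A solid rectangle (cv2.rectangle with thickness=-1) is safer still,
    # since strong blurs can sometimes be partially reversed.
    roi = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)

cv2.imwrite("protest_photo_blurred.jpg", image)
```

Detectors miss faces at odd angles, so it is worth checking the output by eye before posting.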

And finally, don’t underestimate the power of your voice in deciding whether your city or state regulates—or bans—face recognition use. It may sound naively optimistic, but elected officials around the country are looking for guidance on how to address face recognition risks, and their constituents have been instrumental in the bans, moratoria, and other restrictions that have passed in San Francisco, Portland, Cambridge, and elsewhere.

Alex: You write “garbage in, garbage out,” which is an adage often applied to algorithms and big data. There’s a lot of anxiety around surveillance and law enforcement’s use of face recognition technology, but the public is not very literate in the technology—I’m speaking mostly about the NYPD, which has revealed very little about its surveillance technology—how effective is this technology? What is this data being used for? Is the algorithm/technology more dangerous if it is flawed or if it is refined and perfected?

Clare: I’ve used the term “garbage in, garbage out” to refer to the way law enforcement uses face recognition in investigations. Face recognition systems are a combination of human and machine components, and the machines aren’t magic. If an analyst puts “garbage” data in—low-quality images, edited photos, or forensic sketches, for example—she can’t expect the system to return highly accurate results. And yet we see agencies like the NYPD rely almost entirely on the results of a face recognition search to make arrests.

For example, the NYPD has used what they call “celebrity comparisons.” In one case, NYPD analysts ran a photo of Woody Harrelson through the face recognition system in place of a person wanted for stealing beer from a CVS downtown. The system returned a list of possible matches, and officers ended up arresting the tenth person on the candidate list. This means that not only did the NYPD run a search on the wrong person’s biometrics, but the system thought nine photos looked more like Woody Harrelson than the person who was arrested.

Face recognition is a forensic science, but police departments seem to treat it far more like magic than like science. It doesn’t matter how accurate the algorithms get if law enforcement agencies continue submitting the wrong or edited biometrics to the algorithms.
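To make the “candidate list” mechanic concrete, here is a minimal sketch of how such a search typically works: the probe image is reduced to a numeric “embedding,” compared against every enrolled embedding, and the gallery is sorted by similarity. The random vectors below are made-up stand-ins for a real model’s output; the point is that the system always returns a ranked list, even when the probe is a garbage image.

```python
# Minimal sketch of a face recognition search: rank a gallery of enrolled
# embeddings by cosine similarity to a probe embedding. Random vectors
# stand in for the output of a real face recognition model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(seed=0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1_000)}
probe = rng.normal(size=128)  # e.g. derived from an edited or lookalike photo

# The search always produces a ranked candidate list -- someone is always
# "candidate #1", whether or not the probe depicts the right person.
ranked = sorted(gallery, key=lambda name: cosine_similarity(probe, gallery[name]),
                reverse=True)
for name in ranked[:10]:
    print(name, round(cosine_similarity(probe, gallery[name]), 3))
```

An analyst who scrolls down to the tenth candidate, as in the Woody Harrelson case, is overriding the system’s own ranking nine times over.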


Alex: Last year you testified before Congress, warning about the use of face recognition technology. I’m curious: in your experience, does law enforcement’s use of face recognition technology split traditionally along left and right political lines? Is it of bipartisan concern?

Clare: Legislators on both sides of the aisle are deeply concerned about face recognition technology. This makes sense—face recognition is a very powerful tool that, without constraints, threatens Americans’ fundamental rights and liberties and goes against norms on which we have historically been aligned.

For example, Americans across the political spectrum have on numerous occasions rejected the concept of a national biometric identity system. Yet today, over half of American adults are in a biometric face database accessible to law enforcement thanks to getting a driver’s license. This happened without our knowledge or consent.

Or consider the right to peaceful protest and assembly. Face recognition can be used as a biometric surveillance tool to catalogue who participates in what rally or demonstration—independent of the political views espoused at the gathering. Some states or municipalities enroll all firearm applicants into a face recognition database; others collect face recognition templates from undocumented immigrants who apply for a driver privilege card. And in at least thirty-one states, everyone who has applied for a license has a face recognition profile on file that can be accessed in some form by police.

Alex: I ask because in a recent interview, you said: “we’ve seen bipartisan support and interest in face recognition and regulating face recognition pretty much from the start.” If there’s bipartisan support in regulating this technology, why hasn’t there been any legislative action?

Clare: There is now widespread agreement that police use of face recognition should be regulated; how it should be regulated is still a contentious debate. Proposals range from light-touch transparency and accountability rules to complete bans.

Advocates in California, Oregon, and Massachusetts have been successful in pushing their legislatures to pass bans or moratoria on police use of face recognition, and a number of elected officials at the state level have introduced bills to do the same.

This is cause for optimism in my view. Four years ago, when the Center on Privacy & Technology published our first report about face recognition, not a single state or local jurisdiction had passed legislation comprehensively regulating this technology. Today, we have major cities like San Francisco completely banning its use by police.

Alex: Some proponents of the technology will argue, “if you’ve done nothing wrong, then you have nothing to hide.” What’s your response to that?

Clare: Fortunately, our constitutional rights and liberties are not contingent on us behaving perfectly all the time.

For example, we have the right to be presumed innocent until proven guilty. Allowing face recognition searches without any rules flips this on its head, requiring us to continuously prove our innocence—our status as having “nothing to hide”—in order to avoid an accusation of wrongdoing. Some jurisdictions do not even limit face recognition searches to the suspects of crimes. In Vermont, a search was run on someone for “asking suspicious questions at a gun store.” That’s not a crime, and there is no clear line separating a “suspicious” question about guns from a non-suspicious one. In New York, people have been searched for liking someone’s photo on Facebook. Again, this is not a crime; it amounts to guilt by association.

The “nothing to hide” argument could also be used to justify eliminating the search warrant requirement, or any number of restrictions we have in place against unregulated police power. If you have “nothing to hide,” would you object to a warrantless search of your home? If you have “nothing to hide,” would you be okay with being detained and questioned without cause? Are you okay with biometric identification of everyone at a peaceful political rally?

It also ignores the fact that social movements, and progress toward racial and political equality, rely upon civil disobedience and the breaking of unjust rules. To accept a policing structure based on “nothing to hide” assumes that all laws are just, and that our policing system punishes only truly wrong behavior. While this is certainly the goal, it is not our present-day reality. To get there, we all benefit from the work of those willing to break those laws, willing to participate in unpopular speech, willing to stand up to historically unjust power structures.

Put another way, having “nothing to hide” is a privilege that is not afforded to everyone. To rely on it is to ignore both the realities of those historically not similarly privileged and the fundamental good to society stemming from the actions of those brave enough to stand up to injustice.


Alex: You’ve spoken, in the past, about the ways this technology might be used disproportionately—and even unfairly—on already over-surveilled communities. In your research, how have you seen the hidden biases in this technology manifest?

Clare: The way police use face recognition can perpetuate bias in at least three ways. First, face recognition will likely be used on communities that historically have been, and continue to be, over-policed—Black and low-income communities. In San Diego, for example, police were found to use face recognition on communities of color at up to 2.5 times the rate their share of the total population would suggest. Second, young Black men likely make up a vastly disproportionate share of face recognition databases, as they are far more likely to be arrested than anyone else in this country. And lastly, studies continue to show that many face recognition algorithms perform differently depending on the skin tone of the person being searched, potentially introducing a higher risk of error for the very people on whom the technology will be disproportionately used.

Alex: You’re considered the foremost expert in face recognition technology. How did you arrive at studying this topic?

Clare: The Center on Privacy & Technology examines where new technologies raise privacy questions, particularly for historically marginalized populations. This reflects the recognition that privacy is not a luxury, and yet it is afforded to a greater or lesser degree depending on an individual’s race, socio-economic status, sexual orientation, immigration status, religion, or other affiliation. Face recognition raises precisely those questions.

When we first began examining the topic, there was little to no information about how state and local police departments used the technology, and whether there were any protections against harmful use or misuse. We saw an opportunity to fill this knowledge gap through in-depth research, relying on public records requests and legal and technical surveys. This was my first assignment out of law school.

Studying face recognition also builds on my background in human rights. Both the ability to prove our identity and to obscure it are integral components of promoting and protecting human rights, and are at times at odds with each other. Questions around the risks and opportunities of face recognition are not unique to the United States; countries around the world and international bodies are facing a similar reckoning.


Alex: Have you worked in cooperation with law enforcement throughout your research? I guess this is a methods question: how do you uncover information about law enforcement’s use of the technology, especially since most departments using the technology are wont to keep these programs out of public view?

Clare: Most of the information I’ve relied on in reports, testimony, trainings, and interviews comes directly from law enforcement agencies. To date I’ve received around thirty thousand pages of documents in response to public records requests to agencies across the country. I have also had the chance to interview law enforcement officials and sit in on trainings. While some agencies have made it incredibly difficult to obtain information about their face recognition programs, others have been very forthcoming and helpful, and have even sought advice on how to improve their programs.

Alex: Lawmaking is a slow process in the United States—how can policymakers create laws that curb the use of surveillance technologies when technological advancement outpaces legislation?

Clare: This is one of the reasons the Center advocates for a moratorium on the use of face recognition technology: to give legislatures, experts, and the public the space they need to fully understand and adequately control a new and as yet poorly understood technology. In addition, in looking to regulate face recognition technology, I recommend legislators focus on the rights at issue more so than on the technology itself. Today it is face recognition; tomorrow it might be gait analysis or remote iris scans. But the rights at risk remain the same—privacy, free speech, due process, equal protection under the law.

Alex: I recently spoke to a technologist who said “there are certainly legitimate uses for this technology.” In what ways might this technology be legitimate or even beneficial—and how do we balance public safety and personal privacy?

Clare: Face recognition is an identification tool, so to the extent that identification would be beneficial to someone, face recognition might be useful to that person. The real question, however, is whether those benefits to that individual or agency outweigh the risks posed by face recognition to everyone else, and whether there is another way to achieve the benefit without risking people’s privacy, civil rights, and civil liberties.

Whatever a given jurisdiction’s decision on police face recognition use is, it should be a decision made in consultation with the communities being policed, not the police department’s alone.

Alex: Are you worried about how quickly this technology is improving, or are improvements a good thing?

Clare: The better the technology gets, the more perfect a surveillance tool it becomes. So while more accurate face recognition may reduce the risk of errors, it introduces even greater surveillance risks.

We also see a trend of users deploying face recognition at the upper bounds of its technical capabilities. The more advanced the perceived ability of the technology, the more ambitious the deployments become, regardless of whether the technology actually performs well in those new settings.

Focusing on improvements to the technology can also obscure the accuracy issues raised by how the technology is used in practice. It doesn’t matter how good the technology gets if police departments are using “garbage” data in the form of low-quality images, edited photos, or “celebrity lookalikes.” The technology isn’t magic, and cannot overcome poor data quality or errors introduced by the humans operating the systems.

Alex: Any time I speak to an expert about this subject, I ask: for those who fear face recognition technology—activists, artists, free speech advocates—what are some ways they might turn their concern into action?

Clare: Political engagement is crucial, and it is within reach. The bans and moratoria that have passed were successful because of community-led engagement and pressure on local officials. Even in jurisdictions where bans or regulation have not yet been possible, communities and individuals have been able to make a concrete difference in how face recognition is used.

I am continuously awed and inspired by the power of the community in Detroit. Dozens of residents have called into police oversight and city council meetings to voice opposition to police use of face recognition over and over again. While the program unfortunately continues, the police have had to respond to this pressure by passing a much more restrictive policy than previously implemented, publishing monthly statistics on their use of the technology, and answering very hard questions posed by public officials and journalists alike.

You are also entitled to information about how your police department operates—what technology they have acquired and how they use it. Anyone can file a public records request for this information.

Alex: Finally: what are you working on these days?

Clare: My work on face recognition continues. I recently wrote a paper on face recognition and the right to anonymity under the human rights framework, which will be published next year. The paper examines face recognition use in the United Kingdom, the United States, and Kenya, and the ways courts may reconcile use of the technology with the right to anonymity in both international and domestic law.

I am also working on a report examining the use of face recognition in criminal prosecutions, framing the technology as a forensic without a science. Under this framework, I hope to highlight the risks of misidentification and wrongful conviction, as well as the right of the accused to information about how they were identified. Face recognition has been used in thousands of cases across the United States, and in most of these cases the accused never had the opportunity to challenge it. I argue that this amounts to a violation of those individuals’ right to a fair trial.
