The digital dragnet
Facial recognition technology, if left unchecked, could fundamentally change what it means to be an American.
In late June, Jarrod Ramos shot his way through the newsroom of the Capital Gazette in Annapolis, Md., killing five and injuring two more. According to local reports, Ramos was uncooperative after Anne Arundel County police took him into custody. To identify him, they used a controversial method – facial recognition technology.
The system matched an image of his face against a database of more than 10 million driver’s license and mugshot photos to confirm the shooter’s identity, according to documents obtained by Georgetown University. While systems like these have been in place for years, legal experts and privacy advocates question their use by law enforcement, arguing that unless policy defines how these systems should be used, we might soon find ourselves in a surveillance state.
Roger Rodriguez, a former New York Police Department detective who led the department’s facial recognition program and is now director of client relations at Vigilant Solutions, a provider of license plate recognition, facial recognition and data analytics systems, believes much of the trepidation surrounding these systems is unfounded. To understand the concerns about this technology, however, it’s important to first understand how these systems work.
There are two main ways facial recognition technology is used in law enforcement, Rodriguez explains. First, as in the Capital Gazette case, an image of a suspect’s face is acquired and then, using a facial recognition system, compared to a large database of faces. Depending on the state, that database can come from the department of motor vehicles, law enforcement’s collection of mugshots, or both. The system then produces a list of likely candidates, which detectives review.
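To make that first workflow concrete, here is a minimal, hypothetical sketch of a one-to-many (“1:N”) search: the probe image is reduced to a numeric feature vector and ranked against enrolled photos by similarity. The function and record names are illustrative stand-ins, not any vendor’s actual product or API.

```python
# A hypothetical sketch of one-to-many ("1:N") face identification.
import numpy as np

def extract_embedding(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: a trained model maps a face crop to a vector."""
    raise NotImplementedError("stand-in for a real face recognition model")

def search_gallery(probe: np.ndarray, gallery: np.ndarray,
                   labels: list[str], top_k: int = 10) -> list[tuple[str, float]]:
    """Rank enrolled photos by cosine similarity to the probe embedding."""
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe                      # one similarity per photo
    order = np.argsort(scores)[::-1][:top_k]
    # The output is a ranked candidate list, not an identification;
    # as described above, detectives review the candidates by hand.
    return [(labels[i], float(scores[i])) for i in order]

# Demo with synthetic vectors standing in for enrolled photo embeddings.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100_000, 128))         # e.g., mugshot/DMV records
labels = [f"record-{i}" for i in range(100_000)]
probe = gallery[42] + rng.normal(scale=0.1, size=128)
print(search_gallery(probe, gallery, labels, top_k=5))
```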
The second application is real-time monitoring. Rodriguez explains that at a large event – a Super Bowl, say, or a large-scale protest or demonstration – where cameras with facial recognition capacity are in use, law enforcement can monitor the crowd for specific individuals. He says it’s important to note that in this application, faces are extracted but not stored. “This is something that needs to be made clear – it’s not used for pervasive surveillance,” Rodriguez says. “Law enforcement strategically deploys [facial recognition] for the purposes of security as an alerting mechanism.”
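How might such an alerting mechanism look in code? Below is a minimal sketch under the same assumptions as above – detect_faces() and embed() are hypothetical stand-ins for a face detector and a recognition model, and the threshold is illustrative. Matching the extract-but-don’t-store behavior Rodriguez describes, embeddings here exist only in memory.

```python
# A hypothetical sketch of watchlist-style alerting on a live video feed.
import numpy as np

ALERT_THRESHOLD = 0.85  # illustrative value; operational thresholds vary

def detect_faces(frame: np.ndarray) -> list[np.ndarray]:
    """Hypothetical stand-in: return cropped faces found in a video frame."""
    raise NotImplementedError

def embed(face: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: map a face crop to a unit-length vector."""
    raise NotImplementedError

def scan_frame(frame: np.ndarray, watchlist: np.ndarray, names: list[str]):
    """Compare each detected face against the watchlist; alert on matches.
    Embeddings live only in memory - nothing is written to disk, mirroring
    the extract-but-don't-store behavior described above."""
    for face in detect_faces(frame):
        scores = watchlist @ embed(face)          # cosine similarity per entry
        best = int(np.argmax(scores))
        if scores[best] >= ALERT_THRESHOLD:
            yield names[best], float(scores[best])  # route to a human reviewer
```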
It’s also important to note that these systems have been in place for years with no publicly reported cases of abuse. Rodriguez says the 9/11 attacks catalyzed these technologies, and shortly thereafter they began to expand in the law enforcement world. The NYPD was an early adopter, and Rodriguez was in charge of its first dedicated facial recognition unit. He says over the years these systems have improved in their ability to accurately match images, and costs have come down dramatically, making them a viable option even for smaller agencies.
Rodriguez points out that even though the systems are becoming more accurate, fears of a “Minority Report”-like future where humans aren’t involved in the investigative process aren’t warranted. “In the public safety space [this technology] is just a pointer system,” he says. “It’s a start in the investigative process to quickly identify someone, but the need for human involvement… is always necessary to vet the software’s results.” The systems can’t definitively conclude an identity, he explains, nor should they be relied on to do so. There should be policy in place and a well-defined workflow to ensure matches are as accurate as possible. There are numerous variables to consider, and it will always take a human touch to do so. “Law enforcement does not use this to definitively conclude an identity,” Rodriguez says. “It’s no different than flipping through mugshots – we’ve simply automated a process that has existed in the realm of law enforcement for many, many years.”
Rodriguez points to the Capital Gazette case as a textbook example of how the technology should be deployed. “It was a chaotic scene; law enforcement did a great job in apprehending him, but there were issues identifying him. The good thing about facial recognition is that it doesn’t require any contact; they were able to take a photo and identify Jarrod Ramos using this technology in a timely manner. They were transparent about it, and I applaud them for their efforts.”
Privacy advocates and civil liberties groups, however, do not agree with Rodriguez. To demonstrate the potential weaknesses of these systems, the American Civil Liberties Union recently ran a test using Rekognition, a facial recognition system developed by Amazon and marketed to law enforcement; the test falsely matched 28 members of Congress to criminal mugshots.
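For context on the mechanics of such a test, here is a minimal sketch of a single face comparison using Amazon’s boto3 SDK for Rekognition. The file names and region are placeholders, and this is not a reconstruction of the ACLU’s methodology (its test searched a database of thousands of arrest photos); the 80 percent similarity threshold shown is the service default, which the ACLU reportedly used.

```python
# A minimal sketch of one pairwise face comparison with Amazon Rekognition.
# File names and region are placeholders, not the ACLU's actual inputs.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("probe_photo.jpg", "rb") as src, open("mugshot.jpg", "rb") as tgt:
    response = client.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
        SimilarityThreshold=80,  # service default; raising it cuts false matches
    )

for match in response["FaceMatches"]:
    print(f"similarity: {match['Similarity']:.1f}%")
```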
An Amazon spokesperson declined to participate in this story but provided a link to the company’s public statement made by Matt Wood, general manager of artificial intelligence at Amazon Web Services. Wood said:
“There has been no reported law enforcement abuse of Amazon Rekognition. We also have an Acceptable Use Policy (“AUP”) that prohibits the use of our services for ‘[a]ny activities that are illegal, that violate the rights of others, or that may be harmful to others.’ This includes violating anybody’s Constitutional rights relating to the 4th, 5th, and 14th Amendments – essentially any kind of illegal discrimination or violation of due process or privacy right. Customers in violation of our AUP are prevented from using our services.”
Wood added, “There have always been and will always be risks with new technology capabilities. Each organization choosing to employ technology must act responsibly or risk legal penalties and public condemnation. AWS takes its responsibilities seriously. But we believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future.”
However, for the ACLU, the potential for misusing these technologies is too great. In the statement released after the Rekognition test, Jacob Snow, a technology and civil liberties attorney at the ACLU Foundation of Northern California, said, “Our test reinforces that face surveillance is not safe for government use. Face surveillance will be used to power discriminatory surveillance and policing that targets communities of color, immigrants and activists. Once unleashed, that damage can’t be undone.” The statement went on to call on Congress to “press for a federal moratorium on the use of face surveillance until its harms, particularly to vulnerable communities, are fully considered.”
When reached for comment, Matt Cagle, a technology and civil liberties attorney at the ACLU of Northern California, echoed the sentiments of the official statement.
“Right now, there’s a lot of concern that this technology is biased and inaccurate and not proven for public safety uses,” he says. “But even if it were 100 percent accurate, facial recognition technologies raise several privacy and civil liberty concerns. These systems allow the government to track where we go, what we do, and potentially even how we feel. This is the type of technology that once built can easily be turned against communities in ways that are really bad for civil rights and community trust in law enforcement.”
There are two main concerns, according to Cagle: the potential for false positives – as in the congressional test – and the mass surveillance of communities. “The results of this test raise the possibility that law enforcement might deploy a facial recognition system that produces inaccurate results, and that may lead to mistakes in the field that harm people,” he says. “That’s totally unacceptable from a public safety perspective.” He adds, “These are dangerous systems that allow governments to easily track people without their knowledge or consent… and that’s exactly why the ACLU is calling on Congress to put the brakes on it.”
Jennifer Lynch, senior staff attorney at the Electronic Frontier Foundation, agrees that we’re at a turning point with this technology, and we collectively need to decide how we’re going to move forward. “We’re at a point where [the technology] is significantly accurate and it’s becoming less and less expensive for cities to purchase and law enforcement agencies to use,” she says. “I think we’ll start to see more uses of facial recognition in the future if we don’t place restrictions on cities and law enforcement’s ability to use this kind of technology.”
One of her concerns is that the technology has far outpaced the regulatory mechanisms of government, and these systems are being used virtually unchecked. She agrees with the ACLU’s position that a moratorium is appropriate until the technology is better understood and society has come to terms with its impact. Lynch reminds local leaders that it’s their responsibility to thoughtfully consider the consequences of deploying these systems.
“Facial recognition is unlike other technologies in its impact on privacy and civil liberties. If we start to see facial recognition deployed on public surveillance cameras then cities will be able to track citizens wherever they go through public spaces,” Lynch says. “Cities need to be thinking big-picture about this – where could this go in the future, and what kind of restrictions do we want to create now so that five years down the road we aren’t living in a 1984 society.”
Facial recognition’s law enforcement application is like Pandora’s box, Cagle says – it can’t really be undone, but its deployment can be put in check. “We think that any law enforcement official that is considering using facial recognition technology should take a step back and not only scrutinize the vendor but have a transparent conversation with the community asking if [the use of the technology] is even necessary… It’s important for government agencies to be thinking about what it means to be experimenting with these infrastructures which once built can be easily abused.”
Purchasing structures and community involvement are also important considerations in the decision to deploy technologies like this, Lynch says. “We’ve advocated for a model where city governments have control over law enforcement purchases of new technologies, and through that kind of a process – where it’s all out in the open – you can get the electorate involved in determining whether facial recognition is right for the community,” she says. “I don’t think law enforcement should be able to make that decision on their own. This is too important.”
Practically speaking, Lynch feels facial recognition systems should be much harder to access. “We have very clear restrictions on law enforcement’s ability to use other types of technologies, for instance, to wiretap our phones or to access our emails or track our location,” she says. “Mainly this is through the use of a warrant, which requires an officer to go to a judge and justify why they need to use this kind of extremely invasive technology. In doing so, the officer has to prove the use of the technology is tied to a specific criminal activity and it’s likely to provide evidence of a crime.” What we need to avoid, she says, is the creation of a digital dragnet where everyone is being surveilled at all times.
What’s at stake, Lynch says, are the fundamental principles guiding our democracy. America was built on the ability of people to walk about in relative obscurity, and facial recognition technologies jeopardize that ability. “If you know that the government is watching you at all times, you’re much less likely to speak out on things that concern you, you’re much more likely to go with what the general consensus is. People are less likely to speak to and interact with people they don’t know. People are less likely to be political. This isn’t just theory – there have been many studies conducted on the impact of surveillance on communities.” If we accept this type of surveillance, Lynch says, it would fundamentally change what it means to be an American. We have to ask ourselves: Is this what we really want?
Cagle offers practical advice for local officials who are considering deploying these systems – simply put, ask the community first. “We think it’s good government for the public to be involved at the earliest point possible before surveillance technology is acquired or used,” he says. “We find in many cases current law enforcement tools are adequately serving the public’s needs and dangerous state surveillance technologies aren’t going to be necessary.” In Cagle’s mind, the potential for abusing these systems far outweighs the benefit of using them.
Rodriguez, however, does not think these 1984-style fears are based in reality. In fact, he recently addressed them in an article, in which he stated:
“As someone who has been immersed in this technology for years, I can attest that many of the assertions are unjustified, misplaced, and misleading to the general population. As I receive my daily news alerts on facial recognition and read the headlines about how law enforcement uses this technology, I see language that reflects a misunderstanding about how the technology is used in practice and ignorance of facial recognition’s core value with regard to public safety.” He adds, “Public safety agencies are not in the business of using facial recognition technology to violate a person’s rights. There are no cases which support that theory… I will firmly state this is a proven technology that provides great and growing value to public safety.”