Facing the controversy of facial recognition technology
In January 2020, Robert Williams was returning home from work when he was approached by Detroit Police Department investigators with a warrant for his arrest. He was taken into custody for allegedly stealing five watches worth approximately $3,800 from a Shinola luxury retail store in October 2018.
The Detroit police ran the store’s security footage through powerful facial recognition software to identify Williams, matching the man on the tape to his driver’s license photo. Williams’ photo was then placed into a lineup and shown to a store employee, who identified him as the thief.
The only problem was that Williams wasn’t the thief. During his interrogation, it became clear he was innocent. The investigating officer told him, “the computer must have gotten it wrong,” but despite the admission, Williams was held in custody for hours before being released.
This is according to a complaint filed on behalf of Williams by the American Civil Liberties Union of Michigan last month. “At every step, DPD’s conduct has been improper,” the complaint alleges.
“It unthinkingly relied on flawed and racist facial recognition technology without taking reasonable measures to verify the information being provided” as part of a “shoddy and incomplete investigation.”
In the fallout following the case, Detroit’s police chief acknowledged the technology is inaccurate, admitting it’s wrong nearly every time it’s deployed, Ars Technica reports. This, he says, is why it’s not the only tool investigators use to identify subjects; even so, its inaccuracy has led to the jailing of at least one innocent person – namely, Robert Williams.
“If we would use the software only [to identify subjects], we would not solve the case 95 to 97 percent of the time,” Chief James Craig said in a public meeting, the publication reports. “That’s if we relied totally on the software, which would be against our current policy… If we were just to use the technology by itself, to identify someone, I would say 96 percent of the time it would misidentify.”
This inaccuracy is a huge problem for Rashawn Ray, a David Rubenstein fellow at the Brookings Institution and the director of the lab for applied social science research at the University of Maryland, where he and his colleagues study the intersection of technology and law enforcement, among other things. He says that while these technologies haven’t been deployed across the board by law enforcement agencies, the ones that do rely on them rely on them heavily.
“It’s not only about how many departments use [facial recognition],” Ray says. “It’s about how often they use it as well.”
This over-reliance on what is admittedly a powerful tool is problematic, Ray says, not only because the technology is inaccurate, but because it has great potential for abuse, particularly in terms of reinforcing biases.
“Some of the main problems [with facial recognition] are based in the fact that the algorithms that power the software are written by people,” Ray says. “People have biases, and one thing that we know about science and technology, as well as crime and criminology, [is] that there are embedded biases – particularly related to race and social class – that can lead to disadvantaging minority and low-income populations.”
Karen Gullo, a representative for the Electronic Frontier Foundation, a nonprofit organization that promotes civil liberties in digital spaces, agrees that these biases tarnish the credibility of these technologies and that their use can have negative impacts on our personal and civil lives.
“Face surveillance is becoming an all-encompassing tool for government to track where we are, what we are doing, and who we are with, regardless of whether we’re suspected of a crime or not,” Gullo wrote in an email to American City and County. “Face surveillance can chill and deter people from protesting in public places. Many face recognition systems have unacceptably high error rates and misidentify people of color and women. This means innocent people will be subjected to erroneous police scrutiny.”
Pointing to the case of Williams, Gullo continued, “The threats from face recognition disproportionately impact people of color, both because face recognition misidentifies African Americans and ethnic minorities at higher rates than whites and because mug shot databases include a disproportionate number of African Americans, Latinos, and immigrants.”
The key problem, Ray says, is that the algorithms are only as good as the data that informs them. Simply put, the data sets these programs rely on aren’t robust enough to be trusted in a high-stakes environment like law enforcement, where a mistake might put an innocent person behind bars.
This notion is what prompted Ben Ewen-Campen, a young city councilor in Somerville, Mass., to spearhead a legislative effort last year to prohibit the use of facial recognition technologies in his community, making Somerville one of the first communities in the country to do so.
Ewen-Campen says a large part of his constituency works in the tech industry, and as such, issues with surveillance, digital privacy and the First Amendment are high on their priority list.
“I think these are people who understand the industry really well, who understand how unregulated [surveillance] is and how powerful it is,” Ewen-Campen says. “In collaboration with the ACLU of Massachusetts, the idea of bringing transparency to surveillance technology in general and specifically focusing on facial recognition came to a head very quickly. There was widespread support. It was like pushing on an open door.”
The main problem for Ewen-Campen, and for Somerville residents as a whole, is that these technologies are not adopted through an “opt-in” agreement with the public. More often than not, law enforcement agencies deploy them without the public knowing how they will be used, where they will be deployed or what regulations, if any, govern the appropriateness of their use. His legislation prevented this from occurring in his community.
It’s here that Ewen-Campen would like to be clear, though: this isn’t to suggest he believes law enforcement shouldn’t be permitted to leverage new technologies. He says the facial recognition prohibition was passed in the context of a larger ordinance covering surveillance technology in general – one that doesn’t ban such tools but requires an affirmative vote from the city council before any new surveillance technology is adopted.
“I think that the proponents of a technology like this need to come to the public with a set of guidelines and standards for transparency that can convince the public that this is going to be used responsibly in a non-biased way,” Ewen-Campen says.
Ray agrees that buy-in from residents is critical if law enforcement is going to use facial recognition technologies in their communities. In fact, in a recent study, he recommends several questions policymakers should ask about these technologies:
• Has the community been informed and had the opportunity to ask questions and give suggestions?
• What safeguards are put in place to ensure that the technology is being used properly and is working as intended?
• How will you guard against biases in your technology?
• How will your technology move beyond consent to include privacy protections?
While facial recognition is a powerful tool that can improve law enforcement efficiency, Ray says that doesn’t necessarily translate to increasing fair and just outcomes. If these technologies are to be deployed democratically, the veil must be lifted, and the public must be brought into the discussion.