Facing up to facial recognition
April 19, 2023
While many communities have developed ordinances and regulations aimed at banning or limiting facial recognition, it is time to re-evaluate. Consumers have generally accepted facial recognition because it makes things easier and still provides a healthy degree of protection compared with forgotten or misplaced passwords. Smartphone users rely on facial recognition every time they glance at their screen to unlock the home screen. Microsoft's Windows Hello uses facial recognition for those choosing to unlock their devices instead of entering a password or code. Airlines are starting to use facial recognition to board flights faster, bypassing traditional or digital boarding passes. One thing is certain: the technology in all its forms has greatly improved over previous versions.
No doubt facial recognition still raises legitimate concerns regarding privacy and misidentification. Facebook was one of the first companies to integrate the technology, and it soon became a novelty to find oneself automatically tagged alongside people one knew; for many it was like magic to see how Facebook identified people solely through photos. But as the platform grew, so did the concerns, and Facebook faced several lawsuits related to its use of facial recognition technology. One of the largest was filed in 2015 by Illinois residents who claimed that Facebook had violated the state’s Biometric Information Privacy Act by collecting and storing their facial recognition data without their consent. The case was certified as a class-action lawsuit in 2018, potentially exposing Facebook to billions of dollars in damages. In January 2020, Facebook agreed to a $550 million settlement to resolve the lawsuit, one of the largest settlements ever in a privacy violation case, and was required to change its facial recognition practices as part of the agreement. Finally, Facebook announced it would no longer use facial recognition software to identify faces in photographs and videos.
Meanwhile, the Department of Homeland Security currently uses facial recognition to scan images of travelers leaving and entering the country, comparing each image to photos already on file, such as passport photos. Retailers have used facial recognition to combat shoplifting by scanning shoppers’ faces and comparing them to a database of known shoplifters. Schools are increasingly using facial recognition for safety, alerting administrators, teachers and security staff when an unauthorized individual enters school grounds.
It’s worth noting that while some states have passed laws to restrict the use of facial recognition technology by law enforcement, no state has banned the technology outright; at the local level, about a dozen cities have either banned or restricted its use.
There are no fewer than six major policy areas that need to be addressed:
Privacy: Facial recognition technology can collect, store and use sensitive personal information, such as biometric data, without individuals’ consent. This can lead to concerns about privacy, especially when the technology is used by governments or law enforcement agencies.
Accuracy and bias: Facial recognition technology is not always accurate, and it has been shown to have higher error rates for people of color, women and children. This raises concerns about bias and potential discrimination in the use of the technology.
Transparency: The algorithms and data used in facial recognition technology are often proprietary and not publicly available, which can make it difficult to understand how the technology works and evaluate its accuracy and bias.
Security: The use of facial recognition technology raises security concerns, such as the potential for data breaches and hacking of sensitive personal information.
Legal and ethical implications: The use of facial recognition technology can raise legal and ethical questions, such as whether it violates civil liberties and human rights, and whether it can be used for mass surveillance without due process.
Regulation and oversight: The use of facial recognition technology is largely unregulated in many countries, which can lead to inconsistent use and potential abuses. There is a need for clear regulations and oversight to ensure that the technology is used responsibly and ethically.
In today’s politically charged environment, citizens have voiced their concern regarding crime, and it is perhaps time to take a fresh look at facial recognition as a means of fighting crime. One would think that most of the policy areas mentioned above can be addressed through sound policies, transparency and oversight. But what about accuracy and bias?
A few years ago, it was found that the algorithms long used in digital cameras were tuned for fair-skinned individuals, introducing biases related to darker skin, race and even gender. The bias was unintentionally engineered into the original technologies, which had been designed for much simpler applications, but for someone with dark skin the accuracy rate dropped dramatically. Other accuracy issues revolve around lighting and camera placement. According to tests by the National Institute of Standards and Technology (NIST), as of April 2020 the best face identification algorithm had an error rate of just 0.08 percent, compared to 4.1 percent for the leading algorithm in 2014. In ideal conditions, facial recognition systems can have near-perfect accuracy. Verification algorithms used to match subjects to clear reference images (like a passport photo or mugshot) can achieve accuracy scores as high as 99.97 percent on standard assessments such as NIST’s Face Recognition Vendor Test (FRVT), which is comparable to the best results of iris scanners. This kind of face verification has become so reliable that even banks feel comfortable relying on it to log users into their accounts. But the key operative phrase is “ideal conditions,” where light and camera position are critical. The NIST testing also revealed a wide range of accuracy among vendors: some had remarkable results, while the majority had improved systems whose accuracy was still disappointing.
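To make the distinction behind those numbers concrete, the sketch below illustrates one-to-one face verification: a probe image is compared against a single clear reference photo and accepted or rejected against a similarity threshold. It is a minimal illustration, not any vendor’s or NIST’s algorithm; the embedding model is assumed, and synthetic vectors stand in for what a real face-embedding network would produce.

```python
# Minimal sketch of 1:1 face verification (assumptions noted below).
# In a real system, embeddings would come from a face-embedding neural network;
# here, synthetic vectors stand in for those model outputs.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, reference: np.ndarray, threshold: float = 0.6) -> bool:
    """1:1 verification: accept the probe only if it matches the reference
    above the threshold. Raising the threshold reduces false accepts but
    increases false rejects; poor lighting or camera angle can push genuine
    pairs below the threshold, which is why 'ideal conditions' matter."""
    return cosine_similarity(probe, reference) >= threshold

# Stand-in embeddings (hypothetical; a real system computes these from images).
rng = np.random.default_rng(0)
reference = rng.normal(size=512)                              # e.g., passport photo
genuine_probe = reference + rng.normal(scale=0.1, size=512)   # same person, good conditions
impostor_probe = rng.normal(size=512)                         # different person

print(verify(genuine_probe, reference))    # expected: True
print(verify(impostor_probe, reference))   # expected: False
```

Identification, by contrast, searches one face against a gallery of many candidates, which is where image quality, lighting and camera placement most strongly affect the error rates NIST reports.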
Comparing NIST test results across vendors is a first step toward ensuring that accuracy and bias are adequately addressed. Perhaps now is the time to rethink the use of facial recognition in a more responsible way, with enhanced oversight that protects our civil liberties.
During the past five years, we have come to accept license plate readers that operate in a similar manner, and now we see gun recognition technology coming to schools. So when you combine convenience, safety, security and crime fighting, technology that helps us better track bad actors and save lives is worth a second look.
Dr. Alan R. Shark is associate professor for the Schar School of Policy and Government, George Mason University, and executive director of the CompTIA Public Technology Institute (PTI) in Washington D.C. He is a fellow of the National Academy for Public Administration and co-chair of the Standing Panel on Technology Leadership. A noted author, his latest textbook, “Technology and Public Management” was recently published. He is also the host of the popular bi-monthly podcast, Sharkbytes.net.