What police departments should consider before implementing facial recognition software
Police agencies need to know how to vet the technology and its vendors
In Orwell’s dystopian novel, “1984,” every citizen knows they’re being watched. There are telescreens everywhere recording them. According to “the Party,” this surveillance is for the betterment of the state as a whole, and citizens who resist or disobey are labeled traitors and disappear. The leader of the Party goes by “Big Brother.”
The classic novel was assigned college reading for me. “Big Brother” became part of my pop culture lexicon – a synonym for government abuse of power against civil liberties, often through mass surveillance. Today, watching, cataloging and identifying citizens is no longer science fiction.
How facial recognition software works
Facial recognition software aims to identify or authenticate individuals by comparing their face against a database of known faces and looking for a match.
First, a computer must find the face in the image. Then it creates a numeric representation of the face based on the facial features. Finally, this numeric “map” of the face in the image is compared to database images of identified faces, for example, a driver’s license database. There are almost as many computer algorithms for this process as there are companies.
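The detect-encode-compare pipeline described above can be sketched with a toy example. The short vectors standing in for facial “maps,” the identity labels and the distance threshold below are all illustrative assumptions, not any vendor’s actual algorithm; real systems derive embeddings with hundreds of dimensions from detected faces.

```python
import numpy as np

def match_face(probe_embedding, database, threshold=0.6):
    """Compare a probe face's numeric representation against a database
    of known faces and return candidates whose cosine distance falls
    under the threshold, best match first. Names are illustrative."""
    scored = []
    for identity, known in database.items():
        # Cosine similarity: 1.0 means identical direction, -1.0 opposite.
        cos_sim = np.dot(probe_embedding, known) / (
            np.linalg.norm(probe_embedding) * np.linalg.norm(known))
        distance = 1.0 - cos_sim
        if distance < threshold:
            scored.append((identity, float(distance)))
    # A candidate list, not a single definitive answer -- a human
    # examiner still has to adjudicate the results.
    return sorted(scored, key=lambda pair: pair[1])

# Toy 4-dimensional "embeddings" keyed by hypothetical license-photo IDs.
db = {
    "license_0001": np.array([0.9, 0.1, 0.3, 0.2]),
    "license_0002": np.array([0.1, 0.8, 0.7, 0.1]),
}
probe = np.array([0.88, 0.12, 0.31, 0.19])
print(match_face(probe, db))
```

Note that the function deliberately returns a ranked candidate list rather than a single identity; as discussed later in this article, deciding whether any candidate is a true match is typically left to a human reviewer.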
Facial recognition has become more sophisticated in recent years.
Three-dimensional (3-D) facial recognition uses 3-D sensors to capture information about the shape of a face. Three-dimensional data points from a face vastly improve the precision of face recognition. One advantage of 3-D face recognition is that it’s not affected by changes in lighting. It can also identify a face from a range of viewing angles, including a profile.
In 2015, Facebook announced its algorithm could identify people in unclear images or images in which people were not looking directly at the camera. More recently, according to Facebook’s AI researchers, the algorithm doesn’t even need a face: it can identify people through hairdos, postures, gestures and body types.
Facial recognition accuracy depends on the algorithm used. In 2010, the U.S. National Institute of Standards and Technology (NIST) tested various facial recognition systems and found that the best algorithm correctly recognized 92 percent of unknown individuals from a database of 1.6 million criminal records.
Current systems can reach reliability of up to 99 percent, depending on the image – more reliable than recognition by humans. In a 2014 study, border control officers with specific education and training in facial recognition still accepted fraudulent photographs in 14 percent of cases.
Public debate over the use of facial recognition
Two recent reports have shined a spotlight on concerns about the accuracy and reliability of facial recognition. Both have received media attention.
In May 2016, the Government Accountability Office (GAO) issued a report on the FBI’s Next Generation Identification (NGI) program which is amassing multimodal biometric identifiers such as face-recognition-ready photos, iris scans, palm prints and voice data, and making that data available to other agencies at the state and federal levels. The report criticized the NGI for its lack of transparency, absence of reliability testing and invasion of privacy.
In October 2016, Georgetown Law’s Center on Privacy and Technology published findings from a year-long investigation based on more than 15,000 pages of records obtained through more than 100 FOIA requests. The report set out to inform the public about how facial recognition is used and the policies that govern how police can use it. The FBI’s use of facial recognition was already partly known; this report tackled the scale of local and state law enforcement involvement.
Concerns about the reliability and accuracy of facial recognition include:
- While companies marketing the technology claim accuracy rates higher than 95 percent, the algorithms used by police are not required to undergo public or independent testing to determine accuracy or check for bias before being deployed on everyday citizens.
- Accuracy rates are not equal across algorithms. According to NIST, algorithms developed in China, Japan and South Korea recognized East Asian faces far more readily than Caucasians. The reverse was true for algorithms developed in France, Germany and the United States.
- Facial-recognition systems are more likely either to misidentify or fail to identify African Americans, errors that could result in innocent citizens being marked as suspects in crimes. Little is being done to correct for the bias. One study co-authored by a senior FBI technologist found that Cognitec, whose algorithms are used by police in California, Maryland and Pennsylvania, consistently performed 5 to 10 percent worse on African Americans than on Caucasians. One algorithm, which failed to identify the right person in 1 out of 10 encounters with Caucasian subjects, failed nearly twice as often with African Americans.
- This bias is compounded by the disproportionate number of African Americans who are surveilled, stopped, booked and have mug shots taken by police. (This isn’t to say the algorithms are intentionally “racist.” Rather, they are flawed on racial lines, probably unintentionally during the algorithms’ development. An algorithm flaw in Google’s facial recognition tagged two African Americans as “gorillas.”)
- Facial recognition software often provides a list of possible matches. Police departments largely rely on officers to decide whether a candidate photo matches one in the list. A recent study showed that, without specialized training, humans make the wrong decision about such a match half the time.
- Face recognition systems aren’t audited for misuse. Of the 52 police agencies queried in the Georgetown Law study, only nine (17 percent) indicated that they log and audit their officers’ face recognition searches for improper use. Of those, only one agency, the Michigan State Police, provided documentation showing their audit regime was actually functional.
How police departments should plan for the use of facial recognition
There are several steps police departments should take when using facial recognition software:
- Require facial recognition software vendors to submit to NIST’s existing accuracy tests and any new tests it develops, and require vendors to address their algorithms’ race, age and gender bias with accuracy tests and performance results.
- Provide training for officers who will decide whether there is a match among the candidate photos returned by facial recognition software.
- Log and audit the use of agency facial recognition software.
- Be transparent with your community about your facial recognition software, the vendor, accuracy testing, logging and auditing procedures.
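To make the logging and auditing recommendation concrete, here is a minimal sketch of an append-only search log that a supervisor could later review for improper use. The field names, CSV format, badge and case numbers are all assumptions for illustration; a real deployment would integrate with the agency’s records-management system and access controls.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("face_search_audit.csv")
FIELDS = ["timestamp_utc", "officer_id", "case_number",
          "probe_image_id", "purpose"]

def log_face_search(officer_id, case_number, probe_image_id, purpose):
    """Append one face-recognition search to the audit log so that
    every query is attributable to an officer, a case and a purpose."""
    new_file = not AUDIT_LOG.exists()
    with AUDIT_LOG.open("a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp_utc": datetime.now(timezone.utc).isoformat(),
            "officer_id": officer_id,
            "case_number": case_number,
            "probe_image_id": probe_image_id,
            "purpose": purpose,
        })

# Hypothetical example entry: who searched, under what case, and why.
log_face_search("badge-4211", "2016-10-0042", "probe_7f3a",
                "robbery suspect identification")
```

A log like this is only useful if it is actually reviewed; as the Georgetown study found, agencies that log searches but never audit them gain little protection against misuse.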