The failure rate for searches has dropped from 5% in the year 2010 to 0.2% in 2018. © izusek/E+/Getty Images
The tremendous benefits of facial recognition technology for law enforcement identification applications have brought with them unparalleled challenges. Some among the public continue to believe that every individual is being tracked and watched wherever they go, or that facial recognition has an ethnic bias. These myths are incorrect—yet their persistence can create obstacles for hardworking officers and agencies.
To further complicate matters, even as facial recognition experts work to educate the public on the technology's benefits, the volume of incident-related video is growing rapidly, and local and federal law enforcement investigations increasingly rely on video as a key source of evidence for building cases. Body-worn cameras, in-car video and hundreds of millions of smartphones are generating massive quantities of video that can be used as evidence for arrests and in trials. This means a rapidly growing number of individuals who are not involved in the incidents, and not part of ongoing investigations or trials, are being recorded as well.
The Freedom of Information Act (FOIA) was enacted to make U.S. government agencies' functions more transparent. Through the FOIA, individuals, law firms, businesses and news organizations can request information, including these video recordings, from law enforcement agencies. Already resource-stretched agencies are forced to take time away from critical responsibilities to respond to each of these requests. A significant amount of the captured video includes images of individuals unrelated to the investigations whose privacy must be protected, so agencies must dedicate increasing amounts of time and resources to manually redacting these individuals' faces from video. Not only does this consume a tremendous amount of personnel bandwidth, it is also susceptible to error.
Fortunately, while technology is creating more responsibilities for officers in this way, it is also providing the tools to address many of the public’s concerns about privacy and the accuracy of tracking utilities like facial recognition.
How automated redaction helps
Through a combination of artificial intelligence and machine learning, new technology is delivering solutions to ease this burden, reducing the time needed for the manual processes of uploading, storing, searching, editing, and sharing video evidence. Most important, the technology leverages deep learning analytics to automatically analyze video, catalog faces, and enable redaction. This dramatically reduces the time, effort, and expense required for redaction services, in some cases by up to 90 percent.
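Vendors implement automated redaction differently, but the core step after a detector locates face regions is obscuring each region beyond recognition. The sketch below is a minimal, hypothetical illustration of that step only: it applies a simple box blur to face bounding boxes in a grayscale frame (the frame, box coordinates, and blur radius are all invented for the example; real systems work on color video and use detector output).

```python
def box_blur_region(frame, x, y, w, h, radius=2):
    """Blur one rectangular region of a grayscale frame (2D list of ints).

    Each pixel inside the box is replaced by the mean of its
    (2*radius+1)^2 neighborhood, clamped to the frame bounds.
    """
    rows, cols = len(frame), len(frame[0])
    out = [row[:] for row in frame]  # copy so reads use original pixels
    for r in range(y, min(y + h, rows)):
        for c in range(x, min(x + w, cols)):
            total, count = 0, 0
            for rr in range(max(0, r - radius), min(rows, r + radius + 1)):
                for cc in range(max(0, c - radius), min(cols, c + radius + 1)):
                    total += frame[rr][cc]
                    count += 1
            out[r][c] = total // count
    return out

def redact_frame(frame, face_boxes):
    """Blur every detected face box; pixels outside the boxes are untouched."""
    for (x, y, w, h) in face_boxes:
        frame = box_blur_region(frame, x, y, w, h)
    return frame
```

Because the blur is applied to a copy of the frame, the workflow mirrors the article's point: the redacted output can be shared or shown in court while the original evidence remains intact.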
Automated redaction addresses privacy concerns by ensuring that outside organizations requesting video do not see the faces of every person who happened to be captured within the scene. The video data is maintained in the evidence itself, but it does not get shared beyond the initial capture from body-worn cameras, in-car video, surveillance cameras, or phone cameras. When redacted video is played back inside a courtroom, juries and audiences are shown only the edited version with bystanders’ faces blurred beyond recognition, while the original video remains intact as evidence.
Using technology like this, agencies can streamline video redaction and respond to FOIA requests quickly and efficiently.
Advanced software improves accuracy
From the moment the public became aware of facial recognition as an application, there were stories about shortfalls in accuracy. Many claimed that individuals from certain ethnic backgrounds could not be reliably identified. This spurred concerns that accusations, arrests or even convictions could rest on an inaccurate facial recognition identification of a suspect.
In reality, the technology has evolved greatly since it was first introduced, and its accuracy has improved enormously in the last 10 years, according to an evaluation performed by the National Institute of Standards and Technology (NIST). According to a November 2018 news story, NIST reports that the failure rate for searches (cases where the software fails to find the matching face residing in a database) dropped from 5 percent in 2010 to 0.2 percent in 2018, a reduction of 96 percent.
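The 96 percent figure follows directly from the two failure rates: the drop from 5 percent to 0.2 percent is a relative reduction of (5.0 − 0.2) / 5.0. A quick calculation confirms it:

```python
old_rate = 5.0   # 2010 search failure rate, percent (per NIST)
new_rate = 0.2   # 2018 search failure rate, percent (per NIST)

# Relative reduction in the failure rate between the two evaluations
reduction = (old_rate - new_rate) / old_rate * 100
print(f"{reduction:.0f}% reduction")  # → 96% reduction
```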
When facial recognition software scans a face to look for a match, it is processing a number of different geometric vectors. While all software is different, most read such factors as the distance between eyes and the distance from chin to forehead. Depending on the sophistication of the software, it may be reading 50, 75 or more different factors. Typically, each analysis is performed individually, and the only data being stored on databases is the physical information contained in the photos themselves, along with the identities of the individuals in the photos.
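The matching step described above can be illustrated with a toy example. The sketch below is purely illustrative (the identities, vectors, and threshold are all invented, and real systems read 50 or more factors rather than three): each enrolled face is represented as a small vector of measured distances, and the probe face is matched to the closest enrolled identity by Euclidean distance, accepted only if it falls under a threshold.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, gallery, threshold=0.5):
    """Return the enrolled identity closest to the probe vector,
    or None if nothing falls within the acceptance threshold.

    gallery maps identity -> feature vector (e.g. eye distance,
    chin-to-forehead distance, ... in comparable normalized units).
    """
    name, dist = min(((n, euclidean(probe, v)) for n, v in gallery.items()),
                     key=lambda t: t[1])
    return name if dist <= threshold else None

# Hypothetical 3-factor gallery; a real database stores the photos'
# physical measurements alongside the identities, as the article notes.
gallery = {
    "subject_a": [0.62, 1.10, 0.48],
    "subject_b": [0.55, 1.25, 0.51],
}
print(best_match([0.61, 1.12, 0.47], gallery))  # → subject_a
```

The threshold is the design choice that matters for the accuracy debate: set too loose, it produces false matches; set too tight, searches fail to find faces that are in the database.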
The network of facial image databases available to federal and local law enforcement agencies has grown accordingly, making this level of accuracy even more important and meaningful. The databases include mug shots, the State Department's entire directory of visa and passport pictures, and photos from the Departments of Motor Vehicles.
As of October 2021, Americans in 47 states will be required to show either a passport or a new federally compliant Real ID driver's license to board any domestic flight. For non-government users of the technology, databases could include opt-in customers who have provided their photos for a variety of reasons including security checkpoints, ID access bracelets, VIP club memberships, etc. All of these databases may also be used to determine the identity of an individual caught on surveillance video and suspected of a civil or criminal infraction.
Because there are so many image databases available, it is imperative for law enforcement to have absolute certainty when surveillance video of a crime is being matched up in an attempt to identify a perpetrator caught on camera. Today’s best facial recognition offerings meet that requirement.
Facial recognition has become a hot button for privacy advocates in recent years. Yet there is no doubting that the technology has exceptional application potential—so much that it will without question continue to proliferate. As the ability to redact faces becomes more efficient and the recognition technology itself becomes more accurate, some of the general public’s concerns will continue to fade away.
Rob Thompkins, i-PRO National Sales Manager | Officer.com