An increasing number of companies and government agencies are turning to facial recognition technology, but is it really ready for prime time? It’s a valid question in the wake of some concerning developments, the latest of which involves the South Wales police force using facial recognition to scan for wanted individuals at major events. During the 2017 Champions League final in Cardiff, the system flagged more than 2,000 people as potential criminals, even though the vast majority of them were not.
The system works by scanning faces in a crowd with cameras and comparing the captured images against the police force’s database. Around 170,000 people attended the aforementioned match, and the facial recognition system flagged 2,470 of them as potential criminals. Of those matches, 2,297 turned out to be wrong, which works out to a false positive rate of roughly 93 percent. In other words, about nine out of 10 times, the technology got it wrong.
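The arithmetic behind that headline figure is straightforward. A minimal sketch, using only the numbers reported above (the variable names here are illustrative, not from any police system):

```python
# Figures reported from the 2017 Champions League final deployment
attendees = 170_000       # approximate crowd size
flagged = 2_470           # people the system matched as potential criminals
false_positives = 2_297   # of those matches, how many were wrong

# Share of the system's matches that were incorrect
false_positive_rate = false_positives / flagged
print(f"False positive rate: {false_positive_rate:.0%}")   # roughly 93%

# For context: what fraction of the whole crowd was wrongly flagged
crowd_fraction = false_positives / attendees
print(f"Share of crowd wrongly flagged: {crowd_fraction:.1%}")
```

Note that the rate here is the share of *matches* that were wrong, not the share of the crowd; only around 1.4 percent of attendees were wrongly flagged, which is how a system can be wrong nine times out of ten per match and still be defended as workable by the police.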
Even with the high rate of false positives at the soccer match, South Wales police view the technology as a success, noting that “no facial recognition is 100 percent accurate.” The force also says the system has contributed to 450 arrests since its introduction in June of last year, and that no arrest has ever been made based on a false positive.
“Over 2,000 positive matches have been made using our ‘identify’ facial recognition technology, with over 450 arrests. Successful convictions so far include six years in prison for robbery and four-and-a-half years imprisonment for burglary. The technology has also helped identify vulnerable people in times of crisis,” a spokesperson said.
The police agency further notes that the system’s accuracy continues to improve. So how did it wrongly flag so many people at the soccer match? The agency blamed the false positives on “poor quality images” supplied by some of its partners, including Uefa and Interpol, and pointed out that this was the first time the system had been deployed on a mass scale.