from the but-the-good-stuff-works-too-well! dept
UK Law Enforcement Pushed Hard To Maintain Access To Deeply Flawed Facial Recognition Tech
by Tim Cushing · Techdirt

While each iteration presents a chance to improve, there’s one very real reason why facial recognition tech will do a bit of stagnating. And that reason is the biggest market for this tech: law enforcement agencies.
In 2019, the US National Institute of Standards and Technology (NIST) studied 189 different facial recognition algorithms. The results were conclusive: every single one of them performed worse when asked to “recognize” anything other than white male faces. Asian and African American faces were up to 100 times more likely to be misidentified by the tech. While some algorithms performed a bit better than others, the average across the board was bad news for people who’ve already been subjected to decades of biased policing.
Adding tech to existing biases only compounds the inequities faster. That’s something that was pointed out to the EU Parliament less than a year later. Allowing cops to control both the input and the output just means the systems will generate plausible deniability for racist policing, rather than create a playing field that’s a bit more level.
Not only does facial recognition tech have a built-in bias problem, it also seems to have a problem with recognizing faces, no matter what color those faces are. Police forces in the UK have seen this happen repeatedly, racking up alarming false positive rates during tech rollouts. Despite these failures (and the unacknowledged flip side of false positives: false negatives), the UK government has continued to expand facial recognition programs.
The UK’s version of the NIST, the National Physical Laboratory (NPL), performed its own examination of tech currently being used by UK law enforcement. Its conclusions were just as unsurprising:
UK forces use the police national database (PND) to conduct retrospective facial recognition searches, whereby a “probe image” of a suspect is compared to a database of more than 19 million custody photos for potential matches.
The Home Office admitted last week that the technology was biased, after a review by the National Physical Laboratory (NPL) found it misidentified Black and Asian people and women at significantly higher rates than white men, and said it “had acted on the findings”.
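To make the mechanics concrete: a retrospective search like the one described above boils down to comparing a numeric “embedding” of the probe image against embeddings of every stored custody photo, with a confidence threshold deciding what counts as a candidate match. Here’s a minimal, purely illustrative Python sketch; the cosine-similarity approach, the function name, the embedding sizes, and the threshold value are all assumptions for illustration, not details of the PND’s actual system.

```python
# Purely illustrative sketch of a retrospective 1:N face search.
# NOT the PND's actual pipeline -- assumes faces have already been
# converted to fixed-length embedding vectors by some recognition model.
import numpy as np

def retrospective_search(probe, gallery, threshold):
    """Return indices of gallery embeddings whose cosine similarity
    to the probe embedding meets the confidence threshold."""
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe  # cosine similarity against every stored photo
    return np.flatnonzero(scores >= threshold), scores

# Toy data: the real database holds 19+ million photos; 1,000 random
# 128-dimensional vectors stand in here purely for illustration.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 128))
probe = rng.normal(size=128)  # the embedded "probe image" of a suspect

candidates, scores = retrospective_search(probe, gallery, threshold=0.2)
print(f"{len(candidates)} candidate matches at threshold 0.2")
```

That threshold is where the policy fight below plays out: raise it and fewer gallery photos clear the bar; lower it and the system coughs up more candidates, correct or not.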
The NPL’s findings were passed on to law enforcement by the Home Office last September. The National Police Chiefs’ Council (NPCC) responded about as well as it could: it ordered any users of the tech examined by the NPL to adjust sensitivity settings to raise the “confidence threshold” for matches. This order was meant to counteract (to a point) the false positives generated by the tech’s inability to accurately match images involving women, Black people, and pretty much anyone of any race under the age of 40. (Whew. That’s a lot of failure.)
Well, that apparently angered a whole lot of UK officers and supervisors. With the threshold raised, fewer matches (and, presumably, fewer incorrect matches) were being generated. Rather than recognize this as a necessary compromise to offset faulty tech, they decided to get bitchy about not being given enough false positives to act on.
That decision was reversed the following month after forces complained the system was producing fewer “investigative leads”. NPCC documents show that the higher threshold reduced the number of searches resulting in potential matches from 56% to 14%.
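The relationship between that threshold and the match rate is easy to see with a toy simulation. This sketch uses an invented score distribution and made-up threshold values (loosely tuned to mirror the 56%-to-14% drop in the NPCC documents); it is not NPCC data or the real system’s score model.

```python
# Toy demonstration of the quantity/quality tradeoff -- invented
# distribution, not NPCC data. Each value stands in for the best match
# score a single retrospective search produced.
import numpy as np

rng = np.random.default_rng(42)
top_scores = rng.beta(2, 2, size=100_000)  # hypothetical per-search top scores

# Made-up "original" vs. "raised" confidence thresholds.
for threshold in (0.45, 0.75):
    hit_rate = np.mean(top_scores >= threshold)
    print(f"threshold {threshold:.2f}: {hit_rate:.0%} of searches return a match")
```

Raising the threshold collapses the share of searches that produce any “investigative lead,” which is exactly the tradeoff the NPCC was asked to accept: fewer leads, but leads worth acting on.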
Yep, the NPCC rolled this decision back because officers weren’t getting as many matches as they were used to getting. Sure, the matches they were generating were likely much better than the ones they had generated in the past, but accuracy doesn’t seem to matter to UK law enforcement. It collectively pushed back hard enough to get this order reversed, allowing UK agencies to once again exploit the known, scientifically studied limitations of the facial recognition tech they were using. They valued quantity over quality — the sort of thing that naturally lends itself to the biased policing efforts these officers prefer to engage in.
Chief Constable Amanda Blakeman, an NPCC lead, claims there’s a tradeoff being made here that will ultimately benefit the public, even if it means more of them will be falsely arrested, while the tech’s false negatives continue letting actual criminals escape justice.
“The decision to revert to the original algorithm threshold was not taken lightly and was made to best protect the public from those who could cause harm, illustrating the balance that must be struck in policing’s use of facial recognition.”
Blakeman insists additional training is all that’s needed to overcome the known limitations of the tech. Anyone who has ever attended mandatory training knows this simply isn’t true. All it means is that a bunch of people will doze or daydream through the sessions, then pencil-whip whatever form supposedly “verifies” that the training they never paid attention to has been put to use. Blakeman even said some of this training will be “reissued,” which makes it clear no one was paying attention the first time around.
It’s fucking amazing. When confronted with the fact that their tech is flawed, UK law enforcement agencies demanded everything be reverted to the fully broken “normal” they’d been allowed to abuse since the tech’s inception. And now that this is all out in the open, police spokespeople are back to pretending law enforcement has anything to do with competently and carefully enforcing laws.