The Met Police didn’t check if facial recognition tech was racist before trialling it

The Metropolitan Police didn’t bother to assess whether live facial recognition technology discriminated on the basis of race or gender before they conducted trials with it, according to a new report by think tank WebRoots Democracy. 

The Met Police trialled the technology between 2016 and 2019, during which time it had not carried out an Equality Impact Assessment – a document designed to identify the potential for discrimination against different groups.

A failure to investigate whether live facial recognition is biased was one of the reasons the Court of Appeal recently ruled that South Wales Police had been using the technology unlawfully.

“Our report finds that the police have been asleep at the wheel when it comes to the impact the technology will have on communities of colour,” Areeq Chowdhury, director of WebRoots Democracy, said in a statement. 

“The fact that the Metropolitan Police did not even bother to undertake an Equality Impact Assessment before trialling the technology is staggering. It is very likely that facial recognition will be used disproportionately against people of colour and will exacerbate racial tensions in future.”

In a report that argues for a “generational ban” on facial recognition technology, WebRoots Democracy finds that the technology is likely to be used disproportionately against Muslims and communities of colour, to bolster calls for a veil ban in the UK, and to exert a “chilling effect” on political protest.

The Met Police has repeatedly denied that there is a bias in the facial recognition technology it uses. It has expressed its intention to continue using the technology despite the ruling against South Wales Police, saying that there are “different crime issues” in London, and that the force follows its own policy documents and uses “the latest accurate algorithm”.

Although the Met didn’t carry out an Equality Impact Assessment before first trialling the technology, it finally did so ahead of operational deployments in early 2020.  

This document attracted attention at the time for incorrectly claiming that the biometrics commissioner, Paul Wiles, supported the police’s use of the technology – something he disputed.

Peter Fussey, professor of sociology at the University of Essex, carried out an independent study of the Met’s facial recognition trials and found the technology was only 19 per cent accurate. He told NS Tech: “In our report we did state explicitly that a significant issue with the trials was the limited emphasis on equality and diversity issues”.

He notes that the Court of Appeal judgment in the South Wales case said that a trial period should be treated no differently to an active deployment of the technology, and that forces are obliged to carry out an equalities assessment beforehand.

“The [Met] claim their uses of the tech to be non-discriminatory. Cressida Dick said this explicitly in her RUSI lecture,” says Fussey. “As far as I can see this is based on evidence from the [Met’s] own statistical evaluation, which relies on an extremely small sample.”  

The Met and South Wales Police use NeoFace Live Facial Recognition technology supplied by the Japanese company NEC. In its Equality Impact Assessment, the Met says that it uses the NEC-3 algorithm, which the National Institute of Standards and Technology (NIST) has tested and found to be the most accurate on “many measures” compared with other facial recognition algorithms.

In its report, NIST reviewed 189 such algorithms and identified marked “demographic differentials” in their performance across different ethnicities.

Fussey challenges the Met’s claim that “differences in [facial recognition] algorithm performance due to ethnicity are not statistically significant” on the grounds that the total number of matches studied may itself be too small to support that conclusion.

Statistics from the Met show that only 28 people were engaged by police after being matched by live facial recognition systems across the ten test deployments. Fussey maintains that this is not a sufficiently large sample to draw definitive conclusions about demographic discrimination.
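To give a sense of the scale of that uncertainty, here is a minimal sketch in Python. The 50/50 demographic split below is a hypothetical assumption for illustration, not a figure from the Met’s data; the calculation simply shows how wide a standard 95 per cent confidence interval is around any proportion estimated from just 28 matches.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95 per cent Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical: suppose 14 of the 28 engagements involved people from
# one demographic group (an assumed 50/50 split, not the Met's data).
low, high = wilson_ci(14, 28)
print(f"95% CI: {low:.0%} to {high:.0%}")  # about 33% to 67%
```

On those assumed numbers the interval runs from roughly a third to two-thirds – consistent with anything from rough parity to a heavy skew, which is the substance of Fussey’s objection.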

A spokesperson for the Met Police said: “We are aware that the accuracy of the Live Facial Recognition technology has been subject to some debate and challenge. However, it is important to note that when using the technology, further checks and balances are always carried out before police action is taken. The final decision to engage with an individual flagged by the technology is always made by a human.

“We understand that the public will rightly expect the use of this technology to be rigorously scrutinised and used lawfully. The technology itself is developing all the time and work is being done to ensure the technology is as accurate as possible.”
