Facial recognition software should be dropped by police amid concerns it is “almost entirely inaccurate”, campaigners have warned.
Figures revealed in response to Freedom of Information requests by Big Brother Watch have shown that, for the Metropolitan Police, 98% of “matches” found by the technology were wrong, and for South Wales Police the figure was 91%.
The software is used at major events like the Notting Hill Carnival, sporting fixtures and music concerts to detect people on a watch list, including wanted criminals.
Director of Big Brother Watch Silkie Carlo said: “We’re seeing ordinary people being asked to produce ID to prove their innocence as police are wrongly identifying thousands of innocent citizens as criminals.
“It is deeply disturbing and undemocratic that police are using a technology that is almost entirely inaccurate, that they have no legal power for, and that poses a major risk to our freedoms.
“This has wasted millions in public money and the cost to our civil liberties is too high. It must be dropped.”
Figures released by the Metropolitan Police showed there had been 102 false positives – cases where someone was incorrectly matched to a photo – and only two that were correct.
Neither of those was arrested – one was no longer wanted by police, and the other was classed as a “fixated individual” who attended a Remembrance Day event.
For South Wales, 2,451 out of 2,685 matches were found to be incorrect – 91%. Of the remaining 234 correct matches, there were 110 interventions and 15 arrests.
The force used the software at various events including the Uefa Champions League 2017 final in Cardiff, international rugby matches and Liam Gallagher and Kasabian concerts.
Big Brother Watch said South Wales Police had stored pictures from both false positive and true positive matches for 12 months, potentially meaning images of more than 2,000 innocent people were stored by the force without the subjects’ knowledge. The force said the images were only stored as part of an academic evaluation for UCL, and not for any policing purpose.
The software used by SWP and the Met has not been tested for demographic accuracy, but in the United States concerns have been raised that facial recognition is less reliable for women and black people.
The report said: “Disproportionate misidentifications risk increasing the over-policing of ethnic minorities on the premise of technological ‘objectivity’.
“This issue will be further compounded if police disproportionately deploy automated facial recognition in areas with high BME (black and minority ethnic) populations, such as the Metropolitan Police’s repeated targeting of Notting Hill Carnival.”
It also highlights the fact that the software is used to search the Police National Database, which contains hundreds of thousands of images of innocent people.
“With more and more biometric images being fed into the database – subsets of which are used with real-time facial recognition cameras – innocent people are increasingly at risk of being wrongfully stopped or even arrested,” it said.
Big Brother Watch is calling for UK authorities to stop using automated facial recognition software with surveillance cameras, backed by Labour MP for Tottenham David Lammy and campaign groups including the Football Supporters Federation, Index on Censorship, Liberty, and the Race Equality Foundation.
A spokeswoman for Scotland Yard said the force was trialling facial recognition technology, which was used at the previous two Notting Hill Carnivals and the 2017 Remembrance Sunday service “to assess if it could assist police in identifying known offenders in large events, in order to protect the wider public”.
Addressing false positives, the force said: “We do not consider these as false positive matches because additional checks and balances are in place to confirm identification following system alerts.”