Use Of Facial Recognition Tech 'Dangerously Irresponsible'
Black and minority ethnic people could be falsely identified and face questioning because police have failed to test how well their systems deal with non-white faces, say campaigners.
At least three chances to assess how well the systems deal with ethnicity were missed over the past five years, the BBC found.
Campaigners said the tech had too many problems to be used widely.
"It must be dropped immediately," said privacy rights group Big Brother Watch.
Several UK police forces have been trialling controversial new facial recognition technology, including automated systems which attempt to identify the faces of people in real time as they pass a camera.
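In outline, systems of this kind typically reduce each face captured on camera to a numeric "embedding" vector and compare it against the vectors of everyone on a watch list, flagging anyone whose similarity score crosses a threshold. The sketch below illustrates the general idea only; the vector size, names and threshold are illustrative assumptions, not details of any police deployment.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face: np.ndarray,
                            watchlist: dict[str, np.ndarray],
                            threshold: float = 0.6) -> list[str]:
    """Return the watch-list identities scoring above the threshold."""
    return [name for name, ref in watchlist.items()
            if cosine_similarity(face, ref) >= threshold]

# Invented example: two people on the watch list, one noisy re-capture.
rng = np.random.default_rng(0)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
detected = watchlist["person_a"] + rng.normal(scale=0.1, size=128)
print(check_against_watchlist(detected, watchlist))  # likely ['person_a']
```

The threshold is the crux: set it low and the system flags more innocent passers-by; set it high and it misses genuine matches. Those error rates can differ between demographic groups, which is why campaigners want them tested separately.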
Documents from the police, Home Office and university researchers show that police are aware that ethnicity can have an impact on such systems, but have failed on several occasions to test this.
The Home Office said facial recognition can be an "invaluable tool" in fighting crime.
"The technology continues to evolve, and the Home Office continues to keep its effectiveness under constant review," a spokesman told the BBC.
The ability of facial recognition software to cope with black and ethnic minority faces has proved a key concern for those worried about the technology, who claim the software is often trained on predominantly white faces.
Minutes from a police working group reveal that the UK police's former head of facial recognition knew that skin colour was an issue. At an April 2014 meeting, Durham Police Chief Constable Mike Barton noted "that ethnicity can have an impact on search accuracy".
He asked CGI, the Canadian company managing the police's facial image database, to investigate the issue, but subsequent minutes from the working group do not mention a follow-up.
Facial recognition was introduced on the Police National Database (PND), which includes around 13 million faces, in 2014.
The database has troubled privacy groups because it contains images of people subsequently cleared of any offence. A 2012 court decision ruled that holding such images was unlawful.
The "unlawful" images are still held on the PND. The government is currently investigating ways to purge them from the system.
Despite this, the PND facial recognition system, provided by German company Cognitec, has proved very popular.
The number of face match searches done on the PND grew from 3,360 in 2014 to 12,504 in 2017, Freedom of Information requests to the Home Office have revealed.
In 2015, a team of assessors from the Home Office tested the PND facial search system, using about 200 sample images. They had identified ethnicity information about the sample photos but, once again, failed to use this opportunity to check how well the system worked with different skin colours.
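Such a check need not be elaborate. A minimal sketch of the kind of disaggregated test the assessors could have run, assuming each sample search carried an ethnicity label and a true-match outcome (the records below are invented for illustration):

```python
from collections import defaultdict

# Invented records: (ethnicity label, whether the system found a true match).
results = [
    ("white", True), ("white", False), ("white", True),
    ("black", False), ("black", False), ("asian", True),
]

totals = defaultdict(lambda: [0, 0])  # group -> [true matches, searches]
for group, matched in results:
    totals[group][1] += 1
    if matched:
        totals[group][0] += 1

for group, (matches, searches) in totals.items():
    print(f"{group}: {matches}/{searches} true matches ({matches / searches:.0%})")
```

Comparing those per-group match rates is precisely the kind of test campaigners say was repeatedly missed.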
The same Home Office report also estimated that, across the entire PND, about 40% of the images were duplicated.
It noted that this meant the UK government had overpaid hundreds of thousands of pounds to Cognitec, because the company charges more once the number of images (or "templates") on the database exceeds 10 million.
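A back-of-the-envelope calculation, using the report's own figures, shows why the duplicates matter financially:

```python
# Figures reported above; the actual contract terms and per-template
# prices are not public here, so this is an illustration only.
total_images = 13_000_000
duplicate_rate = 0.40
pricing_threshold = 10_000_000

unique_images = total_images * (1 - duplicate_rate)  # about 7.8 million
print(f"Unique images: {unique_images:,.0f}")
print(f"Below the {pricing_threshold:,} pricing tier: {unique_images < pricing_threshold}")
```

On those figures, a deduplicated database would fall below the 10-million-template tier at which Cognitec's higher charges kick in.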
The Home Office assessment also found the facial recognition system performed far worse than the human eye. It said: "Out of the initial 211 searches, the automated facial search of PND identified just 20 true matches, whereas visual examination by the tester identified a total of 56 matches."
Cognitec declined to comment on costs, but said its matching technology had improved since the Home Office report, and that facial recognition results always required review by a human.
A spokesman for the National Police Chiefs' Council said the technology had the potential to disrupt criminals, but that any roll-out must demonstrate its effectiveness within "sufficient safeguards". He added that work was being done to improve the system's accuracy and remove duplicate images.
Another chance to check for racial bias was missed last year during trials by South Wales Police of real-time facial recognition software, which was used at sports events and concerts. Cardiff University carried out an assessment of the force's use of the technology.
That study stated that "due to limited funds for this trial", ethnicity was not tested.
Cardiff's report noted, however, that "during the evaluation period, no overt racial discrimination effects were observed", but said this may be due to the demographic make-up of the watch lists used by the force.
In addition, an interim report by a biometrics advisory group to the government, which is considering the ethical issues of facial recognition, highlighted concerns about the lack of ethnic diversity in training datasets.
Under-representation of certain types of faces, particularly those from ethnic minorities, could mean bias "feeds forward" into the use of the technology, it said.
Silkie Carlo, director of campaign group Big Brother Watch, said: "The police's failure to do basic accuracy testing for race speaks volumes.
"Their wilful blindness to the risk of racism, and the risk to Brits' rights as a whole, reflects the dangerously irresponsible way in which facial recognition has crept on to our streets."
The technology had too many problems to justify its use, she said.
"It must be dropped immediately," Ms Carlo added.
Big Brother Watch is currently taking legal action against the Metropolitan Police over its use of automated facial recognition systems.