Biased And Wrong? Facial Recognition Tech In The Dock
Police and security forces around the world are testing out automated facial recognition systems as a way of identifying criminals and terrorists. But how accurate is the technology, and how easily could it - and the artificial intelligence (AI) it is powered by - become tools of oppression?
Imagine a suspected terrorist setting off on a suicide mission in a densely populated city centre. If he detonates the bomb, hundreds could die or be critically injured.
CCTV scanning faces in the crowd picks him up and automatically compares his features to photos on a database of known terrorists or "persons of interest" to the security services.
The system raises an alarm and rapid deployment anti-terrorist forces are despatched to the scene where they "neutralise" the suspect before he can trigger the explosives. Hundreds of lives are saved. Technology saves the day.
But what if the facial recognition (FR) tech was wrong? It wasn't a terrorist, just someone unlucky enough to look similar. An innocent life would have been summarily snuffed out because we put too much faith in a fallible system.
What if that innocent person had been you?
This is just one of the ethical dilemmas posed by FR and the artificial intelligence underpinning it.
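The matching step in that scenario typically works by reducing each face to a numerical "embedding" and comparing it against a watchlist, raising an alarm when a similarity score crosses a threshold. The Python sketch below is a minimal illustration of the idea, not any vendor's actual system; the watchlist, embeddings and threshold value are all invented for the example.

```python
import numpy as np

# Hypothetical watchlist: name -> face embedding produced by some
# (unspecified) face recognition model. All values here are invented.
watchlist = {
    "suspect_a": np.array([0.12, 0.85, 0.43, 0.20]),
    "suspect_b": np.array([0.77, 0.10, 0.55, 0.31]),
}

THRESHOLD = 0.9  # similarity above this raises an alarm (illustrative value)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_face(embedding):
    """Compare one face from the camera feed against the watchlist."""
    for name, reference in watchlist.items():
        score = cosine_similarity(embedding, reference)
        if score > THRESHOLD:
            # A "match" is only a similarity score crossing a threshold -
            # an innocent lookalike can cross it too.
            return name, score
    return None, 0.0
```

Everything hinges on that threshold: set it low and innocent lookalikes trigger alarms; set it high and genuine targets slip through unnoticed.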
Training machines to "see" - to recognise and differentiate between objects and faces - is notoriously difficult. Not so long ago, computer vision, as it is sometimes called, was struggling to tell the difference between a muffin and a chihuahua - a litmus test of the technology.
Computer scientists Joy Buolamwini of the MIT Media Lab (and founder of the Algorithmic Justice League) and Timnit Gebru, technical co-lead of Google's Ethical Artificial Intelligence Team, have shown that facial recognition has greater difficulty differentiating between men and women the darker their skin tone. A woman with dark skin is much more likely to be mistaken for a man.
"About 130 million US adults are already in face recognition databases," Dr Gebru told the AI for Good Summit in Geneva in May. "But the original datasets are mostly white and male, so biased against darker skin types - there are huge error rates by skin type and gender."
The Californian city of San Francisco recently banned the use of FR by transport and law enforcement agencies in an acknowledgement of its imperfections and the threat it poses to civil liberties. But other US cities, and other countries around the world, are trialling the technology.
In the UK, for example, police forces in South Wales, London, Manchester and Leicester have been testing the tech, to the consternation of civil liberties organisations such as Liberty and Big Brother Watch, both concerned by the number of false matches the systems make.
This means innocent people being wrongly identified as potential criminals.
"Bias is something everyone should be worried about," said Dr Gebru. "Predictive policing is a high stakes scenario."
Black Americans make up 37.5% of the US prison population (source: Federal Bureau of Prisons) despite accounting for just 13% of the US population, so badly designed algorithms fed these datasets might predict that black people are more likely to commit crime.
It doesn't take a genius to work out what implications this might have for policing and social policies.
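A toy calculation shows how directly that skew would pass through. A naive risk model trained on nothing but historical incarceration records can only reproduce the over-representation already baked into them; the percentages below are the ones quoted above, everything else is illustrative:

```python
# Population and prison-population shares quoted in the article.
population_share = {"black": 0.13, "other": 0.87}
prison_share = {"black": 0.375, "other": 0.625}

# A naive "risk score" is just each group's over-representation in the
# prison data - the model learns the skew, not anything about individuals.
for group in population_share:
    ratio = prison_share[group] / population_share[group]
    print(f"{group}: {ratio:.2f}x representation relative to population")
# black: 2.88x representation relative to population
# other: 0.72x representation relative to population
```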
Just this week, academics at the University of Essex concluded that matches in the Metropolitan Police's London trials were wrong 80% of the time, potentially leading to serious miscarriages of justice and infringements of citizens' right to privacy.
One British man, Ed Bridges, has launched a legal challenge to South Wales Police's use of the technology after his photo was taken while he was out shopping, and the UK's Information Commissioner, Elizabeth Denham, has expressed concern over the lack of a legal framework governing the use of FR.
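That figure is less surprising than it sounds once base rates are considered: when a system scans very large crowds for a tiny watchlist, even a small false positive rate guarantees that most alerts are wrong. A back-of-the-envelope sketch - the crowd size, number of genuine targets and error rates below are illustrative assumptions, not figures from the trials:

```python
# Illustrative assumptions - not figures from the Metropolitan Police trials.
faces_scanned = 100_000      # faces passing the cameras
genuine_targets = 10         # watchlist members actually in the crowd
true_positive_rate = 0.90    # chance a real target triggers an alert
false_positive_rate = 0.001  # chance an innocent face triggers an alert

true_alerts = genuine_targets * true_positive_rate
false_alerts = (faces_scanned - genuine_targets) * false_positive_rate

share_wrong = false_alerts / (true_alerts + false_alerts)
print(f"{share_wrong:.0%} of alerts point at innocent people")  # ~92%
```

Even with a false positive rate of just 0.1%, roughly nine out of ten alerts in this scenario would be false matches.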
But such concerns haven't stopped tech giant Amazon selling its Rekognition FR tool to police forces in the US, despite a half-hearted shareholder revolt that came to nothing.
Amazon says it has no responsibility for how customers use its technology. But compare that attitude to that of Salesforce, the customer relationship management tech company, which has developed its own image recognition tool called Einstein Vision.
"Facial recognition tech might be appropriate in a prison to keep track of prisoners or to prevent gang violence," Kathy Baxter, Salesforce's architect of ethical AI practice, told the BBC. "But when police wanted to use it with their body cameras when arresting people, we deemed that inappropriate.
"We need to be asking whether we should be using AI at all in certain scenarios, and facial recognition is one example."
And now FR is being used by the military as well, with tech vendors claiming their software can not only identify potential enemies but also discern suspicious behaviour.
But Yves Daccord, director-general of the International Committee of the Red Cross (ICRC), is seriously concerned about these developments.
"War is hi-tech these days - we have autonomous drones, autonomous weapons, making decisions between combatants and non-combatants. Will their decisions be correct? They could have mass destruction impact," he warns.
So there seems to be a growing global consensus that AI is far from perfect and needs regulating.
"It's not a good idea just to leave AI to the private sector, because AI can have a huge influence," concludes Dr Chaesub Lee, director of the telecommunication standardisation bureau at the International Telecommunications Union.
"Use of good data is essential, but who ensures that it is good data? Who ensures that the algorithms are not biased? We need a multi-stakeholder, multidisciplinary approach."
Until then, FR tech remains under suspicion and under scrutiny.