The Algorithms Are Watching Us, But Who Is Watching The Algorithms?

Empowering algorithms to make potentially life-changing decisions about citizens still carries a significant risk of unfair discrimination, according to a new report published by the UK's Centre for Data Ethics and Innovation (CDEI). In some sectors, the need to provide adequate resources to ensure that AI systems are unbiased is becoming particularly pressing: namely the public sector, and specifically policing.

The CDEI spent two years investigating the use of algorithms in both the private and the public sector, and found widely varying levels of maturity in dealing with the risks those algorithms pose. The financial sector, for example, appears to regulate the use of data for decision-making much more closely, while local government is still in the early days of managing the issue.

Although awareness of the threats that AI might pose is growing across all industries, the report found no outstanding examples of good practice when it comes to building responsible algorithms. This is especially problematic in the delivery of public services like policing, which citizens cannot opt out of, the CDEI found.

Research conducted as part of the report concluded that there is widespread concern across the UK law enforcement community about the lack of official guidance on the use of algorithms in policing. "This gap should be addressed as a matter of urgency," the researchers said.

Police forces are rapidly increasing their adoption of digital technologies: at the start of the year, the government announced £63.7 million ($85 million) in funding to push the development of police technology programs. New tools range from data visualization technologies to algorithms that can spot patterns of potential crime, and even predict someone's likelihood of re-offending.

If they are deployed without appropriate safeguards, however, data analytics tools can have unintended consequences. Reports have repeatedly shown that police data can be biased and is often unrepresentative of how crime is distributed. According to data released by the Home Office last year, for example, people who identify as Black or Black British are almost ten times as likely to be stopped and searched by an officer as white people.
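For context, figures like this are typically derived by comparing stop-and-search rates per 1,000 people across ethnic groups. A back-of-the-envelope sketch of that calculation follows; the stop counts and population sizes below are illustrative placeholders, not the official Home Office statistics.

```python
# Illustrative sketch of how a stop-and-search disparity ratio is derived.
# All numbers here are hypothetical placeholders, NOT official figures.
stops = {"Black or Black British": 38_000, "White": 190_000}            # hypothetical stop counts
population = {"Black or Black British": 1_000_000, "White": 48_000_000}  # hypothetical populations

# Rate per 1,000 people in each group.
rates = {group: stops[group] / population[group] * 1000 for group in stops}
for group, rate in rates.items():
    print(f"{group}: {rate:.1f} stops per 1,000 people")

# The ratio of the two rates gives the "X times as likely" figure.
ratio = rates["Black or Black British"] / rates["White"]
print(f"Disparity ratio: {ratio:.1f}x")
```

With these placeholder inputs, the script prints a disparity ratio of roughly 9.6x, which is how a statistic reads as "almost ten times as likely".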

An AI system that relies on this type of historical data risks perpetuating discriminatory practices. The Met Police used a tool called the Gangs Matrix to identify those at risk of engaging in gang violence in London; built on out-of-date data, the technology disproportionately featured young Black men. After activists voiced concerns, the matrix's database was eventually overhauled to reduce the over-representation of individuals from Black African Caribbean backgrounds.

Examples like the Gangs Matrix have led to mounting concern among police forces, which is yet to be met with guidance from the government, argued the CDEI. Although work is under way to develop a national approach to data analytics in policing, for now police forces have to resort to setting up their own patchwork of ethics committees and guidelines, and not always with convincing results.

Similar conclusions were reached in a report published earlier this year by the UK's Committee on Standards in Public Life, led by former head of MI5 Lord Evans, who expressed particular concern at the use of AI systems by police forces. Evans noted that there was no coordinated process for evaluating and deploying algorithmic tools in law enforcement, and that it is often left to individual police forces to draw up their own ethical frameworks.

The issues that police forces are facing in their use of data are also prevalent across other public services. Data science is applied across government departments to inform decisions about citizens' welfare, housing, education or transportation; and relying on historical data that is riddled with bias can equally result in unfair outcomes.

Only a few months ago, for example, the UK government's exam regulator Ofqual designed an algorithm to assign final-year grades to students, to avoid organizing physical exams in the middle of the Covid-19 pandemic. It emerged that the algorithm produced unfair predictions, based on biased data about different schools' past performance. Ofqual promptly retracted the tool and reverted to teachers' grade predictions.

Improving the process of data-based decisions in the public sector should be seen as a priority, according to the CDEI. "Democratically-elected governments bear special duties of accountability to citizens," reads the report. "We expect the public sector to be able to justify and evidence its decisions." 

The stakes are high: earning the public's trust will be key to the successful deployment of AI. Yet the CDEI's report showed that up to 60% of citizens currently oppose the use of AI-infused decision-making in the criminal justice system. The vast majority of respondents (83%) are not even certain how such systems are used by police forces in the first place, highlighting a transparency gap that needs to be plugged.

There is a lot that can be gained from AI systems if they are deployed appropriately. In fact, argued the CDEI's researchers, algorithms could be key to identifying historical human biases – and making sure they are removed from future decision-making tools.  

"Despite concerns about 'black box' algorithms, in some ways algorithms can be more transparent than human decisions," said the researchers. "Unlike a human, it is possible to reliably test how an algorithm responds to changes in parts of the input.  

The next few years will require strong incentives to ensure that organizations develop AI systems that produce balanced decisions. A perfectly fair algorithm may not be on the short-term horizon just yet; but AI technology could soon prove useful in bringing humans face to face with their own biases.
