
UK Police Machine Learning Deployment ‘Lacks Regulation & Oversight’

Ross Kelly


The use of machine learning by UK police services has been criticised in a report published last week, which argues that regulation, oversight and transparency are critical to ensuring the proper use of emerging technology.

A report published by the Royal United Services Institute (RUSI) has raised concerns that British police forces are using machine learning algorithms despite there being little evidence to justify their use.

A lack of regulation, oversight and adequate codes of practice were all key issues raised in the report titled ‘Machine Learning Algorithms and Police Decision Making’.

As police forces across the UK and the world look to keep pace with an ever-changing society – in which criminals are increasingly capitalising on technological advances – questions of fairness and regulatory practice must be addressed.

Police services in the UK have previously come under fire for their use of emerging technologies such as automated facial recognition (AFR) technology or software used to access mobile devices.

Exploring the Possibilities of ML

The report explores the applications of machine learning algorithms in police decision making. In particular, it focuses on the hypothetical prediction of an individual’s “proclivity for future crime” – a rather concerning statement for any dystopian SciFi thriller fan.

The legal, ethical and regulatory challenges posed by the use of such tools during police operations are also an area of concern, according to RUSI.

The use of ML algorithms to support police decision making is still in its infancy, and there is a distinct lack of research assessing how the use of an algorithm can influence an officer’s decision making in the field.

Furthermore, the report suggested there is limited evidence on the ‘efficacy and efficiency’ of different systems, along with their cost-effectiveness and impact upon civil liberties.

“Limited, localised trials should be conducted and comprehensively evaluated to build such an evidence base” before any decision is made to deploy this emerging tech on a large scale, the report proposed. It also stated that there is a lack of “clear guidance and codes of practice” outlining the appropriate use of algorithmic tools, and as such it urges restraint.

Algorithmic tools are already being used by police forces in a number of different capacities. Automated facial recognition (AFR) technology has been deployed extensively – much to the consternation of civil rights campaigners such as Big Brother Watch – and other algorithmic tools have been used to highlight specific areas in which crime is rampant.

South Wales Police and the Metropolitan Police Service have both trialled AFR, at the 2017 Champions League Final in Cardiff and the Notting Hill Carnival respectively.

Kent Police has also been using an algorithmic system known as ‘PredPol’ to predict areas in which crime may become prevalent. The system combines crime data, taking into account the time, location and type of each offence, to anticipate where another criminal act may occur.
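
To give a rough sense of the general approach (this is a hypothetical sketch, not PredPol’s proprietary model, which the report does not describe), the example below bins recent incidents into grid cells and weights newer events more heavily to produce a ranked list of areas. All names, parameters and data here are illustrative assumptions.

```python
# Minimal, hypothetical sketch of grid-based crime hotspot ranking.
# This is NOT PredPol's actual algorithm; it only illustrates the idea of
# combining time, location and crime type to rank areas for attention.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import exp

@dataclass
class Incident:
    when: datetime      # time of the recorded crime
    lat: float          # latitude
    lon: float          # longitude
    crime_type: str     # e.g. "burglary", "street violence"

def rank_hotspots(incidents, now, cell_size=0.01, half_life_days=14.0):
    """Score grid cells by recency-weighted incident counts (illustrative only)."""
    scores = defaultdict(float)
    for inc in incidents:
        # Assign the incident to a coarse lat/lon grid cell.
        cell = (round(inc.lat / cell_size), round(inc.lon / cell_size))
        # Exponential time decay: newer incidents contribute more.
        age_days = (now - inc.when).total_seconds() / 86400.0
        scores[cell] += exp(-age_days / half_life_days)
    # Highest-scoring cells first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example usage with made-up data.
now = datetime(2018, 9, 1)
incidents = [
    Incident(now - timedelta(days=2), 51.27, 1.08, "burglary"),
    Incident(now - timedelta(days=3), 51.27, 1.08, "street violence"),
    Incident(now - timedelta(days=40), 51.30, 1.10, "burglary"),
]
print(rank_hotspots(incidents, now)[:3])
```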

Kent Police said initial trials of the technology helped to reduce street violence by 6%, yet official statistics show crime has since risen. The deployment of ML technology could have unintended or unanticipated consequences for individuals around the UK, the report claimed.


Algorithmic Bias

RUSI highlighted algorithmic bias as the main concern in its report. Even if a model does not include a variable for race, for example, there is still room for bias to enter through other variables, such as a person’s postcode.
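
The proxy effect is easy to demonstrate on synthetic data: if postcode is correlated with a protected attribute, a model trained without that attribute can still produce different outcomes for each group. The sketch below is a hypothetical illustration of this mechanism, not a reconstruction of any system examined in the report; all data and figures are made up.

```python
# Hypothetical illustration of proxy bias: the protected attribute is never
# given to the model, yet postcode carries enough signal to reproduce it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: group membership (a protected attribute).
group = rng.integers(0, 2, size=n)

# Postcode area is strongly correlated with group (residential clustering),
# simplified here to a single binary "area" indicator.
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical "flagged by police" labels are skewed against group 1.
flagged = (rng.random(n) < np.where(group == 1, 0.30, 0.10)).astype(int)

# Train on postcode only; the protected attribute is excluded from the features.
X = postcode.reshape(-1, 1)
model = LogisticRegression().fit(X, flagged)
risk = model.predict_proba(X)[:, 1]

# Average predicted risk still differs sharply by group.
for g in (0, 1):
    print(f"group {g}: mean predicted risk = {risk[group == g].mean():.2f}")
```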

The report also noted that further mistakes are possible because some models rely on police data. Data held by the police may be incomplete or unreliable and, as such, cannot reliably be used to predict whether an individual is likely to re-offend, for example.

Continued Use

Despite mounting concerns over the use of emerging technologies, police continue to trial tech on Britain’s streets. These trials, the report suggests, are going ahead without any regulatory framework or government oversight.

Trials also suffer from a lack of transparency, it said. Previous pilot schemes have shown alarming levels of inaccuracy, particularly in identifying people of colour. To prevent abuse, RUSI called for the Home Office to establish clear-cut regulations and said that increased governance was a “matter of urgency”.

Most importantly, the report suggests that ML algorithms must be overseen by humans, stating they “must not be initialised, implemented and then left alone to process data”.

It noted that ML algorithms will require continuous attention and vigilance in order to maintain accuracy and consistency. That oversight, in turn, requires training: RUSI suggested that police training must be improved to ensure proper use.

“Officers may need to be equipped with a new skill set to effectively understand, deploy and interpret algorithmic tools in combination with their professional expertise,” the report stated. “It is essential that the officers using the technology are sufficiently trained to do so in a fair and responsible way.”

Ross Kelly

Staff Writer
