The use of live facial recognition technology by UK police does not meet “minimum ethical and legal standards” and should be banned from being applied in public places, according to researchers from the University of Cambridge.
A group of researchers at the Minderoo Center for Technology and Democracy analyzed three distinct deployments of facial recognition technology (FRT) by two police forces—South Wales Police and the Metropolitan Police Service (MPS). In every case, the use of FRT was found to fail those minimum standards and risk violating human rights.
The researchers developed a review tool to check FRT submissions against current legal guidelines—including the UK’s Data Protection and Equality laws—and outcomes from UK court cases.
They applied their ethical and legal standards to three uses of FRT by British police. In the first two cases, MPS and South Wales Police used the technology to scan crowds and match faces against a criminal database and "watch list." In the third case, officers from South Wales Police used FRT smartphone apps to scan crowds and identify "wanted" people in real time.
In all three cases, there was a lack of transparency, accountability, and oversight in the use of the FRT.
The study found that important information about police use of FRT "is not visible," such as published demographic data on arrests or other outcomes, which the researchers say makes it difficult to evaluate whether the tools exhibit racial bias. The report also found that police did not conduct regular internal audits to determine whether their technology was biased or flawed.
In addition to the lack of transparency, the researchers found that there is very little accountability for the police—there is no clear avenue of redress for people or communities harmed by the police's use, or misuse, of the technology. "Police are not expected to respond or be held accountable for harm caused by facial recognition technology," said Evani Radiya-Dixit, lead author of the report.
“There is no redress mechanism for individuals and communities harmed by technology policing. To protect human rights and improve responsibility for the use of technology, we need to ask what values we want to embed in technology,” said Radiya-Dixit.
Professor Gina Neff, Chief Executive Officer at the Minderoo Center for Technology and Democracy, said: "In recent years, police around the world, including in England and Wales, have deployed facial recognition technology. Our goal was to assess whether these deployments used commonly agreed measures for the safe and ethical use of these technologies."
"Building a distinct audit system allowed us to examine the issues of privacy, equality, accountability, and oversight involved in the use of these technologies by police," Neff said.
The researchers join experts from the EU and the UN High Commissioner for Human Rights in calling for a ban on the use of FRT in public spaces.
For many years, British police have been experimenting with FRT in various settings to fight crime and terrorism. Its first recorded use in the UK was in 2015, when Leicestershire Police deployed it at a music festival. Since then, it has been widely used by South Wales Police and the Metropolitan Police to scan hundreds of thousands of people at protests, sporting events, concerts, the Notting Hill Carnival, train stations, and busy shopping streets.
Concern about police use of FRT is global. The same technology used by the Metropolitan Police has been found to misidentify Black people at higher rates. In 2020, Amnesty International led a call to ban police use of FRT on the grounds that it would "increase human rights violations."