Today saw a landmark ruling by the Court of Appeal that South Wales Police’s use of automatic facial recognition technology is not lawful. This is a major step forward for civil liberties in the UK and reflects concerns that this technology is intrusive, authoritarian and discriminatory. Having brought the case, along with the human rights group Liberty, I want to set out what the judgement said, why it matters and what the future holds for the use of automatic facial recognition technology in the UK.

What did the court say?

One of the court’s key findings was that, contrary to the arguments of South Wales Police and the Home Office, automatic facial recognition is not analogous to the taking of photographs or the use of CCTV cameras. Instead, the court found that because the technology involves capturing images and automatically processing the sensitive personal data of many members of the public, most of whom will be of no interest to the police, it is far more intrusive than taking photographs or using CCTV. The court also found that there is not currently “the necessary quality of law” for the technology to be used within existing legislation or policies.

The judgement is also clear that South Wales Police’s policies governing the use of facial recognition have been inadequate. First, their Privacy Impact Assessment is written in such a way that facial recognition could be used to find absolutely anyone who is of interest to the police – leaving, in the court’s judgement, “too broad a discretion vested in the individual police officer to decide who should go onto the watchlist”. Second, when it came to the police’s Public Sector Equality Duty, the court judged that South Wales Police had “never sought to satisfy themselves… that the software program in this case does not have an unacceptable bias on grounds of race or sex”.

Finally, the court found that the force’s Data Protection Impact Assessment failed properly to assess the risks to the rights and freedoms of members of the public who are scanned. At the time South Wales Police started using this technology, many of us felt their attitude towards human rights concerns was gung-ho and complacent; today’s judgement vindicates those concerns.

Why does this matter?

Events of the last few months have strengthened my view that facial recognition technology is being operated as a mass surveillance tool that does not belong in a democratic society.

For one thing, the Black Lives Matter movement has further underlined how policing (both in the US and the UK) has a strong racial dimension, with black citizens experiencing worse treatment at the hands of police and security forces. Our case allowed us to highlight how facial recognition technology is less accurate at recognising black and other ethnic minority faces (as well as women’s faces), which could lead to black citizens being disproportionately and wrongly challenged by police.

This discrimination is obviously an injustice in itself, but it also entrenches unjust policing patterns. It is precisely because of this discriminatory nature that IBM recently withdrew from the facial recognition market, closely followed by Amazon and Microsoft, amid concerns about how police forces use the technology for racial profiling. Using facial recognition as a mass surveillance tool is anathema to anyone who wants to build a fairer, more equal society, not just because it treats everyone who might cross a camera’s gaze as a criminal, but because those unfortunate enough to be misidentified are disproportionately likely to be people who already face discrimination in a host of other ways. We should be dismantling discrimination, not reinforcing it.

As if this wasn’t enough, the intrusive nature of the technology is also a cause for concern. The current COVID-19 crisis has perfectly encapsulated the dangers of “mission creep” when we give up our data to government or state agencies. We’ve already seen concerns raised about the ethics of private companies being drafted in to help with technological solutions to COVID-19. Many people may decide to give the government and NHS the benefit of the doubt and to give up personal data for Track & Trace apps or so-called Immunity Passports – but what about when the crisis is over?

Trusting government, let alone private companies, to destroy that data feels like writing a blank cheque. With the fallout of COVID-19 likely to last for some time, it really isn’t so hard to envisage a dystopian future where the state (no doubt claiming benign intent) uses facial recognition to track our movements, our interactions and our activities. Today’s judgement is a significant step in stopping that from happening.

What happens next?

South Wales Police have indicated that they won’t be appealing today’s ruling, so the judgement will stand. This represents a huge win for those of us who have battled against this technology. It remains to be seen whether the UK Government will now decide to develop legislation or regulation to address the issues raised by our case – but it’s clear that if they do, it will offer further opportunity to scrutinise the huge legal and ethical problems with automatic facial recognition, and to consider the sort of society we want to be. These risks will only increase if governments search for more intrusive ways of tracking and monitoring us. Mass surveillance tools which discriminate have no place on our streets.

Ed Bridges is a civil rights campaigner from Cardiff