Britain’s biometrics watchdogs have warned that national oversight of AI-powered face scanning to catch criminals is lagging far behind the technology’s rapid growth.
With the Metropolitan police almost doubling the number of faces they scan in London over the past 12 months, and retailers across the UK increasingly adopting the technology, Prof William Webster, the biometrics commissioner for England and Wales, said the slow pace of legislation was "trying to catch up with the real world" and that "the horse had gone before the cart".
Dr Brian Plastow, who holds the same role in Scotland, warned the technology was “nowhere near as effective as the police claim it is” and said there was a “patchwork legal framework” throughout the UK. He said in England and Wales, police were “really just marking their own homework”.
The watchdogs said new laws were needed to govern when and how police forces used live facial recognition technology, with a new regulator to clamp down on misuse. Several bodies have oversight of the technology, including the Information Commissioner’s Office (ICO) and the Equality and Human Rights Commission.
The Home Office is considering a new legal framework for the technology as it also plans to introduce nationally what it calls “the biggest breakthrough for catching criminals since DNA matching”.
Members of the public wrongly labelled as suspected criminals by shops using AI cameras said there was no accountability or recourse to complain. They said the system had left them feeling “guilty until proven innocent”. They described the ICO, which is responsible for monitoring facial recognition tech and the biometric data it uses, as “toothless” and unresponsive.
British police forces and high street retailers claim the technology makes streets safer, but others criticise it as Big Brother-style mass surveillance, with risks for civil liberties and data privacy. So far this year the Met has scanned more than 1.7 million faces in London hunting for suspects on watchlists, up 87% on the same period in 2025.
It has also emerged:
● An independent audit of the Met’s use of facial recognition technology (FRT) has been indefinitely postponed after the police requested delays.
● Polling shows 57% of people believe the systems are “another step towards turning the UK into a surveillance society”.
● A whistleblower claimed shop-based face-scanning systems had sometimes been misused by shop or security staff “maliciously” adding members of the public to watchlists.
Webster said: "We could be talking three years, at a minimum, before regulation is in place and active. And we already have a rollout of live face recognition in a dozen different police forces.

"The technology is becoming cheaper and cheaper, and in time we will see it everywhere, including in the static surveillance camera network."
In February, the Guardian revealed how police arrested a man for a burglary in a city he had never visited after face-scanning software deployed across the UK confused him with another person of south Asian heritage. Several other people have told the Guardian about the impact of being misidentified by face-scanning software increasingly used by retailers to fight shoplifting.
Further concern about limited scrutiny of the fast-developing technology has been caused by the postponement of the ICO’s planned audit of the Met’s use of AI-powered face scanning to find wanted criminals. The ICO, which is the UK’s data regulator, had scheduled the investigation for October last year. But the Met asked for it to be pushed back and it is no longer certain it will go ahead, according to emails obtained by the Guardian under the Freedom of Information Act.
They show the Met cited three reasons for delay: the need to handle a legal challenge to its face-scanning policy, on which a court ruled in the force's favour last week; officers taking Christmas leave; and the burden of policing new year festivities.