London’s Metropolitan Police had very little luck with its automated facial recognition (AFR) technology at this year’s Notting Hill Carnival. Billed as Europe’s largest street festival, the free celebration of the city’s Caribbean communities draws 2 million people over the course of the long weekend. For the second year in a row, police deployed AFR technology, and it proved just as ineffective as it was last year. According to a statement from London Police Commander Dave Musker on the 2016 carnival, out of the 454 people arrested, not a single one was tagged by the facial recognition technology.
This year, Silkie Carlo, the technology policy officer for civil rights group Liberty, witnessed the technology in use firsthand. She was told that the algorithm had produced one correct facial recognition match across the four days of its operation. The individual was identified as having an arrest warrant for a rioting offense. However, it would turn out that the individual had already been arrested between the construction of the watch list and the carnival and was no longer wanted. Perhaps most startling is the inaccuracy of the AFR technology. Carlo says, “I watched the facial recognition screen in action for less than 10 minutes. In that short time, I witnessed the algorithm produce two ‘matches’ – both immediately obvious, to the human eye, as false positives. In fact both alerts had matched innocent women with wanted men.”
As is always the case with facial recognition, privacy concerns must be taken into consideration. Carlo asks, “If we tolerated facial recognition at Carnival, what would come next? Where would the next checkpoint be? How far would the next ‘watch list’ be expanded? How long would it be before facial recognition streams are correlated?”