Facial Recognition: The Ultimate Privacy Killer?

September 29, 2019 Topic: Technology Region: Americas Blog Brand: The Buzz Tags: Facial Recognition, Society, Security, Surveillance, Innovation

Or could it be a good thing?

by AEIdeas

Phenomenal advances have clearly been made in facial recognition software: internet platforms such as Facebook can now identify people in photos with uncanny accuracy. Facial recognition algorithms are now sophisticated enough to compare a single image against a reference photograph on file and identify individuals with close to the precision of fingerprints or DNA.
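As a rough illustration of the kind of comparison involved, here is a minimal sketch using the open-source Python library face_recognition, which encodes each face as a numeric vector and measures the distance between encodings. The file names are placeholders, and the 0.6 threshold is simply that library’s default; none of this reflects the proprietary systems used by platforms like Facebook.

```python
# Sketch: one-to-one face comparison with the open-source
# face_recognition library. File names and the 0.6 threshold are
# illustrative, not details of any system described in the article.
import face_recognition

# Load the reference photo (e.g., a passport-style image) and the probe photo.
reference_image = face_recognition.load_image_file("reference.jpg")
probe_image = face_recognition.load_image_file("probe.jpg")

# Encode each face as a 128-dimensional vector; assumes one face per image.
reference_encoding = face_recognition.face_encodings(reference_image)[0]
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# Euclidean distance between encodings; smaller means more similar.
distance = face_recognition.face_distance([reference_encoding], probe_encoding)[0]

# The library's default decision threshold is 0.6.
is_match = distance <= 0.6
print(f"distance={distance:.3f}, match={is_match}")
```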

The software is widely used for many official purposes. For example, automated immigration control points at international airports match a photograph of the person standing at the gate with the coded digital image in the passport being scanned. It is also increasingly being used for commercial purposes. For example, many top car brands, including Jaguar Land Rover, BMW, Subaru, Ford, Lexus, Tesla, Mercedes-Benz, and Volkswagen, are already using facial recognition for unlocking and starting cars, and automatically adjusting a vehicle’s settings (seats, mirror positions, radio stations, and even suspension stiffness) depending on the preferences of the identified driver.

Privacy vs. efficiency

On the one hand, these applications of the technology make life a little easier for the people concerned. No one wants to take longer than necessary to pass through passport control; not having to fiddle with keys or even a fob to enter and start a car is appealing; and I can never seem to get the driver’s seat and mirrors of my car back into my preferred positions after my son has borrowed it. The security of knowing that one’s car cannot be driven away by anyone not already enrolled in the software appeals to both car owners and insurance firms.

On the other hand, the increasing use of facial recognition software poses some significant privacy concerns. While we may willingly agree to its use for Facebook, passport control, or vehicles, what if the same software is used without our explicit consent or knowledge? For example, when “smart” cameras operating in real time (such as closed-circuit television cameras or cameras used with autonomous vehicles) are loaded with images of individuals of interest, who is informed when an individual is spotted in a camera’s range? All that is needed is a digital photo of a person, which need not even be recent, and access to the mechanism by which the data are loaded into the device.
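To make the scenario concrete, here is a minimal sketch of how such a “watchlist” camera could work in principle, again using the open-source face_recognition library together with OpenCV. The watchlist photo, the camera index, and the matching tolerance are all illustrative assumptions; this is not the software deployed by any police force discussed below.

```python
# Sketch: scanning live camera frames against a small "watchlist" of
# face encodings. The watchlist image and tolerance are illustrative;
# this is not the software used by any police force mentioned here.
import cv2
import face_recognition

# Build the watchlist from a single stored photo (which need not be recent).
watchlist_image = face_recognition.load_image_file("person_of_interest.jpg")
watchlist_encodings = face_recognition.face_encodings(watchlist_image)

video = cv2.VideoCapture(0)  # default camera
try:
    while True:
        ok, frame = video.read()
        if not ok:
            break
        # face_recognition expects RGB; OpenCV delivers BGR.
        rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        locations = face_recognition.face_locations(rgb_frame)
        encodings = face_recognition.face_encodings(rgb_frame, locations)
        for encoding in encodings:
            matches = face_recognition.compare_faces(
                watchlist_encodings, encoding, tolerance=0.6
            )
            if any(matches):
                # Who is informed when this fires? That is the policy question.
                print("Watchlist match in frame")
finally:
    video.release()
```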

A UK case: Use and court challenge

Such uses of facial recognition software in public places have occurred within three UK police forces (the London Metropolitan Police, Leicestershire, and South Wales) since 2015. The cameras have been used to scan faces in shopping centers, football crowds, and music events such as the Notting Hill Carnival.

Last week, in a landmark case, the first in the world to address the use of automated facial recognition (AFR), the High Court sitting in Cardiff, South Wales, ruled that police use of the technology in its jurisdiction was not unlawful.

Ed Bridges, a former Liberal Democrat councillor from the city, brought the case, supported by the human rights organisation Liberty, after noticing the cameras when he went out to buy a lunchtime sandwich. Bridges said he was distressed by police use of AFR, which he believes captured his image while he was out shopping and later at a peaceful protest against the arms trade. During the three-day hearing in May, his lawyers alleged that the surveillance operation breached data protection and equality laws. He plans to appeal against the judgment.

The judges found that although AFR amounted to interference with privacy rights, there was a lawful basis for it, and the legal framework used by the police was proportionate. Dismissing the challenge, Lord Justice Charles Haddon-Cave, sitting with Mr. Justice Swift, said: “We are satisfied both that the current legal regime is adequate to ensure appropriate and non-arbitrary use of AFR Locate, and that South Wales police’s use to date of AFR Locate has been consistent with the requirements of the Human Rights Act and the data protection legislation.”

US implications

The implications of the British court’s finding are significant for the United States, amid calls for politicians to implement privacy laws like those in the UK as a means of giving users greater security against the use of their data for purposes they have not explicitly agreed to. The British laws give effect to the European Union’s General Data Protection Regulation, which places a heavier burden on companies to protect their users’ data than US law does. Yet it appears that even these provisions have not been sufficient to prevent law enforcement from using the technology without the knowledge or consent of the affected individuals.

It would be interesting to see whether the same decision would prevail if a private firm such as Facebook or Tesla were using this technology. Watch this space: a case is sure to emerge before long.

This article originally appeared at the American Enterprise Institute.

Image: Reuters.