You can’t hide from facial-recognition systems – or can you?
There’s about a 50-50 chance you are in one of the many facial-recognition databases now being used by police and government security agencies across the country. A study by the Georgetown Law Center on Privacy and Technology titled The Perpetual Line-up found that law enforcement's use of facial recognition affects more than 117 million adults in the country (roughly half the U.S. adult population). In nearly all cases, the systems have been implemented with no oversight by courts or anyone outside the law-enforcement agency itself.
Facial-recognition databases differ from fingerprint and DNA databases in one important way: the pictures that comprise the database of people’s faces were collected from state driver’s licenses issued to law-abiding citizens, while fingerprint and DNA samples were gleaned “almost exclusively” as part of criminal investigations, according to the Georgetown Law researchers.
Some police forces now use real-time recognition of faces from live surveillance video feeds, yet not one state has passed a law regulating law enforcement's use of facial-recognition technology. None of the agencies requires a court-ordered search warrant before searching a facial database, nor do they restrict such searches to investigations of serious crimes. Of the 52 agencies currently using facial-recognition databases, only one prohibits using them to track people exercising their right to political, religious, or other free speech.
No standards, no oversight, and no training of officers conducting the searches
Human eyewitnesses are notoriously prone to misidentifying people, and the same fallibility affects the police officers charged with operating the facial-recognition databases. Despite this, the Georgetown Law study found that there is no specific training for use of the databases, which leads to a wrong identification in about half of all cases. In only eight of the facial-recognition operations examined by the researchers were specialists employed to confirm the findings of the untrained police officers.
Another problem that hasn’t been addressed is the tendency of facial-recognition systems to misidentify African-Americans in particular, as a separate FBI report concluded. None of the systems in use provides any mechanism for identifying and removing racially biased errors. In most cases, police departments implemented the systems without disclosing their existence to the public, which makes the technology more susceptible to misuse. Moreover, only four of the 52 police departments surveyed have acceptable-use policies in place, and only one, the San Diego Association of Governments, received legislative approval for the practice.
The Georgetown Law researchers recommend that laws regulating the use of facial-recognition databases by law enforcement be enacted to ensure the systems are used only when police have reasonable suspicion of criminal conduct. They also call for after-the-fact investigative searches to be limited to felonies, and for mug shots to be the default photo databases rather than driver’s licenses or other ID photos. Likewise, searches of license and ID photos should require a court order upon showing of probable cause of identity theft or other serious crime.
Finally, the study authors call for laws banning the use of facial-recognition databases to track people for their political or religious beliefs, or because of their race or ethnicity. All the systems should be subject to public reporting and internal audits, according to the researchers. The report includes a Model Face Recognition Act for the U.S. Congress and state legislatures.
Market develops for products that fool the facial recognizers
If you’re concerned about being identified by a public facial scanner, it turns out you can thwart at least some of the systems without having to resort to wearing a ski mask. Researchers at Carnegie Mellon University created a pair of eyeglass frames that trick the recognition algorithms, and that you can print out yourself for only about 22 cents. Lisa Vaas writes about the glasses in a November 4, 2016, post on the Sophos Naked Security blog.
The Carnegie Mellon researchers claim the glasses not only prevent you from being identified, they also let you impersonate celebrities, as they explain in “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition” (pdf). Other products designed to block recognition, such as the Privacy Visor, cost up to $240. Those devices make it readily apparent you’re trying not to be recognized, while the Carnegie Mellon frames could pass for regular old glasses.
According to Vaas, a white male researcher wearing the frames was able to fool the facial-recognition system into thinking he was the white female actress Milla Jovovich 88 percent of the time, while a South Asian female imitated a Middle Eastern male at the same 88-percent rate. However, a Middle Eastern male managed to impersonate the white male actor Clive Owen only about 16 percent of the time.
The goal of the research was not to develop ways to trick facial-recognition systems, but rather to show how little effort it takes to render their recognition algorithms inaccurate. The researchers note that recognition systems are becoming more capable and can now identify people based on previous scans even when the subjects' faces aren’t visible.
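The core idea behind the Carnegie Mellon attack, as described in the paper, is to confine an adversarial perturbation to a printable, glasses-shaped region of the image. The sketch below is a toy illustration of that principle only: the "matcher" is an assumed linear scoring model with random weights standing in for a real recognition network, and none of this is the researchers' actual code.

```python
# Toy sketch of a mask-restricted adversarial perturbation (illustrative only;
# not the CMU researchers' implementation). Only pixels under a glasses-shaped
# mask are modified, nudged along the gradient that lowers the match score.
import numpy as np

rng = np.random.default_rng(0)
H, W = 32, 32                              # toy image resolution

# Stand-in "face matcher": score = sigmoid(w . x); higher means "match".
w = rng.normal(size=H * W)

def match_score(img):
    z = w @ img.ravel()
    return 1.0 / (1.0 + np.exp(-z))

# Glasses-shaped mask: only these pixels may change (a band across the eyes).
mask = np.zeros((H, W))
mask[10:14, 4:28] = 1.0

img = rng.uniform(0.4, 0.6, size=(H, W))   # stand-in face image in [0, 1]

# Gradient descent on the match score, restricted to the masked pixels.
perturbed = img.copy()
for _ in range(50):
    grad = w.reshape(H, W)                 # gradient direction of the score
    perturbed -= 0.05 * grad * mask        # step only inside the glasses mask
    perturbed = np.clip(perturbed, 0.0, 1.0)

print(f"score before: {match_score(img):.3f}")
print(f"score after:  {match_score(perturbed):.3f}")   # lower => "dodged"
```

Against a real deep network the gradient would come from backpropagation and the perturbation would additionally be constrained to colors a printer can reproduce, but the mask-restricted optimization loop is the same basic shape.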
Web services counter attempts to restrict their biometric data collection
Facebook, Google, and other web services gather vast amounts of personal information about their “users” without obtaining explicit consent, including extensive image databases containing hundreds of millions of photos. An Illinois law, the Biometric Information Privacy Act, imposes fines of $1,000 to $5,000 each time a person’s image is used without their permission. As Bloomberg Technology’s Joel Rosenblatt writes in an October 26, 2016, article, Facebook and Google have been sued under the Illinois law and face a potential judgment that even those behemoths would find painful to pay.
The legal hurdle the plaintiffs must overcome is to convince the court that the services' use of their photos constitutes a “concrete injury,” as per the ruling in Spokeo, Inc. v. Robins. In a nutshell, the damage must be something that costs the plaintiffs money, one way or another. In the case against Facebook, the plaintiff claims both a “property interest in the algorithms that constitute their digital identities,” according to Rosenblatt, and an “information injury” because Facebook failed to ask for consent to collect the plaintiff's “faceprints.”
However you slice it, facial-recognition technology is galloping along much faster than the laws intended to ensure that biometric identification isn’t misused by law-enforcement agencies and private companies alike. Maybe veils will be the new fashion statement of 2017.