Face it: This is risky tech

We need to put strong controls on face-recognition technology

Cross-posted from the New York Daily News.

As you read this, facial recognition cameras are secretly capturing photos of every driver who crosses key entry points into Manhattan. The technology is part of a pilot program launched by Gov. Cuomo, a test he hopes to expand to every MTA facility and crossing. According to the governor, the cameras will compare the photos against unspecified databases in order to identify suspected criminals and terrorists.

Not so fast. Less than a week after the governor publicized the expansion of this program at a press conference, the ACLU revealed that Amazon’s facial recognition software often could not distinguish members of Congress from individuals in a police database of criminal suspects.

In fact, facial recognition technology works so poorly that Axon, the leading provider of police body cameras, recently told shareholders that it has no plans to incorporate it into its products, warning of “technical failures with disastrous consequences.” And Microsoft is so worried about the impact of facial recognition that it has taken the unusual step of calling for government regulation.

The truth is that this is unproven technology that threatens the liberty of every person caught up in its virtual dragnet. The people of New York must demand that the governor explain what the technology is being used for and how he intends to safeguard against its well-known risks.

Study after study shows that facial recognition technology is error-prone, especially when it comes to people of color. In the ACLU’s test, nearly 40 percent of the members of Congress misidentified as potential criminals by Amazon’s Rekognition software were people of color, even though they make up only 20 percent of Congress.

Previously, an MIT researcher test-drove facial recognition systems from Microsoft, IBM and a Chinese company called Megvii to see how well they could guess the gender of people with different skin tones. The systems misclassified the gender of as many as one in three women of color.

The MIT and ACLU studies were conducted with photographs of people sitting still. Systems tasked with analyzing fuzzy photos of faces behind the windshields of moving vehicles will likely perform even worse. The governor hasn’t told us which databases these photos will be compared against, but the databases that claim to list gang members or potential terrorists are notoriously unreliable.

And what will the police do when they receive a “match”? If used in the same way as license plate readers, as the governor has suggested, the system will alert a nearby police officer to initiate a stop or an arrest. That could cause thousands of innocent New Yorkers, most of them black or brown, to be needlessly pulled over.

Traffic stops are already especially perilous for people of color; now a biased camera will reinforce an officer’s dread that a suspect is dangerous. Suspicion and fear can quickly make a bad situation worse.

For now, according to an MTA spokesman, data generated from the program will not be shared with “anyone outside the pilot.” But we just don’t know who will have access to a facial recognition database once it’s fully in operation.

Then there are the consequences for immigrants. At a time when President Trump’s xenophobic policies are scaring families from going to court or reporting crimes, New Yorkers should not have to live in fear that driving across the Triborough could put them at risk of being picked up by ICE agents.

The governor needs to put in place concrete safeguards against abuse. And our elected representatives must slam on the brakes, demanding transparency and accountability before the technology spreads further.
