
Face it: This is risky tech

We need to put strong controls on face-recognition technology

Cross-posted from the New York Daily News.

As you read this, facial recognition cameras are secretly capturing photos of every driver who crosses key entry points into Manhattan. The technology is part of a pilot program launched by Gov. Cuomo, a test he hopes to expand to every MTA facility and crossing. According to the governor, the cameras will compare the photos against unspecified databases in order to identify suspected criminals and terrorists.

Not so fast. Less than a week after the governor publicized the expansion of this program at a press conference, the ACLU revealed that Amazon’s facial recognition software often failed to accurately distinguish between members of Congress and individuals in a police database of criminal suspects.

In fact, facial recognition technology works so poorly that Axon, the leading provider of police body cameras, recently told shareholders that it has no plans to incorporate it into its products, warning of “technical failures with disastrous consequences.” And Microsoft is so worried about the impact of facial recognition that it has taken the unusual step of calling for government regulation.

The truth is that this is unproven technology that threatens the liberty of every person caught up in its virtual dragnet. The people of New York must demand that the governor explain what the technology is being used for and how he intends to safeguard against its well-known risks.

Study after study shows that facial recognition technology is error-prone, especially when it comes to people of color. In the ACLU’s test, nearly 40 percent of the members of Congress misidentified as potential criminals by Amazon’s Rekognition software were people of color, even though they make up only 20 percent of Congress.

Previously, an MIT researcher test-drove facial-recognition systems from Microsoft, IBM and a Chinese company called Megvii to see how well they could guess the gender of people with different skin tones. The systems incorrectly identified the gender of one in three women of color.

The MIT and ACLU studies were conducted with photographs of people sitting still. Systems tasked with analyzing fuzzy photos from moving vehicles, with faces behind windshields, will likely perform even worse. The governor hasn’t told us which databases these photos will be compared against, but those that claim to list gang members or potential terrorists are unreliable.

And what will the police do when they receive a “match”? If used in the same way as license plate readers, as the governor has suggested, the system will alert a nearby police officer to initiate a stop or an arrest. That could cause thousands of innocent New Yorkers, most of them black or brown, to be needlessly pulled over.

Traffic stops are already especially perilous for people of color; now a biased camera will reinforce an officer’s dread that a suspect is dangerous. Suspicion and fear can quickly make a bad situation worse.

For now, according to an MTA spokesman, data generated from the program will not be shared with “anyone outside the pilot.” But we just don’t know who will have access to a facial recognition database once it’s fully in operation.

Then there are the consequences for immigrants. At a time when President Trump’s xenophobic policies are scaring families from going to court or reporting crimes, New Yorkers should not have to live in fear that driving across the Triborough could put them at risk of being picked up by ICE agents.

The governor needs to put in place concrete safeguards against abuse. And our elected representatives must slam on the brakes, demanding transparency and accountability, before the technology spreads further.
