
Oversight of Face Recognition Is Needed to Avoid New Era of ‘Digital Stop and Frisk’

The NYPD’s misuse of facial recognition shows the need for a law providing for more transparency and community control when it comes to surveillance technologies.

May 31, 2019

Mass surveillance is not the fictional dystopia of Orwell, and it is not confined to China — it is today's New York City. In lower Manhattan, clusters of cameras and sensors blanket nearly every city block, and New York Police Department vans topped with three-story surveillance cranes tower over minority communities across the five boroughs. The NYPD's Domain Awareness System alone links at least 9,000 cameras and 500 license plate readers. The message is clear: you are being watched. Yet the NYPD's surveillance dragnet operates in secret, without accountability or oversight. Ignoring this reality any longer is unconscionable — but there is a way to fix it.

Cities across the country are taking steps to restore community control over their police departments' use of invasive new technologies. These regulations require police departments to disclose the surveillance tools they use, and many also require city council approval before police can acquire new technologies, along with clear policies to prevent abuse. Last week, San Francisco joined their ranks and went a step further, becoming the first city to ban the use of facial recognition by city agencies.

Meanwhile, New York City's Public Oversight of Surveillance Technology Act (the POST Act), which includes common-sense transparency requirements, continues to stall. The need for such a law is especially acute in New York, where the police have an unrivaled arsenal of surveillance tools: among other things, the NYPD uses software to monitor social media and deploys Stingray devices to track cellphones. As the New York Times editorial board wrote, New York City "isn't even close" to San Francisco in addressing the problem of secret surveillance.

In the meantime, an unregulated NYPD continues to show why it cannot be trusted to police itself. A new report from Georgetown Law's Center on Privacy & Technology found that the NYPD is misusing facial recognition. In one example, the NYPD ran a photograph of the actor Woody Harrelson through its system to find a beer-theft suspect who resembled him. In another, NYPD training material showed officers how to produce a usable image of a suspect by copying and pasting facial features from another person's photo, found online by searching for "Black Male Model." When facial recognition systems are fed flawed face data, flawed answers come out.

While the NYPD says it uses facial recognition only as an investigative lead, its guide does not specify the additional steps that must be taken to establish probable cause to arrest a suspect. And yet the Georgetown study found that the NYPD apprehended one suspect based solely on a facial recognition match and placed him in a lineup, and arrested another after an eyewitness merely confirmed that the potential match generated by facial recognition resembled the suspect.

The battle to understand the NYPD's use of surveillance tools isn't new. For years, the department fought public records requests seeking to show how it uses Stingray cellphone tracking devices, including whether they were deployed against Black Lives Matter protesters. We shouldn't have to rely on reports born of years of combative litigation to learn basic information about how facial recognition is being used. The POST Act would require the NYPD to issue an impact and use policy explaining how each of its surveillance technologies works and outlining the steps taken to protect the privacy of New Yorkers.

Study after study shows that facial recognition systems are riddled with bias and unacceptable error rates, and these failures are most pronounced when the systems analyze the faces of people of color. Massachusetts Institute of Technology researcher Joy Buolamwini's examination of leading facial recognition systems found that they misidentified as many as one in three women of color. In another test, the ACLU of Northern California found that Amazon's facial recognition software falsely matched 28 members of Congress with individuals in a database of mugshots, and nearly 40 percent of those false matches were people of color.

This means that NYPD officers relying on facial recognition are more likely to wrongly detain people of color. Without oversight, these error rates could usher in a new era of digital stop and frisk.

This isn't a problem that can be solved by NYPD assurances — we need City Council action. Donovan Richards, chair of the City Council's Public Safety Committee, and Speaker Corey Johnson should schedule a new hearing on the POST Act and lead the council's efforts to pass it. Any further delay betrays the trust New Yorkers place in the council to protect their rights and hold the police accountable.

(Image: Douglas Sacha/Getty)