Analysis

Predictive Policing Goes to Court


September 5, 2017

The Brennan Center for Justice went to court on August 30, 2017, to challenge the New York Police Department’s (NYPD’s) refusal to produce crucial information about its use of predictive policing technologies. The hearing was the latest step in the Brennan Center’s ongoing Article 78 litigation against the police department to get information about the purchase, testing, and deployment of predictive policing software.

Black-box predictive algorithms are increasingly in use in the criminal justice system, from bail and bond calculations to sentencing decisions to determinations about where and when crimes might occur, and even who might commit them. These systems can be frustratingly opaque for anyone who wants to know how they work. The software is often sourced from private companies that fiercely protect their intellectual property from disclosure, and machine-learning algorithms can evolve continuously, meaning that outputs may change from one moment to the next without explanation and without any way to reverse engineer the decision process. Yet as these ubiquitous systems dictate more and more government decisions, transparency about their processes and effects is crucial. (Indeed, a recent bill introduced in the New York City Council would require just such transparency.)

In June 2016, the Brennan Center submitted a Freedom of Information Law (FOIL) request to the NYPD, seeking records relating to the acquisition, testing, and use of predictive policing technologies. Publicly available purchase records indicated that the City of New York had spent nearly $2.5 million on software from Palantir, a known predictive policing software vendor. Predictive policing software typically relies on historical policing data, which can replicate and entrench racially biased policing. Combined with a lack of transparency and oversight, these systems may violate individual constitutional rights and evade community efforts to hold police accountable for their actions. The Brennan Center filed the FOIL request to educate the public about the use of these systems and to promote a meaningful, well-informed public debate about their costs and benefits.

Just fifteen days after the Brennan Center filed the request, the department issued a blanket denial on the grounds that “such information, if disclosed, would reveal non-routine techniques and procedures.” The Brennan Center appealed this determination and received another cursory denial. Left with no other choice, the Brennan Center filed suit in December 2016; faced with legal action, the NYPD finally produced some responsive documents, showing that the department had built its own predictive policing system in-house. At the same time, the NYPD continued to ignore several significant parts of the request, including records describing the testing and use of the software, audit logs, and documents reflecting the NYPD’s policies and procedures for predictive policing. The Brennan Center therefore continued to pursue its legal action against the police department. As a show of good faith, it narrowed its request to exclude the predictive policing algorithm itself as well as the most recent six months’ worth of inputs into and outputs from the system.

At last Wednesday’s hearing, attorney Ellison (Nelly) Merkel of Quinn Emanuel Urquhart & Sullivan, LLP, arguing on behalf of the Brennan Center, detailed the NYPD’s “flippant approach” to FOIL disclosure. She noted that the NYPD provided only blanket denials until the Brennan Center filed suit, making it impossible to adequately assess the exemptions raised by the police department and forcing the Brennan Center to expend additional resources to obtain documents whose disclosure was required under the law. She urged the judge to compel the NYPD to supplement its disclosures to address the narrowed request for historical system data, and she emphasized the importance of obtaining governing policies, technology audits, and data about testing and past usage to shed light on the use, evaluation, accuracy, and impact of the systems. Merkel also noted the need to search the counterterrorism bureau for responsive documents: although the Domain Awareness System that houses predictive policing data grew out of the NYPD’s counterterrorism efforts, the NYPD had never searched that bureau, potentially excluding additional disclosable records.

In response, the NYPD’s attorney intimated that it is standard practice for the NYPD to disregard FOIL requests until the requester either gives up or files suit. She also defended the NYPD’s use of FOIL exemptions to deny both the request and the appeal in their entirety; the fact that the NYPD produced responsive documents immediately upon the filing of the lawsuit, however, strongly indicates that the exemptions were applied indiscriminately in the first instance. The NYPD’s lawyer also suggested that if historical data about inputs to and outputs from the algorithm were released, criminals could game the system and predict where police officers would be stationed. This claim is belied by the fact that the algorithm is constantly evolving, as the NYPD itself has represented, and its predictions change as new data emerges. Given the ongoing refinement of the model, historical information from even six months ago should be useless for replicating current predictions.

When it comes to FOIL, disclosure is the rule, not the exception. Citizens and watchdog organizations should not have to file lawsuits to get information about how law enforcement is allocating resources and policing the community. In the criminal justice system especially, predictive algorithms need to be carefully scrutinized to ensure that they are not entrenching systematized bias while laundering the evidence of it. Recent reporting suggests that the NYPD’s relationship with at least one predictive policing software vendor, Palantir, has soured in part because of high costs and data standardization issues. The information sought by the Brennan Center’s FOIL request would help the public evaluate whether predictive policing, in-house or outsourced, is a worthwhile use of police resources.

The case will be submitted on September 13, 2017, and we hope to have a ruling soon after. 

(Image: Flickr.com/Marco Catini)