
Predictive Policing Goes to Court

The Brennan Center for Justice went to court on August 30, 2017, to challenge the NYPD’s refusal to produce crucial information about its use of predictive policing technologies.

September 5, 2017

The Brennan Center for Justice went to court on August 30, 2017, to challenge the New York Police Department's (NYPD's) refusal to produce crucial information about its use of predictive policing technologies. The hearing was the latest step in the Brennan Center's ongoing Article 78 litigation against the police department to obtain information about the purchase, testing, and deployment of predictive policing software.

Black-box predictive algorithms are increasingly in use in the criminal justice system, from bail and bond calculations to sentencing decisions to determinations about where and when crimes might occur and even who might commit them. These systems can be frustratingly opaque for anyone who wants to know how they work. The software is often sourced from private companies that fiercely protect their intellectual property from disclosure, and machine-learning algorithms can constantly evolve, meaning that outputs can change from one moment to the next without any explanation or ability to reverse engineer the decision process. Yet as these ubiquitous systems dictate more and more aspects of government, transparency as to their processes and effects is crucial. (Indeed, a recent bill introduced in the New York City Council would require just such transparency.)

In June 2016, the Brennan Center submitted a Freedom of Information Law (FOIL) request to the NYPD, seeking records relating to the acquisition, testing, and use of predictive policing technologies. Publicly available purchase records indicated that the City of New York had spent nearly $2.5 million on software from Palantir, a known predictive policing software vendor. Predictive policing software typically relies on historical policing data, which can replicate and entrench racially biased policing. Combined with a lack of transparency and oversight, these systems may violate individual constitutional rights and evade community efforts to hold police accountable for their actions. The Brennan Center filed the FOIL request in the interest of educating the public about the use of these systems and promoting a meaningful and well-informed public debate about their costs and benefits.
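The feedback-loop concern can be made concrete with a toy simulation. The sketch below is purely hypothetical and assumes nothing about the NYPD's actual system (whose design is precisely what the FOIL request seeks to uncover): four precincts have identical underlying crime, one of them starts with an inflated count of recorded incidents, patrols are allocated in proportion to that recorded history, and heavier patrol presence generates more recorded incidents, so the initial disparity compounds over time.

```python
# Toy illustration of the feedback loop described above. This is NOT the
# NYPD's algorithm (which is not public); every number here is invented.
PRECINCTS = ["A", "B", "C", "D"]
TRUE_CRIME = {p: 10 for p in PRECINCTS}          # identical underlying crime everywhere
recorded = {"A": 30, "B": 10, "C": 10, "D": 10}  # precinct A was historically over-policed
TOTAL_PATROLS = 40

for year in range(1, 6):
    total_recorded = sum(recorded.values())
    # Allocate patrols in proportion to historically *recorded* incidents.
    patrols = {p: TOTAL_PATROLS * recorded[p] / total_recorded for p in PRECINCTS}
    # More patrols in a precinct -> more of its (identical) crime gets recorded.
    for p in PRECINCTS:
        detection_rate = min(1.0, patrols[p] / 20)   # crude stand-in for enforcement intensity
        recorded[p] += int(TRUE_CRIME[p] * detection_rate)
    print(f"year {year}:", {p: round(patrols[p], 1) for p in PRECINCTS})
```

Even though every precinct in this toy model has the same true crime rate, precinct A's head start in the data keeps drawing a disproportionate share of patrols, and the extra enforcement keeps regenerating the data that justifies it. Auditing for exactly this kind of dynamic is one reason the request seeks testing records and audit logs.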

Just fifteen days after the Brennan Center filed the request, the department issued a blanket denial on the grounds that “such information, if disclosed, would reveal non-routine techniques and procedures.” The Brennan Center appealed this determination and received another cursory denial. Left with no other choice, the Brennan Center filed suit in December 2016; faced with legal action, the NYPD finally produced some responsive documents, showing that the department had built its own predictive policing system in-house. At the same time, the NYPD continued to ignore several significant parts of the request, including requests for records describing testing and utilization of the software; audit logs; and documents reflecting the NYPD’s policies and procedures for predictive policing. The Brennan Center thus continued to pursue its legal action against the police department. As a show of good faith, the Brennan Center narrowed its request to exclude the predictive policing algorithm itself as well as the most recent six months’ worth of inputs into and outputs from the system.

At last Wednesday’s hearing, attorney Ellison (Nelly) Merkel of Quinn Emanuel Urquhart & Sullivan, LLP, on behalf of the Brennan Center, detailed the NYPD’s “flippant approach” to FOIL disclosure. She noted that the NYPD provided only blanket denials until the Brennan Center filed suit, making it impossible to adequately assess the exemptions raised by the police department and forcing the Brennan Center to expend additional resources to obtain documents whose disclosure was required under the law. She urged the judge to compel the NYPD to supplement its disclosures to address the narrowed request for historical system data, and emphasized the importance of obtaining governing policies, technology audits, and data about testing and past usage, in order to shed light on the use, evaluation, accuracy, and impact of the systems. Merkel also noted the need to search the counterterrorism bureau for responsive documents; although the Domain Awareness System that houses predictive policing data was born out of the NYPD’s counterterrorism efforts, the NYPD had not looked to see whether responsive documents existed within that bureau, potentially excluding additional disclosable items.

In response, the NYPD’s attorney intimated that it is standard practice for the NYPD to disregard FOIL requests until the requester either gives up or files suit. She also defended the NYPD’s use of FOIL exemptions to deny both the request and the appeal in their entirety; the fact that the NYPD produced responsive documents immediately upon the filing of the lawsuit, however, strongly indicates that the exemptions were applied indiscriminately in the first instance. The NYPD’s lawyer also suggested that if historical data about inputs to and outputs from the algorithm were released, criminals could game the system and predict where police officers would be stationed. This claim is belied by the fact that the algorithm is regularly evolving, as the NYPD itself represented, and predictions change as new data emerges. The ongoing refinement of the model means that historical information from even six months ago should be obsolete as far as replicating current results.
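The staleness point can also be illustrated with a hypothetical model. The sketch below assumes a simple rolling-window hotspot ranker (again, not the NYPD's actual system): each period, map cells are re-ranked on the most recent 90 days of incidents, so once the underlying pattern shifts, the cells flagged six months ago barely overlap with the cells flagged today, and older outputs reveal little about current deployments.

```python
# Hypothetical rolling-window hotspot model -- not the NYPD's system -- used
# only to show why outputs go stale once the model is refit on newer data.
import random
from collections import Counter

random.seed(1)
CELLS = list(range(100))   # 100 map cells
WINDOW_DAYS = 90

def simulate_incidents(days, hot_cells, start_day=0):
    """Generate (day, cell) incident records, concentrated in the current hot cells."""
    events = []
    for day in range(start_day, start_day + days):
        for _ in range(20):
            cell = random.choice(hot_cells) if random.random() < 0.6 else random.choice(CELLS)
            events.append((day, cell))
    return events

def flagged_cells(events, as_of_day, k=10):
    """Rank cells by incident count over the trailing 90-day window."""
    window = [cell for day, cell in events if as_of_day - WINDOW_DAYS <= day <= as_of_day]
    return [cell for cell, _ in Counter(window).most_common(k)]

# Crime patterns drift: different cells are "hot" in each half of the year.
events = simulate_incidents(180, hot_cells=[3, 17, 42, 55])
events += simulate_incidents(180, hot_cells=[8, 61, 77, 90], start_day=180)

old_hotspots = flagged_cells(events, as_of_day=180)
new_hotspots = flagged_cells(events, as_of_day=360)
print("flagged six months ago:", old_hotspots)
print("flagged today:         ", new_hotspots)
print("overlap:", len(set(old_hotspots) & set(new_hotspots)), "of 10")
```

On this admittedly simplified logic, releasing inputs and outputs older than six months, as the narrowed request proposes, would not hand anyone a map of where officers will be stationed now.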

When it comes to FOIL, disclosure is the rule, not the exception. Citizens and watchdog organizations should not have to file lawsuits to get information about how law enforcement is allocating resources and policing the community. In the criminal justice system especially, predictive algorithms need to be carefully scrutinized to ensure that they are not entrenching systematized bias while laundering the evidence. Recent reporting suggests that the NYPD’s relationship with at least one predictive policing software vendor, Palantir, has soured in part because of high costs and data standardization issues. The information sought by the Brennan Center’s FOIL request would help the public evaluate if predictive policing – whether in-house or outsourced – is a worthwhile use of police resources.

The case will be submitted on September 13, 2017, and we hope to have a ruling soon after.

(Image: Flickr.com/ Marco Catini)