Explainer

Predictive Policing Explained

Attempts to forecast crime with algorithmic techniques could reinforce existing racial biases in the criminal justice system.

Published: April 1, 2020

Police departments in some of the largest U.S. cities have been experimenting with predictive policing as a way to forecast criminal activity. Predictive policing uses computer systems to analyze large sets of data, including historical crime data, to help decide where to deploy police or to identify individuals who are purportedly more likely to commit or be a victim of a crime.

Proponents argue that predictive policing can forecast crimes more accurately and effectively than traditional police methods. However, critics have raised concerns about transparency and accountability. Additionally, while big data companies claim that their technologies can help remove bias from police decision-making, algorithms relying on historical data risk reproducing those very biases.

Predictive policing is just one of a number of ways police departments in the United States have incorporated big data methods into their work in the last two decades. Others include adopting surveillance technologies such as facial recognition and social media monitoring. These developments have not always been accompanied by adequate safeguards.

What is predictive policing?

Predictive policing involves using algorithms to analyze massive amounts of information in order to predict and help prevent potential future crimes.

Place-based predictive policing, the most widely practiced method, typically uses preexisting crime data to identify places and times that carry a high risk of crime. Person-based predictive policing, on the other hand, attempts to identify individuals or groups who are likely to commit a crime — or to be a victim of one — by analyzing risk factors such as past arrests or victimization patterns.
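
To make the place-based approach concrete, the sketch below shows in simplified form how a hot-spot system might rank parts of a city: historical incidents are binned into coarse grid cells and scored so that recent incidents count more than older ones. The incident records, grid size, and weighting rule are all illustrative assumptions, not the workings of any actual vendor’s product.

```python
# Illustrative place-based "hot spot" scoring. Not any vendor's actual
# algorithm: the incident records, grid size, and recency weighting are
# assumptions chosen only to show the general approach.
from collections import Counter
from datetime import date

# Hypothetical historical incident records: (latitude, longitude, date)
incidents = [
    (40.7128, -74.0060, date(2019, 11, 3)),
    (40.7130, -74.0058, date(2019, 12, 14)),
    (40.7306, -73.9352, date(2020, 1, 21)),
]

CELL_SIZE = 0.005  # roughly a few city blocks per grid cell


def cell_of(lat, lon):
    """Map a coordinate onto a coarse grid cell."""
    return (int(lat / CELL_SIZE), int(lon / CELL_SIZE))


def hot_spots(records, as_of, half_life_days=60, top_k=5):
    """Rank grid cells by a recency-weighted count of past incidents."""
    scores = Counter()
    for lat, lon, when in records:
        age_days = (as_of - when).days
        # Older incidents count for less; recent ones dominate the score.
        scores[cell_of(lat, lon)] += 0.5 ** (age_days / half_life_days)
    return scores.most_common(top_k)


# The two nearby incidents fall in the same cell, so that cell ranks first.
print(hot_spots(incidents, as_of=date(2020, 2, 1)))
```

Even in this toy version, the central feature of the approach is visible: the rankings are driven entirely by whatever was recorded in past data.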

Proponents of predictive policing argue that computer algorithms can predict future crimes more accurately and objectively than police officers relying on their instincts alone. Some also argue that predictive policing can provide cost savings for police departments by improving the efficiency of their crime-reduction efforts.

Critics, on the other hand, warn about a lack of transparency from agencies that administer predictive policing programs. They also point to a number of civil rights and civil liberties concerns, including the possibility that algorithms could reinforce racial biases in the criminal justice system. These concerns, combined with the findings of independent audits, have led major police departments, including those in Los Angeles and Chicago, to phase out or significantly scale back their predictive policing programs.

What are notable examples of predictive policing projects?

Predictive policing tools are mainly deployed by municipal police departments, though private vendors and federal agencies play major roles in their implementation.

One of the earliest adopters was the Los Angeles Police Department (LAPD), which started working with federal agencies in 2008 to explore predictive policing approaches. Since then, the LAPD has implemented a variety of predictive policing programs, including LASER, which identifies areas where gun violence is thought likely to occur, and PredPol, which calculates “hot spots” with a high likelihood of property-related crimes. Both programs were funded by the federal Bureau of Justice Assistance. (LASER was shut down in 2019 after the LAPD’s inspector general released an internal audit finding significant problems with the program, including inconsistencies in how individuals were selected and kept in the system. Some police departments have also discontinued their PredPol programs.)

The New York Police Department (NYPD), the largest police force in the United States, started testing predictive policing software as early as 2012. A series of documents released by the department in 2018 after the Brennan Center filed a lawsuit identified three firms — Azavea, KeyStats, and PredPol — that were involved in an NYPD predictive policing trial. Ultimately, the NYPD developed its own in-house predictive policing algorithms and started to use them in 2013. According to a 2017 paper by department staff, the NYPD created predictive algorithms for several crime categories, including shootings, burglaries, felony assaults, grand larcenies, grand larcenies of motor vehicles, and robberies. Those algorithms are used to help assign officers to monitor specific areas. While the NYPD has described the information that is fed into the algorithms — complaints for seven major crime categories, shooting incidents, and 911 calls for shots fired — it has not disclosed the data sets in response to a public records request from the Brennan Center.

The Chicago Police Department ran one of the biggest person-based predictive policing programs in the United States. First piloted in 2012, the program, called the “heat list” or “strategic subjects list,” created a list of people it considered most likely to commit gun violence or to be a victim of it. The algorithm, developed by researchers at the Illinois Institute of Technology, was inspired by research out of Yale University that argued that epidemiological models used to trace the spread of disease can be used to understand gun violence. Chicago police frequently touted the program as key to their strategy for combating violent crime.
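
The general idea can be sketched in simplified code: treat exposure to gun violence as something that spreads across a social network, and propagate a risk score outward from people already recorded in shootings to their associates. The network, seed list, and decay factor below are invented for illustration; the actual Strategic Subjects List model was never fully made public, and this sketch does not describe it.

```python
# A stylized sketch of the general "contagion" idea behind person-based
# risk scoring: risk spreads from people involved in gun violence to
# their associates (here, hypothetical co-arrest links). This is not the
# Strategic Subjects List model, whose details were never fully published.
from collections import defaultdict

# Hypothetical co-arrest network: person -> set of associates
network = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A"},
    "D": {"B"},
}

# People recorded as victims or offenders in past shootings (assumed data)
seeds = {"A"}


def contagion_scores(graph, seeds, spread=0.5, rounds=2):
    """Propagate risk outward from seed individuals, decaying each hop."""
    scores = defaultdict(float)
    for s in seeds:
        scores[s] = 1.0
    for _ in range(rounds):
        updates = defaultdict(float)
        for person, score in scores.items():
            for neighbor in graph.get(person, ()):
                # Each hop passes on only a fraction of the current score.
                updates[neighbor] = max(updates[neighbor], score * spread)
        for person, score in updates.items():
            scores[person] = max(scores[person], score)
    return dict(scores)


# Associates of "A" receive elevated scores even with no record of their own.
print(contagion_scores(network, seeds))
```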

However, an analysis of an early version of the program by the RAND Corporation found it was ineffective, and a legal battle revealed that the list, far from being narrowly targeted, included every single person arrested or fingerprinted in Chicago since 2013. Civil rights groups had also criticized the program for targeting communities of color, and a report by Chicago’s Office of the Inspector General found that it relied too heavily on arrest records to identify risk, even when an arrest led to no conviction or was not followed by further arrests. The program was ultimately shelved in January 2020.

Why are there transparency concerns?

Some of the skepticism around predictive policing programs has less to do with specific technologies than with the lack of transparency from the agencies that administer them — both in terms of what kinds of data are analyzed and how the departments use the predictions. Major details about predictive policing in Los Angeles, for example, emerged only after years of activism demanding more information from the LAPD about the nature of the programs’ operations.

Transparency concerns have also surrounded the NYPD’s predictive policing efforts. The Brennan Center had to sue to obtain documents it had requested under the Freedom of Information Law; only after an expensive, multiyear legal battle did the department disclose some documentation about its use of in-house algorithms and predictive policing software. Numerous concerns remain, however. The NYPD claims not to use enforcement data, such as arrest data, for predictive policing purposes. But because the department remains reluctant to produce documentation to back up that claim, there is ultimately still little transparency about the source of the data sets used as inputs for the NYPD’s algorithms.

There is also a shortage of information about how crime predictions are ultimately used — a problem exacerbated by the fact that the NYPD does not keep audit logs of who creates or accesses predictions and does not save the predictions it generates. This limits the amount of available information on the department’s use of predictive policing and makes it difficult for independent auditors or policymakers to properly evaluate these tools, including whether predictive policing is reinforcing historical over-policing of communities of color and whether there is a meaningful correlation between police deployment to hot spots and crime reduction.
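
For context, the sketch below shows the kind of record an audit log for a prediction system could retain: who generated a forecast, with what data, and how it was used. The schema is hypothetical, offered only to illustrate the sort of information independent auditors would need; it does not describe any department’s actual system.

```python
# A hypothetical sketch of what an audit log entry for a prediction
# system could retain; no department's actual schema is implied.
# Saving entries like this is what would let auditors check how
# predictions were generated and acted on.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class PredictionAuditRecord:
    prediction_id: str        # unique identifier for the forecast
    generated_at: datetime    # when the forecast was produced
    model_version: str        # which algorithm or version produced it
    input_datasets: list      # e.g., complaint data, 911 calls
    predicted_area: str       # the hot spot or sector flagged
    accessed_by: list = field(default_factory=list)            # who viewed it
    resulting_deployments: list = field(default_factory=list)  # patrols assigned in response


# Example entry (all values invented for illustration)
record = PredictionAuditRecord(
    prediction_id="example-001",
    generated_at=datetime(2020, 2, 1, 6, 0),
    model_version="hypothetical-v1",
    input_datasets=["complaint reports", "911 shots-fired calls"],
    predicted_area="hypothetical sector 14B",
)
print(record)
```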

Why are there constitutional concerns?

Some legal experts argue that predictive policing systems could threaten rights protected by the Fourth Amendment, which requires “reasonable suspicion” before a police officer can stop someone — a legal standard that helps protect individuals against “unreasonable searches and seizures” by the police. Predictive analytics tools may make it easier for police to claim that individuals meet the reasonable suspicion standard, ultimately justifying more stops.

Additionally, civil rights organizations, researchers, advocates from overly policed communities, and others have expressed concerns that using algorithmic techniques to forecast crime, particularly by relying on historical police data, could perpetuate existing racial biases in the criminal justice system. A 2019 study by the AI Now Institute, for example, describes how some police departments rely on “dirty data” — or data that is “derived from or influenced by corrupt, biased, and unlawful practices,” including both discriminatory policing and manipulation of crime statistics — to inform their predictive policing systems. Relying on historical crime data can replicate biased police practices and reinforce over-policing of communities of color, while manipulating crime numbers to meet quotas or produce ambitious crime reduction results can give rise to more policing in the neighborhoods in which those statistics are concentrated.
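
A deliberately simplified simulation can make that feedback loop concrete. In the sketch below, two neighborhoods have the same underlying level of crime, but one starts with more recorded incidents because it was policed more heavily in the past. When patrols are then allocated in proportion to recorded incidents, the initial disparity never corrects itself, because the data the system learns from is generated by its own deployments. Every number is an assumption chosen for illustration.

```python
# A deliberately simplified simulation of the feedback loop described
# above. All numbers are assumptions: two neighborhoods with the SAME
# underlying crime rate, but neighborhood A starts with more recorded
# incidents because it was policed more heavily in the past.
recorded = {"A": 120, "B": 60}   # historical recorded incidents (biased)
TRUE_RATE = 100                  # identical underlying crime in both
DETECTION_PER_PATROL = 0.02      # fraction of crimes recorded per patrol unit

for year in range(1, 6):
    total = sum(recorded.values())
    # "Predictive" deployment: patrols allocated in proportion to past records.
    patrols = {n: 100 * recorded[n] / total for n in recorded}
    # More patrols mean more of the (equal) underlying crime gets recorded.
    for n in recorded:
        recorded[n] += TRUE_RATE * DETECTION_PER_PATROL * patrols[n]
    # The 2-to-1 patrol split persists year after year, despite equal crime.
    print(year, {n: round(p, 1) for n, p in patrols.items()})
```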

Some critics have labeled predictive policing a form of “tech-washing” that gives racially biased policing methods the appearance of objectivity, simply because a computer or an algorithm seems to replace human judgment.

Rachel Levinson-Waldman, a senior counsel in the Brennan Center’s Liberty & National Security Program, is struck by the consistent lack of enthusiasm for predictive policing from community groups. “What stands out for me in my interactions with the people most likely to actually interact with police,” she says, “is that groups and community organizations are not actively pushing for predictive policing as a preferred way to serve their neighborhood or community.”