Explainer

Predictive Policing Explained

Attempts to forecast crime with algorithmic techniques could reinforce existing racial biases in the criminal justice system.

Published: April 1, 2020

Police departments in some of the largest U.S. cities have been experimenting with predictive policing as a way to forecast criminal activity. Predictive policing uses computer systems to analyze large sets of data, including historical crime data, to help decide where to deploy police or to identify individuals who are purportedly more likely to commit or be a victim of a crime.

Proponents argue that predictive policing can help predict crimes more accurately and effectively than traditional police methods. However, critics have raised concerns about transparency and accountability. Additionally, while big data companies claim that their technologies can help remove bias from police decision-making, algorithms relying on historical data risk reproducing those very biases.

Predictive policing is just one of a number of ways police departments in the United States have incorporated big data methods into their work in the last two decades. Others include adopting surveillance technologies such as facial recognition and social media monitoring. These developments have not always been accompanied by adequate safeguards.

What is predictive policing?

Predictive policing involves using algorithms to analyze massive amounts of information in order to predict and help prevent potential future crimes.

Place-based predictive policing, the most widely practiced method, typically uses preexisting crime data to identify places and times that carry a high risk of crime. Person-based predictive policing, on the other hand, attempts to identify individuals or groups who are likely to commit a crime, or to be the victim of one, by analyzing risk factors such as past arrests or victimization patterns.
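To make the place-based approach concrete, here is a deliberately minimal sketch of one way historical incident data could be turned into ranked “hot spots”: incidents are binned into grid cells, and each cell is scored by a recency-weighted count. The grid size, decay factor, and sample coordinates are illustrative assumptions; this is not the model used by PredPol or any other vendor, whose systems rely on far more elaborate statistics.

    # Illustrative sketch only: a toy hot-spot ranking, not any vendor's algorithm.
    # Past incidents are binned into grid cells; each cell's score is a
    # recency-weighted count, so recent incidents matter more than old ones.
    from collections import Counter

    def hot_spots(incidents, cell_size=0.01, decay=0.9, top_n=3):
        """incidents: list of (latitude, longitude, days_ago) tuples."""
        scores = Counter()
        for lat, lon, days_ago in incidents:
            cell = (round(lat / cell_size), round(lon / cell_size))  # grid cell index
            scores[cell] += decay ** days_ago  # older incidents count for less
        return scores.most_common(top_n)

    # Hypothetical example data, not real crime records.
    sample = [(40.7128, -74.0060, 1), (40.7130, -74.0055, 3), (40.7500, -73.9900, 10)]
    print(hot_spots(sample))  # the cell containing the two recent incidents ranks highest

Even in this toy version, the property that worries critics is visible: the only input is previously recorded crime, so the rankings inherit whatever enforcement patterns are already baked into that data.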

Proponents of predictive policing argue that computer algorithms can predict future crimes more accurately and objectively than police officers relying on their instincts alone. Some also argue that predictive policing can provide cost savings for police departments by improving the efficiency of their crime-reduction efforts.

Critics, on the other hand, warn about a lack of transparency from the agencies that administer predictive policing programs. They also point to a number of civil rights and civil liberties concerns, including the possibility that algorithms could reinforce racial biases in the criminal justice system. These concerns, together with critical findings from independent audits, have led major police departments, including those in Los Angeles and Chicago, to phase out or significantly scale back their predictive policing programs.

What are notable examples of predictive policing projects?

Predictive policing tools are mainly deployed by municipal police departments, though private vendors and federal agencies play major roles in their implementation.

One of the earliest adopters was the Los Angeles Police Department (LAPD), which started working with federal agencies in 2008 to explore predictive policing approaches. Since then, the LAPD has implemented a variety of predictive policing programs, including LASER, which identifies areas where gun violence is considered likely to occur, and PredPol, which calculates “hot spots” with a high likelihood of property-related crimes. Both programs were funded by the federal Bureau of Justice Assistance. (LASER was shut down in 2019 after the LAPD’s inspector general released an internal audit finding significant problems with the program, including inconsistencies in how individuals were selected for and kept in the system. Some police departments have also discontinued their PredPol programs.)

The New York Police Department (NYPD), the largest police force in the United States, started testing predictive policing software as early as 2012. A series of documents released by the department in 2018 after the Brennan Center filed a lawsuit identified three firms — Azavea, KeyStats, and PredPol — that were involved in an NYPD predictive policing trial. Ultimately, the NYPD developed its own in-house predictive policing algorithms and started to use them in 2013. According to a 2017 paper by department staff, the NYPD created predictive algorithms for several crime categories, including shootings, burglaries, felony assaults, grand larcenies, grand larcenies of motor vehicles, and robberies. Those algorithms are used to help assign officers to monitor specific areas. While the NYPD has described the information that is fed into the algorithms — complaints for seven major crime categories, shooting incidents, and 911 calls for shots fired — it has not disclosed the data sets in response to a public records request from the Brennan Center.

The Chicago Police Department ran one of the biggest person-based predictive policing programs in the United States. First piloted in 2012, the program, called the “heat list” or “strategic subjects list,” generated a list of people it considered most likely to commit gun violence or to become victims of it. The algorithm, developed by researchers at the Illinois Institute of Technology, was inspired by Yale University research arguing that the epidemiological models used to trace the spread of disease could also be used to understand gun violence. Chicago police frequently touted the program as key to their strategy for combating violent crime.

However, an analysis of an early version of the program by the RAND Corporation found it to be ineffective, and a legal battle revealed that the list, far from being narrowly targeted, included every single person arrested or fingerprinted in Chicago since 2013. Civil rights groups also criticized the program for targeting communities of color, and a report by Chicago’s Office of the Inspector General found that it relied too heavily on arrest records to assess risk, even when an arrest led to no conviction or was not followed by further arrests. The program was ultimately shelved in January 2020.

Why are there transparency concerns?

Some of the skepticism around predictive policing programs has less to do with specific technologies than with the lack of transparency from the agencies that administer them — both in terms of what kinds of data are analyzed and how the departments use the predictions. Major details about predictive policing in Los Angeles, for example, emerged only after years of activism demanding more information from the LAPD about the nature of the programs’ operations.

Transparency concerns have also surrounded the NYPD’s predictive policing efforts. The Brennan Center had to sue under the Freedom of Information Law to obtain the documents it had requested, and only after an expensive, multiyear legal battle did the department disclose some documentation about its use of in-house algorithms and predictive policing software. Numerous concerns remain, however. The NYPD claims not to use enforcement data, such as arrest data, for predictive policing purposes, but it has been reluctant to produce documentation substantiating that claim, so there is still little transparency about the source of the data sets used as inputs for the NYPD’s algorithms.

There is also a shortage of information about how crime predictions are ultimately used, a problem exacerbated by the fact that the NYPD does not keep audit logs of who creates or accesses predictions and does not save the predictions it generates. This limits the amount of available information on the department’s use of predictive policing and makes it difficult for independent auditors or policymakers to properly evaluate these tools, including whether predictive policing reinforces the historical over-policing of communities of color and whether there is a meaningful correlation between police deployment to hot spots and crime reduction.

Why are there constitutional concerns?

Some legal experts argue that predictive policing systems could threaten rights protected by the Fourth Amendment, which guards against “unreasonable searches and seizures” and requires police to have “reasonable suspicion” before stopping someone. Predictive analytics tools may make it easier for police to claim that individuals meet the reasonable suspicion standard, ultimately justifying more stops.

Additionally, civil rights organizations, researchers, advocates from overly policed communities, and others have expressed concerns that using algorithmic techniques to forecast crime, particularly by relying on historical police data, could perpetuate existing racial biases in the criminal justice system. A 2019 study by the AI Now Institute, for example, describes how some police departments rely on “dirty data” — or data that is “derived from or influenced by corrupt, biased, and unlawful practices,” including both discriminatory policing and manipulation of crime statistics — to inform their predictive policing systems. Relying on historical crime data can replicate biased police practices and reinforce over-policing of communities of color, while manipulating crime numbers to meet quotas or produce ambitious crime reduction results can give rise to more policing in the neighborhoods in which those statistics are concentrated.
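That feedback loop can be illustrated with a deliberately simple simulation: assume two neighborhoods with identical true incident rates, send patrols to whichever area has more recorded crime, and let patrolling raise the share of incidents that get recorded. Every number and mechanism below is an assumption for illustration, not a model of any real department’s or vendor’s system.

    # Toy simulation of the "dirty data" feedback loop described above.
    # Both areas have the same true incident rate, but area A starts with more
    # recorded crime because it was patrolled more heavily in the past.
    import random

    random.seed(0)

    TRUE_RATE = 10        # true incidents per period, identical in both areas
    BASE_DETECTION = 0.2  # share of incidents recorded without extra patrols
    PATROL_BOOST = 0.4    # additional share recorded in the patrolled area

    recorded = {"A": 30, "B": 10}  # historical records reflect past over-policing of A

    for period in range(20):
        # "Prediction": send the patrol to the area with the most recorded crime.
        patrolled = max(recorded, key=recorded.get)
        for area in recorded:
            detection = BASE_DETECTION + (PATROL_BOOST if area == patrolled else 0)
            recorded[area] += sum(random.random() < detection for _ in range(TRUE_RATE))

    print(recorded)  # area A ends with far more recorded crime despite equal true rates

Because the tool only ever sees recorded crime, area A keeps attracting the patrol, which in turn generates still more records for area A, even though the underlying behavior in the two areas is identical.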

Some critics have labeled predictive policing a form of “tech-washing” that gives racially biased policing methods the appearance of objectivity, simply because a computer or an algorithm seems to replace human judgment.

Rachel Levinson-Waldman, a senior counsel in the Brennan Center’s Liberty & National Security Program, is struck by the consistent lack of enthusiasm for predictive policing from community groups. “What stands out for me in my interactions with the people most likely to actually interact with police,” she says, “is that groups and community organizations are not actively pushing for predictive policing as a preferred way to serve their neighborhood or community.”