Can Predictive Policing Be Ethical and Effective?

Predictive policing shouldn’t just become racial profiling by another name.

November 18, 2015

Cross-posted from The New York Times Room for Debate.

More police departments are trying to predict crime through computer analysis of data, part of the growing trend of using algorithms to analyze human behavior. Advocates say this approach focuses police resources on the people and places most likely to be involved in crime, allowing for better relationships between police and residents. But critics say the computer models perpetuate racial profiling and infringe on civil liberties with little accountability, especially when the forecasting models are built by companies that keep their methods secret.

Does predictive policing work? Can it decrease crime without infringing on civil liberties?

Faiza Patel and other national security experts weigh in for The New York Times' Room for Debate series.


Be Cautious About Data-Driven Policing

By Faiza Patel

In every age, police forces gain access to new tools that may advance their mission to prevent and combat crime. Predictive technologies — the use of data from a variety of sources to create an assessment model for the probability of future crimes — have been touted as a means to forecast where crime will likely take place and sometimes who is likely to commit a crime.

Given the far-reaching implications of acting on such projections, any police department considering a predictive analytical tool must thoroughly test the reliability of its claims. So far, research is in its infancy.

A handful of studies have shown short-term decreases in crime when police allocate resources to predicted “hotspots,” but other assessments have shown no statistically significant effect, or diminishing returns.

At a time of rising concern about over-policing in minority communities, surging police into particular locations may have its own compounding negative consequences. Technology that purports to zero in on categories of people likely to commit crimes is even more suspect. It undermines the constitutional requirement that police target people based on individualized suspicion of wrongdoing, not statistical probability.

Of course, even algorithms used to predict the location of crime will only be as good as the information fed into them. If an algorithm is trained primarily on records of crimes attributed to black people, it will spit out results that send police to black neighborhoods. This is a serious and perhaps insurmountable problem, considering that people of color are stopped, detained, arrested and incarcerated at higher rates than whites, regardless of underlying crime rates. The New York Police Department’s infamous stop-and-frisk program overwhelmingly targeted black and Latino men, even though the rate of arrests or summonses resulting from stops was actually lower for minority targets. The risks of “driving while black” are also well documented.
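To see this dynamic in miniature, consider a deliberately simplified sketch in Python. The neighborhoods, counts, and the predict_hotspots function are hypothetical illustrations, not any department’s or vendor’s actual model: a predictor fed only arrest records will flag whichever neighborhood already produced the most arrests, whatever the true crime rates.

```python
# A minimal sketch (hypothetical data, not any real forecasting product)
# showing how a naive "hotspot" predictor reproduces the bias in its inputs.
from collections import Counter

# Historical arrest records: if enforcement was concentrated in certain
# neighborhoods, the data over-represents them regardless of actual crime.
historical_arrests = (
    ["Neighborhood A"] * 80 +   # heavily patrolled area
    ["Neighborhood B"] * 15 +
    ["Neighborhood C"] * 5      # lightly patrolled area
)

def predict_hotspots(arrest_records, top_n=1):
    """Rank neighborhoods by past arrest counts -- place-based
    forecasting reduced to its simplest possible form."""
    counts = Counter(arrest_records)
    return [place for place, _ in counts.most_common(top_n)]

# The model "predicts" crime exactly where police made the most arrests.
print(predict_hotspots(historical_arrests))  # ['Neighborhood A']
```

The circularity is the point: the forecast sends more patrols to Neighborhood A, which generates more arrest records there, which in turn reinforces the next forecast.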

These realities mean we, as a society, should proceed with extreme caution despite the hype about the promise of data. Predictive policing shouldn’t just become racial profiling by another name.