Analysis

When It Comes to Justice, Algorithms Are Far From Infallible

Algorithms are playing an increasingly important role in police departments and courtrooms across the nation. But “objective” algorithms can produce biased results.

Erica Posey
March 27, 2017

Early in Tuesday’s confirmation hearing, Neil Gorsuch suggested that the judiciary may be in danger of automation. When asked how political ideology can affect judicial decision-making, Judge Gorsuch joked that “they haven’t yet replaced judges with algorithms, though I think eBay is trying, and maybe successfully.” The joke fell flat, but Judge Gorsuch isn’t completely wrong – though eBay doesn’t seem to have anything to do with it.

Algorithms already play a role in courtrooms across the nation. “Risk assessment” software is used to predict whether an offender is likely to commit crimes in the future. The software uses personal characteristics like age, sex, socioeconomic status, and family background to generate a risk score that can influence decisions about bail, pre-trial release, sentencing, and probation. The information fed into the system is pulled from defendant surveys or criminal records.
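To make the mechanics concrete, the sketch below shows one way such a score could be turned into a risk label from a defendant’s answers. The feature names, weights, and cutoffs are hypothetical, chosen only for illustration; they are not drawn from any actual product.

```python
# Illustrative sketch of how a risk-assessment score might be computed.
# The features, weights, and thresholds are hypothetical, not those of
# any real risk-assessment tool.

HYPOTHETICAL_WEIGHTS = {
    "age_under_25": 2.0,
    "prior_arrests": 1.5,          # per prior arrest, capped at 5 below
    "unstable_housing": 1.0,
    "family_criminal_history": 1.0,
}

def risk_score(defendant: dict) -> str:
    """Turn survey/record answers into a 'low'/'medium'/'high' label."""
    score = 0.0
    score += HYPOTHETICAL_WEIGHTS["age_under_25"] * (defendant["age"] < 25)
    score += HYPOTHETICAL_WEIGHTS["prior_arrests"] * min(defendant["prior_arrests"], 5)
    score += HYPOTHETICAL_WEIGHTS["unstable_housing"] * defendant["unstable_housing"]
    score += HYPOTHETICAL_WEIGHTS["family_criminal_history"] * defendant["family_criminal_history"]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(risk_score({"age": 22, "prior_arrests": 3, "unstable_housing": True,
                  "family_criminal_history": False}))  # -> "high"
```

Even in this toy version, the weights encode someone’s judgment about which personal characteristics matter and how much – judgments that are invisible to the defendant being scored.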

Algorithms also help determine who ends up in the courtroom in the first place. Police are investing in “predictive policing” technology — powerful software that uses data on past crime to forecast where, when, and what crimes might occur. Police use the predictions to make deployment decisions. Some software even claims to predict who may be involved in a future crime. A pilot program in Chicago used software to identify roughly 400 people deemed at high risk of being involved in violent crime in the next year. Law enforcement notified the individuals and followed up with them in an attempt to cut the city’s crime rate. Facial recognition algorithms are already used with surveillance footage, and emerging technology will allow real-time facial recognition with police body cameras.

Proponents of the tools laud the software’s potential to cut costs, drive down prison populations, and reduce bias in the criminal justice system. The expensive and prejudicial outcomes of our human-driven criminal justice system are well documented. As Judge Gorsuch lamented, “I’m not here to tell you I’m perfect. I’m a human being, not an algorithm.”

Unfortunately, the algorithms aren’t perfect either. A ProPublica analysis of a widely used risk assessment algorithm found that only 20% of the people the software predicted would commit violent crimes went on to do so in the two years after the assessment was conducted. When all crimes – including misdemeanors – were taken into account, the algorithm was only slightly more accurate than a coin flip at predicting recidivism. Worse still, it was nearly twice as likely to mislabel black defendants as high risk as it was white defendants.
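The disparity ProPublica measured comes down to a simple comparison: among people who did not reoffend, how often was each group labeled high risk (the false positive rate)? The sketch below shows that arithmetic on a handful of invented records; the data are made up purely to illustrate the calculation and are not ProPublica’s figures.

```python
# Sketch of the kind of check ProPublica ran: compare how often each group
# is labeled "high risk" but does NOT reoffend (the false positive rate).
# These records are invented solely to show the arithmetic.

records = [
    # (group, labeled_high_risk, reoffended_within_two_years)
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("black", True, False), ("white", True, True), ("white", False, False),
    ("white", False, False), ("white", True, False),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in a group who were labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
```

In this made-up sample the rate for one group is twice the other’s – the shape of the disparity ProPublica reported, though the real analysis involved thousands of actual cases.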

“Objective” algorithms, relying on biased input data, can produce biased results. Predictive policing systems primarily analyze past policing data to develop crime forecasts; the algorithms may be more skilled at predicting police activity than crime. As criminal justice professor Christopher Herrmann noted, “at best, these predictive software programs are beginning their predictions with only half the picture,” given that only 50% of crimes are ever reported to the police. Facial recognition algorithms fall prey to similar issues. Studies suggest – and commercial applications reveal – that facial recognition algorithms tend to be less accurate for women and people of color, likely because the original data used to train the software didn’t include sufficient examples of minority faces.
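The feedback loop critics describe can be seen in a toy simulation, sketched below under made-up assumptions: two neighborhoods with identical underlying crime rates, and a “forecast” that simply sends patrols wherever past recorded crime is highest. This is not any vendor’s actual model, only an illustration of how recorded data can come to reflect where police went rather than where crime happened.

```python
# Toy simulation of a predictive-policing feedback loop. Assumptions are
# invented: equal true crime rates, and crimes are only recorded where
# officers patrol.

import random

random.seed(0)

TRUE_CRIME_RATE = {"north": 0.10, "south": 0.10}  # identical underlying crime
recorded = {"north": 5, "south": 10}              # south starts over-policed

for week in range(20):
    # "Forecast": send patrols to the neighborhood with more recorded crime.
    patrolled = max(recorded, key=recorded.get)
    # Crimes occur equally in both neighborhoods, but only crimes in the
    # patrolled neighborhood make it into the recorded data.
    for hood, rate in TRUE_CRIME_RATE.items():
        incidents = sum(random.random() < rate for _ in range(100))
        if hood == patrolled:
            recorded[hood] += incidents

print(recorded)  # the initially over-policed area dominates the "data"
```

After a few simulated weeks, nearly all recorded crime sits in the neighborhood that started out over-policed, even though both neighborhoods had the same true crime rate – and a forecast trained on that record would keep sending patrols there.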

Additionally, the tools themselves aren’t well understood. The algorithms are often developed by private companies that protect their software as intellectual property. There are relatively few independent assessments of their validity, so judges and attorneys lack the information they would need to adequately understand or challenge their use. A public defender in California struggled even to access the surveys her clients had completed to calculate their risk scores. One Wisconsin judge rejected a plea deal because of the defendant’s high risk score, only to reduce the sentence later, on appeal, after hearing testimony from the risk score’s creator that clarified its meaning.

There are ways to mitigate these concerns, and proponents note that algorithms can still perform as well as or better than human judgment. Why demand perfection from computers when the human alternative is also imperfect?

But Judge Gorsuch’s testimony highlights perhaps the biggest issue with using algorithms in a criminal justice context. Computers and algorithms are popularly perceived to be infallible and unbiased. History warns of the dangers of using math and statistics to lend credence to racism. One shoddy interpretation of 1890s census data drew a false causal link between blackness and criminality. The study, hailed as objective because of its data source and the perceived neutrality of its immigrant author, was used to justify Jim Crow laws and insurance discrimination. Imperfect algorithms predicting criminality can provide a veneer of impartiality to a system of institutionalized bias.

We are not far from a future in which a person may end up in prison after an algorithm sends cops to their neighborhood; an algorithm matches their face to one with an outstanding warrant; and an algorithm tells a judge that the person is at high risk of committing further crimes. We need to make sure our society – and especially our judiciary – fully understands these tools and takes their limitations into account.