Too many people spend time in jail before they’re ever found guilty, merely because they’re suspected of committing a crime.
Some states have tried to correct this problem by using “risk assessments.” When a judge is considering sending someone to jail ahead of trial, rather than setting bail — essentially tying someone’s freedom to their ability to pay for it — the judge will evaluate whether the defendant is a public safety threat. If they are a threat, they’ll be sent to jail. If not, they get to go home.
But how do you decide whether someone is a safety threat? That’s a harder problem to solve. And that’s where risk assessments come in.
What are risk assessments?
In short, a risk assessment is a process or tool the court uses to decide whether someone is too dangerous to release, or a “flight risk” — that is, unlikely to show up for trial.
There are two types of assessments: clinical and algorithmic. In a clinical assessment, a psychologist or clinician evaluates someone individually. These assessments can be time-consuming, requiring informal evaluations, interviews, and investigations of various factors of a defendant’s life.
An alternative method is the “algorithmic risk assessment.” These assessments, which can supplement or entirely replace clinical ones, use data, statistics, and social science to categorize a person’s risk level based on their responses to questions or on data about them. This typically results in a numerical risk score and a designation of low, medium, or high risk.
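To make the idea concrete, here is a minimal sketch of how an additive, point-based algorithmic assessment might work. Every factor name, weight, and cutoff below is invented for illustration; none of them come from any real instrument.

```python
# Hypothetical point-based pretrial risk score. All factors, weights,
# and cutoffs are illustrative, not drawn from any actual tool.

def risk_score(age_at_arrest, prior_convictions, prior_failures_to_appear, pending_charge):
    score = 0
    if age_at_arrest < 23:                      # younger defendants score higher
        score += 2
    score += min(prior_convictions, 3)          # cap each factor's contribution
    score += min(prior_failures_to_appear, 2)
    if pending_charge:                          # already facing another charge
        score += 1
    return score

def risk_level(score):
    # Map the numeric score to a low/medium/high designation.
    if score <= 2:
        return "low"
    if score <= 4:
        return "medium"
    return "high"
```

For example, a 30-year-old with no prior record scores 0 and is designated “low,” while a 21-year-old with several prior convictions, repeated failures to appear, and a pending charge scores “high.” Note how a factor like prior convictions feeds directly into the score, which is what gives rise to the bias concerns discussed below.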
When are they used?
Risk assessments can help inform judges as they make decisions about pretrial detention, bail, sentence length, and more. Some states have implemented these assessments at one or more of these stages. At the pretrial detention and bail stages, an understanding of whether the defendant poses a risk of flight or violence heavily informs the judge’s decision-making. At the sentencing stage, assessments of the risk of violence and recidivism help a judge decide how long someone should be sent to prison. And another, similar type of assessment — a “needs” assessment — can help prison officials decide what types of services, programs, and housing a person will need during their time in prison.
Are they useful? Do they have unintended consequences?
Risk assessments might help reduce the use of bail, a practice that sends people to jail because they are poor, not because they pose a public safety risk. Theoretically, risk assessments could end that practice, making jail a place for people judged too dangerous to be released rather than for people too poor to pay to get out.
But there are reasons for caution. As former Attorney General Eric Holder has noted, algorithmic risk assessments are built on data, and that data can itself reflect the systemic bias of the criminal justice system. For example, a prior arrest might make someone look “riskier” to an algorithm. But we know that people of color are more likely than white people to be arrested for the same crime. Despite relying on seemingly neutral data, risk assessments might “bake in” a history of bias.
Recent research suggests that these concerns are valid. More research can help determine how to create effective, unbiased risk assessments.