
What Judges Think of Pretrial Risk Assessment Algorithms

A new study found that many judges said the tools were flawed but helpful in some situations, such as when they must make quick decisions with scant information.

A new study that digs into when and how judges decide to use pretrial risk assessment algorithms found that many judges think these controversial tools are flawed but still useful.

The tools, the judges said, are especially helpful when little other information is available or when judges can point to risk scores to defend their decisions against criticism. Judges also noted, however, that risk scores were never the full reason for making a decision.

“I initially went into this project thinking that it was going to be about resistance, and a story where judges were completely resisting the tool,” said Sino Esthappan, report author and Ph.D. candidate at Northwestern University.

Based on earlier conversations, he’d expected judges to view the technology as something trying to take over their jobs. In interviews, judges shared “all sorts of criticisms” about how the risk assessment tools were flawed and unreliable, he said. And yet, when he asked about just getting rid of the tools, judges pushed back.

“They very much wanted to keep risk assessments in their courts,” Esthappan said. “That’s sort of the puzzle that I found really interesting, which was, judges don’t like this tool, they think it’s flawed, and yet they don’t want to get rid of it.”

During pretrial hearings, judges determine whether someone accused of a crime — but not yet deemed guilty or innocent — gets to go free while they wait for their court date. And in busy city courts, judges often have only a few minutes to hear what limited information is available before making a decision that must balance defendants’ individual liberty against community safety, according to Esthappan’s study. Letting someone go means they might commit a criminal offense while free or simply fail to show up for their court date. But jailing someone means they miss work, cannot take care of their family and may face more difficulties preparing their legal defense.

Pretrial risk assessment algorithms aim to help judges better weigh the risks of releasing individuals pretrial. The tools draw on details like the defendants’ criminal histories and “sociodemographic, family, and community profiles” to create a risk score, per the report.
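The study doesn’t publish any tool’s scoring formula, but many actuarial pretrial instruments work roughly the same way: a handful of weighted factors are summed into a raw score, which is then bucketed into an ordinal risk level a judge can read at a glance. Below is a minimal sketch in Python; the factor names, weights, and cutoffs are entirely hypothetical, invented for illustration rather than drawn from any real instrument.

```python
# Hypothetical illustration only: real pretrial tools use validated factors
# and weights; these names, point values, and cutoffs are invented.

def toy_pretrial_risk_score(defendant: dict) -> str:
    """Sum weighted factors into a raw score, then bucket it into a risk level."""
    # (factor key, points added if the factor applies to the defendant)
    weights = [
        ("prior_failure_to_appear", 2),
        ("prior_violent_conviction", 3),
        ("pending_charge_at_arrest", 1),
        ("under_25_years_old", 1),
    ]
    raw = sum(points for key, points in weights if defendant.get(key))

    # Map the raw score onto the ordinal scale a judge might see.
    if raw <= 1:
        return "low"
    elif raw <= 4:
        return "moderate"
    return "high"

print(toy_pretrial_risk_score({"prior_failure_to_appear": True,
                               "under_25_years_old": True}))  # -> "moderate"
```

The point of the sketch is the compression: heterogeneous facts about a defendant collapse into a single label, which is part of why, as noted below, judges would need training and context to unpack what a given score actually reflects.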

Some advocates hope the tools can make pretrial decisions more data-driven and objective. But others say the scores can perpetuate disparities by drawing on historical records that reflect biased policing practices, and that assessing an individual based on data about other people may be legally questionable. A 2021 study found that simply viewing the tools’ predictions unconsciously shifted how laypeople weighed decisions, making them more likely to prioritize avoiding no-shows and rearrests over avoiding unnecessarily jailing someone.

Now this new study goes straight to the judges, focusing on four large U.S. criminal courts. The courts often gave little guidance on how or how much to use the risk scores, only telling judges to “consider” them, Esthappan said.

Many of the 27 judges Esthappan interviewed said the tool helped give them some additional information, even if they didn’t fully trust the accuracy. Per the report, “judges selectively relied on them [the tools] to help justify their decisions in cases when they were hampered by information and time restrictions.” In these large, urban courts, judges often faced hefty caseloads.

The tools aim to offer a standardized way of synthesizing the information available at a pretrial hearing, although judges would need contextual information and statistics training to understand how the tools derive their scores, Esthappan said.

Judges said they didn’t make decisions solely based on the risk scores, although some said the score “might cause them to second-guess their decision or push them to seek out more information about a particular case,” Esthappan said.

Most of the time, judges used the tools to back up decisions they were already making.

Some judges cited the algorithmic risk scores in their remarks when they wanted to legitimize and defend decisions they expected to draw media criticism or public pushback. Judges facing re-election could find this especially appealing, since releasing someone who is then rearrested or fails to appear can carry political repercussions.

“In those cases, what a judge is able to do is point to the risk assessment and say, ‘Well, I made this decision because I evaluated the risk and this is what the risk assessment told me,’” Esthappan said.

Other court participants appeared to hold doubts about the tools. Some pretrial risk assessment tools draw on interviews that pretrial officers conduct with defendants as one information source. One inmate in a class Esthappan taught said his pretrial officer advised him on what answers to give to improve his chances of being released before his hearing. Esthappan also observed attorneys selectively embracing the tools, critiquing them as faulty when the scores disfavored their clients and holding the same tools up as valid when the scores favored them.
Jule Pattison-Gordon is a senior staff writer for Governing and former senior staff writer for Government Technology, where she specialized in cybersecurity. Jule also previously wrote for PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.