Risk Assessment Algorithms Can Unfairly Impact Court Decisions

Researchers say just seeing the algorithm’s risk predictions about an arrestee could change how judges weigh pretrial decisions, prompting them to put more priority on risk than they would otherwise.

Pretrial risk assessment algorithms are intended to help judges make more informed decisions, but researchers have raised concerns not only about the fairness and accuracy of the tools themselves, but also their influence on judges’ thinking.

A new study suggests that the algorithms may prompt judges to alter their priorities when making pretrial decisions, without being aware they’re doing so.

Algorithms can make mistakes, and courts do not entrust the tools with making sensitive detention or intervention rulings. Humans stay in charge, and judges are expected to use the tools’ predictions only as one factor informing their choices.

Yet a study published in late August 2021 by University of Michigan’s Ben Green and Harvard University’s Yiling Chen suggests that such limitations may not stop the algorithms from changing not only what pretrial decisions are reached, but also how judges arrive at them.

Judges are supposed to disrupt non-convicted people’s lives as little as possible while still protecting public safety. Viewing the tools’ predictions caused study participants to weigh those competing concerns differently when reaching their conclusions: participants using the tools put a higher priority on the risk of defendants failing to appear in court or getting re-arrested.

This shift made pretrial decisions more unequal, with Black defendants more likely than white defendants to be deemed high risk and thus given harsher decisions, Green and Chen said.

And participants did not seem to realize that their thinking shifted — meaning that unless stakeholders are alerted, courts adopting risk assessment algorithms could effectively be changing their policies without anyone consciously signing off on it.

The widening racial disparities and other effects “would occur without deliberation or oversight,” Green and Chen wrote. “These shifts… would be the consequence of an algorithm’s unintended influence on human decision-making rather than a democratic policymaking process.”

WHAT’S A PRETRIAL RISK ASSESSMENT ALGORITHM?


Arrestees awaiting trial have not yet been found guilty of anything. It’s up to judges to assess both whether these defendants are likely to flee to avoid their court dates and whether they are so dangerous that they are likely to hurt someone if released before the case against them is settled. Court officials might decide to lock up these individuals in the meantime or release them under supervision conditions, such as requiring them to wear GPS trackers.

Algorithms are intended to standardize this decision-making and provide more accurate evaluations.

“Pretrial risk assessments are often touted as being a scientific solution to inequities in the criminal justice system,” said Franklin Cruz, interim chief financial officer and former senior policy adviser at The Bail Project, a nonprofit that provides bail and pretrial support and advocates against money bail practices. “They’re supposed to equalize the decision-making across different people and make sure people are treated the same way across the board.”

The tools attempt to predict the likelihood of a defendant’s re-arrest before trial or failure to appear in court. They draw on data about previous defendants’ outcomes to make predictions about the arrestee in question. The algorithms present their findings either as a numerical estimate (e.g., a 60 percent chance of re-arrest or failure to appear) or by marking the individual as high, medium or low risk.

Washington, D.C.’s Pretrial Services Agency, for example, uses a tool that considers prior failures to appear, violent convictions during the past decades, “suspected drug disorder problems” and other factors, according to its website.
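
To picture how such a tool turns case history into the label a judge sees, here is a minimal sketch that scores a hypothetical defendant on factors like those named above. The weights and cutoffs are invented for illustration and do not represent D.C.’s or any other agency’s actual formula; in practice such weights are typically fit to data on past defendants’ outcomes.

```python
# Illustrative sketch only: the factors echo those named above, but the weights
# and cutoffs are invented and do NOT reflect any real agency's tool.

def risk_score(prior_ftas: int, violent_convictions: int, suspected_drug_disorder: bool) -> float:
    """Combine a few case-history factors into a 0-1 'risk' estimate."""
    raw = 0.10 * prior_ftas + 0.15 * violent_convictions + (0.10 if suspected_drug_disorder else 0.0)
    return min(raw, 1.0)

def risk_band(score: float) -> str:
    """Translate the numerical score into the low/medium/high label a judge sees."""
    if score < 0.3:
        return "low"
    if score < 0.6:
        return "medium"
    return "high"

score = risk_score(prior_ftas=2, violent_convictions=1, suspected_drug_disorder=False)
print(f"score={score:.2f}, band={risk_band(score)}")  # score=0.35, band=medium
```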

“Research shows that when risk assessment tools are designed thoughtfully, independently tested and validated and objectively applied, they can reduce racial and economic bias in decision-making and improve overall outcomes for justice-involved individuals,” D.C.’s Pretrial Services Agency stated in a May 2020 memorandum. The agency, which uses a bespoke risk assessment tool to inform its release condition recommendations, did not respond to a request for an interview.

The tools only provide information — they don’t tell judges what to do with it, Kelly Roberts Freeman, senior court research associate at the National Center for State Courts (NCSC), told Government Technology.

HOW TOOLS CAN CEMENT BIAS


Freeman said that the tools can perform well — for the particular functions they’re tailored to.

“Research has shown [the algorithms] perform better in terms of predicting future justice system contact than a person just making [an] assessment without use of a tool,” Freeman said.

Of course, a finding of high risk doesn’t actually mean the person is dangerous or needs to get locked up.

Individuals living in neighborhoods that are highly surveilled by police are more likely to get arrested or stopped than those living in areas where police aren’t closely watching, after all. Being at high risk of being arrested doesn’t mean the same thing as being at high risk of committing a crime.

Risk assessment tools are designed to make predictions about a criminal justice system that has historically been biased against low-income individuals and people of color, Freeman said. That means that if these tools say someone is likely to get re-arrested, they may simply — yet accurately — be reflecting that that person belongs to an overpoliced demographic.
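
A minimal simulation, using invented numbers, makes that mechanism concrete: two neighborhoods with identical underlying behavior but very different levels of police attention produce very different arrest records, and the arrest record is all a risk tool’s training data sees.

```python
import random

random.seed(0)

# Invented assumption: both neighborhoods have the same underlying rate of the
# behavior being predicted, but one is policed far more heavily.
OFFENSE_RATE = 0.10
DETECTION_RATE = {"heavily_policed": 0.9, "lightly_policed": 0.3}

def observed_arrest_rate(neighborhood: str, n: int = 100_000) -> float:
    """Fraction of residents who end up with an arrest on record."""
    arrests = 0
    for _ in range(n):
        offended = random.random() < OFFENSE_RATE
        noticed = random.random() < DETECTION_RATE[neighborhood]
        if offended and noticed:
            arrests += 1
    return arrests / n

for neighborhood in DETECTION_RATE:
    print(neighborhood, round(observed_arrest_rate(neighborhood), 3))
# Roughly 0.09 vs. 0.03: identical behavior, but a threefold gap in the
# "ground truth" a risk tool would learn from.
```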

“What the research is showing is that it’s not possible to have a tool that is maximizing accuracy in our prediction, as well as promoting equity in terms of how it is classifying people into these risk categories,” Freeman said. “There’s a problem with creating a race-neutral algorithm, within an unequal system of justice.”

Cruz spoke similarly, telling Government Technology, “When you’ve got a situation where African Americans are being arrested six times more often than white Americans for similar crimes, that kind of disparity in the source data is necessarily going to impact how the algorithm functions.”

Some tools perform better than others, Cruz said, but the errors they make tend to hit people of color hardest. The algorithms are more likely to give a defendant who is Black an inflated risk score and erroneously label a white defendant as less dangerous than they really are, he said.
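
One way to see the tension Freeman describes, and the error pattern Cruz describes, is a toy calculation with invented numbers. Suppose a two-level score is identically calibrated for two groups, meaning a high-risk flag carries the same 60 percent chance of recorded re-arrest in both. The group whose recorded re-arrest rate is higher must then receive far more high-risk flags, and with them a higher false positive rate: more people flagged high risk who would not have been re-arrested.

```python
# Toy numbers (invented) showing why a score can look "race-neutral" in its
# calibration yet still produce unequal false positive rates when the recorded
# re-arrest base rates of two groups differ.

def false_positive_rate(base_rate: float, p_high: float = 0.6, p_low: float = 0.2) -> float:
    """FPR of a two-level score that is perfectly calibrated within this group.

    High flags carry a 60% chance of recorded re-arrest and low flags a 20%
    chance, in every group. The group's base rate then fixes how many people
    must be flagged high, and with it the false positive rate.
    """
    share_high = (base_rate - p_low) / (p_high - p_low)  # fraction flagged high risk
    false_positives = share_high * (1 - p_high)          # flagged high, not re-arrested
    return false_positives / (1 - base_rate)

# Hypothetical base rates shaped by unequal policing rather than behavior.
print(round(false_positive_rate(base_rate=0.45), 2))  # ~0.45 for the overpoliced group
print(round(false_positive_rate(base_rate=0.25), 2))  # ~0.07 for the other group
```

The same arithmetic run in reverse shows that equalizing the false positive rates would force the score to mean different things in different groups, which is the impossibility Freeman points to.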

The algorithms also raise legal questions, Freeman and Cruz note, because they are evaluating an individual based on what other people have done, by considering historical data about people with similar characteristics.

DECISION-MAKING IMPACT


Issues like these provide plenty of reason not to put an algorithm in the driver’s seat. And, indeed, the tools aren’t intended to make decisions, only to provide judges with more information to help them reach their own.

But the University of Michigan and Harvard study suggests that just presenting participants with the assessment tools’ findings changes how much weight they give to risk when making decisions. That’s different from simply going through the same kind of decision-making, only equipped with more information.

Judges who detain someone pretrial are locking up a person who has not been found guilty, potentially exposing them to abuse in jail, causing them to miss work and preventing them from caring for their family. In exchange, they are ensuring the person will not miss trial or commit a serious offense in the meantime. Judges need to weigh those and other factors when making detention decisions, and the researchers found that risk assessment tools prompted decision-makers to weigh preventing failures to appear or re-arrests more heavily than they otherwise would.

“Many policy decisions require balancing competing goals and therefore lack a straightforward correct answer,” the authors note.

The experiment had laypeople participate in simulated exercises, one of which asked participants to make decisions about releasing or detaining individuals pretrial. Judges may receive more specialized training in handling risk assessments than the average person, the authors note, but they add that even court professionals are “susceptible to cognitive and racial biases when making decisions in much the same manner as laypeople.”

Showing participants risk scores seemed to amplify their reactions at both ends: these decision-makers became even more likely to release defendants ranked low risk and even more likely to detain those ranked high risk.

“For example, the risk assessment reduces the detention likelihood by 6.3 percent for a defendant with a perceived risk of 30 percent but increases the detention likelihood by 8.7 percent for a defendant with a perceived risk of 60 percent,” the study states.
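
Treating those quoted figures as percentage-point changes (the study’s phrasing leaves some room for interpretation), a bit of arithmetic against hypothetical baseline detention rates shows how the gap between low- and high-risk defendants widens; only the shifts below come from the study, the baselines are invented.

```python
# Illustrative arithmetic only: the baseline detention likelihoods are invented;
# the shifts are the percentage-point figures quoted from the study.
baseline = {0.30: 0.35, 0.60: 0.55}    # hypothetical detention likelihood by perceived risk
shift = {0.30: -0.063, 0.60: +0.087}   # effect of seeing the risk assessment, per the study

with_tool = {risk: baseline[risk] + shift[risk] for risk in baseline}
gap_before = baseline[0.60] - baseline[0.30]
gap_after = with_tool[0.60] - with_tool[0.30]
print(f"detention gap without tool: {gap_before:.3f}, with tool: {gap_after:.3f}")
# The spread between low- and high-risk defendants grows by 15 percentage points.
```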

This greater emphasis on risk increased racial disparities in pretrial decisions made by study participants. The study only had participants consider Black and white felony defendants.

“Because risk is intertwined with legacies of racial discrimination in the criminal justice … systems, more heavily basing decisions on risk would likely exacerbate racial disparities in incarceration,” authors wrote. “Due to past and present oppression in the United States, Blacks have disproportionately higher risk levels than whites for being arrested... making them particularly vulnerable to increased attention to risk.”

SHIFTING COURT MINDSETS


Risk assessment algorithms — despite these concerns — attempt to respond to real problems. For example, in 2018, California Sen. Bob Hertzberg promoted them as a more equitable alternative to the money bail system, which often leads to people being jailed based on their personal wealth rather than practical concerns.

While Hertzberg sought to replace money bail with risk-based assessments, other courts combine the two. Freeman said a judge considering monetary bail might first see if the risk assessment ranks a defendant as low enough risk to justify releasing them without imposing that financial barrier or without supervision.

Courts are — and should be — increasingly shifting their mindsets on pretrial decisions to emphasize releasing people who have not been found guilty whenever possible, Freeman said. D.C.’s Pretrial Services Agency states that it uses its risk scores to aid release conditions, not to make the case for detention.

HEART OF THE MATTER


Pretrial risk assessment tools thus far fall short of addressing the heart of the problem, because they do not look deep enough to uncover and report the specific causes and obstacles preventing individuals from showing up at court, Cruz said.

Getting those details is key to finding the least invasive way to resolve the issue.

“They don’t distinguish between times when somebody’s really trying to run away from being prosecuted from the times when folks are absent from court because they couldn’t get off work, or they didn’t know when their court date was, or some other extenuating circumstance,” Cruz said.

Similarly, he said, many tools’ efforts to assess defendants’ risk of re-arrest don’t actually reflect whether they are a public safety threat. That’s because the algorithms might predict likelihood of being arrested, which isn’t the same as being convicted, or may not distinguish between violent crimes and other offenses.

TECH BOOST


Certain technologies also show promise for increasing pretrial equity.

Cruz noted that the shift to virtual pretrial hearings during the pandemic has largely been a boon. During normal in-person hearings, arrestees could be required to spend hours waiting in court until the judge got to them, he said, a significant burden on anyone who cannot easily afford to miss work or hand off child-care responsibilities.

Remote hearings instead let defendants hop on just when they are needed, making it easier to appear. Still, more efforts will be needed to ensure that virtual breakout rooms or other communication methods are confidential enough to let defendants speak privately with their counsel, and to expand broadband so that everyone has this option, Cruz said.

Freeman, meanwhile, said there’s been growing interest in automatic criminal record clearances, so that clearing a record does not depend on individuals figuring out how to navigate the request process themselves. This could prevent long-ago offenses from trailing a person, both improving employment chances and keeping a risk assessment algorithm from digging up particularly old events.
Jule Pattison-Gordon is a senior staff writer for Governing and former senior staff writer for Government Technology, where she specialized in cybersecurity. Jule also previously wrote for PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.