Justin Forkner, chief administrative officer of the Indiana Supreme Court, is co-chair of the new AI Rapid Response Team (AI RRT), which aims to help state courts understand both generative AI and traditional AI, along with the policies surrounding them, so courts can better craft their own approaches to the technology. The AI RRT is made up of four state court administrators and four chief justices from across the U.S., with the National Center for State Courts (NCSC) providing staffing.
AI has plenty of potential to boost courts. Short-staffed courts could potentially use it to help with administrative tasks like redactions or case processing, Forkner said, and it could help rewrite jargon-heavy language into accessible plain language. It could also help justices write speeches or assist with research, provided they check for inaccuracies or plagiarized material.
And generative AI is already affecting courts, with some participants turning to it for research or answers.
“It seems like every couple of weeks there’s a new story about an attorney or even a self-represented litigant using one of these technologies, and the information is not always credible or even accurate,” said Shay Cleary, managing director of the National Center for State Courts.
In Forkner’s view, AI is just another tool — albeit one that demands caution. He recommends against courts banning AI but said embracing the tech before understanding its risks is dangerous.
The AI RRT aims to help courts make sense of what AI is and ways to approach it. The goal is to produce a centralized resource hub, where courts can view different states’ policies, use cases and initiatives. The AI RRT is also producing its own interim guidance documents.
For example, courts have split over how to handle attorneys filing AI-generated briefs that include hallucinations. Some courts believe they need rules specifically for generative AI, while others believe they can apply existing rules around certifying the accuracy of information. What matters most is that courts make an informed choice, Forkner said.
The AI RRT is intentionally short term and fast-moving. It formed in December, and members will meet every two weeks through the summer, when they expect to produce final deliverables, Cleary said.
Even with that frequent schedule, it’s challenging to keep up with the technology’s evolution.
“The hardest thing is we meet every two weeks, and almost every two weeks some part of what we thought two weeks prior is out of date,” Forkner said.
The AI RRT intends to be a starter, providing initial resources that the NCSC could maintain and update long term as software, policies and initiatives change, Forkner said.
So far, the AI RRT has produced an online resource center and a talking points guide around generative AI. The guide originally opened with a section explaining risks, but the team decided to move that portion later so the guide would start on a more positive note and not scare folks off AI altogether, Cleary said.
Looking ahead, the AI RRT is developing advice on finding AI vendors, including what to seek in contracts and conditions, Forkner said. This will also include advice on the risks of generative AI trained on the open Internet versus similar tools designed specifically for courts, Cleary said.
The AI RRT is far from the only group examining AI in the judicial arena, with the American Bar Association and National Council of Lawyer Disciplinary Boards covering their own space. As such, Forkner said it was important to take a focused approach, both to avoid trying to tackle an area already being addressed and to narrow the work to produce resources in a timely manner.