
Courts Consider the Coming Risk of Deepfake Evidence

Catching convincing AI-fabricated evidence is still a work in progress, but courts could benefit from thinking now about how they might confront the challenges posed by the emerging technology.

PHOENIX — Courts aren’t immune to the rise of generative AI, and the prospect of deepfakes appearing as evidence has been drawing attention in the justice realm.

If such fabricated text, audio or visual content is submitted as evidence and manages to slip past detection and affect a case, innocent parties could be harmed — or justice denied. Such incidents could also shake the public’s faith in the judicial system, said Andre Assumpcao, data scientist at the National Center for State Courts (NCSC), during this week’s Court Technology Conference in Phoenix.

Just the specter of deepfakes raises its own issues. As Elon Musk demonstrated, defendants may try to protect themselves from genuine inculpatory evidence by falsely claiming it was a deepfake, noted an audience member during the panel.

As technologies advance, deepfakes will become more convincing, said Fred Lederer, chancellor professor of law and director of the Center for Legal and Court Technology at William and Mary Law School, during a separate panel.

“From a court perspective, this does literally mean that we are close to the point at which you cannot believe what you see,” Lederer said. “We may in fact be heading toward [a] world in which, with limited exceptions, the only thing we’re going to believe is human testimony.”

But humans aren’t always accurate, either. Witnesses can be forgetful or make mistakes, and judges and juries have not shown themselves to be consistently adept at telling when witnesses are lying, Lederer said.
Jannet Okazaki (right) and Andre Assumpcao (left) present during the 2023 CTC conference.
Photo: Jule Pattison-Gordon

DEEPFAKE DETECTION?


Trying to filter real from forged could consume a lot of court resources, Assumpcao said. And courts have often taken the view that vetting evidence isn’t their role, said NCSC Principal Court Management Consultant Jannet Okazaki. Many courts regard the parties who present evidence as the ones responsible for authenticating it. But given the ongoing rise in self-represented litigants, that may have to change.

“I would often hear, ‘That’s not our responsibility, we just review evidence as it’s presented to us,’” Okazaki said. “But courts are evolving, and not everyone is coming in with an attorney.”

Assumpcao said one response could be to adopt policies holding parties liable for manipulating evidence. Courts may also need to hire experts or adopt systems that aim to help detect deepfake evidence. Today, such systems are still in early stages, he said.

Various efforts are underway to confront the problem.

Oakland University Associate Professor Khalid Malik and graduate student JJ Ryan told Government Technology after a panel that they have been developing a system, which they hope to pilot with courts, that assesses videos for telltale clues. Those can include an abnormal amount of space between the eyes on a face, or moments of misalignment between a video’s audio and visual elements that may be too subtle for a human viewer to detect.
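As a rough illustration of the kind of geometric-consistency cue Malik and Ryan describe, the sketch below flags video frames whose normalized eye spacing deviates sharply from the rest of the clip. It is not their system: the landmark coordinates, frame data and 15 percent tolerance are hypothetical placeholders, and a real detector would supply the landmarks and combine this with many other signals, including the audio-visual alignment checks mentioned above.

# Sketch of one geometric-consistency check: track the inter-eye distance,
# normalized by face width, and flag frames that deviate sharply from the
# clip's median. Landmarks are assumed to come from a face-landmark detector;
# the 15% tolerance is an arbitrary placeholder.
from math import dist
from statistics import median

def eye_distance_ratio(left_eye, right_eye, face_width):
    # Normalize by face width so the measure is independent of camera distance.
    return dist(left_eye, right_eye) / face_width

def flag_suspect_frames(frames, tolerance=0.15):
    # Return indices of frames whose eye spacing strays from the video's median.
    ratios = [eye_distance_ratio(f["left_eye"], f["right_eye"], f["face_width"])
              for f in frames]
    baseline = median(ratios)
    return [i for i, r in enumerate(ratios)
            if abs(r - baseline) / baseline > tolerance]

# Made-up example data: frame 2 has an implausibly wide eye spacing.
frames = [
    {"left_eye": (100, 120), "right_eye": (160, 121), "face_width": 200},
    {"left_eye": (101, 119), "right_eye": (161, 120), "face_width": 201},
    {"left_eye": (100, 120), "right_eye": (190, 121), "face_width": 200},
]
print(flag_suspect_frames(frames))  # -> [2]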

Some are also considering whether it’s possible to shift the overall media environment.

In July, seven leading AI companies promised to develop mechanisms that would indicate if audio or visuals were created by AI, such as a digital watermarking system. University of California, Berkeley generative AI expert Hany Farid has suggested that smartphones or other devices that capture genuine, original images and audio should impart their own metadata watermark or stamp indicating when and where the content was recorded. Looking ahead, Assumpcao suggested that blockchain could track alterations to files, should it become more mainstream.
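To make the provenance idea concrete, here is a minimal sketch assuming a capturing device can bind a record of when and where a file was made to the file’s hash, so that any later alteration breaks the match. The file name, timestamp and location are hypothetical, and this is not the watermarking scheme the companies committed to or any standard such as C2PA; a real system would also need the device to cryptographically sign the record, since otherwise a forger could simply regenerate it.

# Illustrative provenance stamp: hash the captured file, store the hash with
# capture-time metadata, and later check whether the file still matches.
import hashlib
import json

def capture_record(path, captured_at, location):
    # Bind the file's SHA-256 digest to when/where it was recorded.
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    return json.dumps({"sha256": digest, "captured_at": captured_at, "location": location})

def still_matches(path, record_json):
    # Recompute the digest and compare it to the stored record.
    record = json.loads(record_json)
    current = hashlib.sha256(open(path, "rb").read()).hexdigest()
    return current == record["sha256"]

# Hypothetical usage: the record travels with the file; any edit breaks the match.
# record = capture_record("clip.mp4", "2023-10-17T14:03:00Z", "Phoenix, AZ")
# print(still_matches("clip.mp4", record))  # False if the file was altered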

Deepfakes aren’t only a threat to the integrity of evidence, but also to court security. Okazaki said AI can enable more sophisticated social engineering. For example, criminals have been using the tech to realistically mimic the voices of specific individuals. Such voice spoofing could be turned against a courthouse, with criminals mimicking the voice of an IT staff member or the court administrator to call employees and ask for their passwords.

But generative AI isn’t necessarily all bad news. During the NCSC panel, one audience member said their court found it helpful for tasks like finding alternative ways to phrase feedback on staff evaluations or generating fresh interview questions. Assumpcao suggested it could support legal research, provided precautions are taken to avoid revealing private information in queries and to check answers for inaccuracies.
Jule Pattison-Gordon is a senior staff writer for Government Technology. She previously wrote for PYMNTS and The Bay State Banner, and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.