‘Cat and Mouse’: Keeping Up With the Evolution of AI

As AI evolves, government must evolve with it to effectively leverage the technology for improved service delivery, attendees said at the annual Digital Benefits Conference. Accurate data is essential to making AI-powered systems work as designed for government.

Artificial intelligence (AI) has the potential to dramatically change service delivery — and in fact, is doing so already, experts said this week at BenCon 2024.

The event, the Digital Benefits Conference, was the second annual gathering of the Digital Benefits Network, held Tuesday and Wednesday at the Beeck Center for Social Impact and Innovation.

The use of AI in benefits programs can pose risks to vulnerable populations, and deploying it effectively requires digital identity verification that many governments do not yet have in place. When those risks are mitigated effectively, however, the technology can benefit both governments and the people they serve.

The digital benefits arena is changing as new technologies emerge that could help with benefits eligibility assessment and delivery, Lynn Overmann, Beeck Center executive director, said during the event: “I think this is a really interesting opportunity where generative AI could actually be quite helpful in making those decisions more consistently, equitably, and effectively.”

Today, the rapid advance of AI poses a “cat and mouse problem,” according to Sarah Bargal, assistant professor of computer science at Georgetown University, who led one event session. It is a cycle in which AI models are released, deepfake disruption algorithms follow to mitigate their risks, and then new AI models emerge to restart the chase.

The rise of big data is a major factor in why AI is evolving at the rate it is today, Bargal said. When the artificial neuron was first introduced decades ago, large-scale data sets were not readily available as they are now.

To effectively make AI-powered systems work in government, agencies need to know what data is being input into them and ensure that it accurately represents the population being served, said Grant Fergusson, Equal Justice Works fellow and counsel at the Electronic Privacy Information Center, during a session.

If the data does not represent the populations being served, the risk of algorithmic bias is increased. In the benefits world, this can lead to discriminatory outcomes.

“We are living in this era in which emerging technologies — like artificial intelligence, but other algorithmic systems as well — are reimagining historical patterns of racial and social inequality, and doing so at a scale and a pace that is really kind of confounding the law’s ability to respond,” Georgetown Law Center on Privacy and Technology Senior Associate Clarence Okoh said in a session.

In fact, according to Elizabeth Laird, Center for Democracy and Technology (CDT) director of equity in civic technology, while some experts are raising concerns about the potential use of predictive analytics in schools and policing, the technology is already in use. An annual CDT poll found that 60 percent of teachers surveyed are already using predictive analytics to identify students at risk of dropping out.

As Nikki Zeichner, transformation officer at the U.S. Department of Health and Human Services, said in one session, government agency staff are already using AI for a variety of tasks. As such, it is important for agency leaders and policymakers to strategize how to govern responsible use, rather than prohibit it altogether.

CDT Senior Policy Analyst Quinn Anex-Ries highlighted federal action and guidance on AI and noted that states are also moving to regulate it, with more than 40 bills related to public use of AI introduced during the 2024 legislative session.

The core principle when governing AI should be public trust, and how policy can help the people an organization is serving, Fergusson advised: “Think always about how a family in need would consider what you’re doing.”

ADVANCING AI TOGETHER


When it comes to responsible AI adoption and implementation, collaboration is key.

Pennsylvania Director of Emerging Technologies Harrison MacRae recommended government agencies get involved in the GovAI Coalition, led by the city of San Jose, which is a consortium of governments tackling AI-related challenges. The group provides resources agency leaders can implement within their own communities.

Civil society organizations are another helpful resource, Fergusson said. Engaging with these external stakeholders is valuable, he said, because they are independently funded and are not seeking money from the agencies they advise. Organizations like EPIC and CDT can provide expertise and resources to support government work, Fergusson said.

“There are organizations like ours who have been working on these issues for a long time,” Laird said. “So please know you’re not starting from scratch; there are a lot of us who are ready and willing to help you.”

Julia Edinger is a staff writer for Government Technology. She has a bachelor's degree in English from the University of Toledo and has since worked in publishing and media. She's currently located in Southern California.