
Good vs. Risk in Value-Based AI: The ‘Challenge of Our Time’

Using artificial intelligence from a value-based perspective was a major theme during the 2024 Code for America annual summit. The organization also announced its new AI Studio.

When it comes to deploying artificial intelligence (AI), its use must align with an organization’s values, experts said at last week’s Code for America Summit.

Even as governments and related entities prepare for disruption, AI is already here. And while some public-sector leaders are still learning how to maximize the technology’s impact, others have already begun implementation. From Washington, D.C., to Oklahoma and beyond, many governments are prioritizing a value-based approach.

Amanda Renteria, CEO of Code for America, drew a mixed reaction from the audience when she asked whether attendees were excited about AI or terrified. But Robin Carnahan, administrator at the U.S. General Services Administration, emphasized that AI’s emergence is not drastically different from that of the engine; it is a technology that can make people’s lives better. What makes AI unique is its potential for expansive application; however, Carnahan underlined, government has the power to control how the technology is used.

“AI is just a tool,” Carnahan said. She argued that governments have the data, talent and capacity to use this tool for good, in a way that mitigates risk. Achieving that balance, Carnahan said, is the “challenge of our time.”

To solve this challenge, she highlighted the importance of balancing innovation with the values prioritized by a government organization — and those of the nation at large.

In that regard, Carnahan said there are three primary things to think about with AI technologies. The first is ensuring they are secure and protect privacy. The second is ensuring they are accessible for everyone, as government has a responsibility to make tools usable for all constituents. The third is ensuring the tools are responsible, which goes beyond ethics and legality; it also includes transparency on how these tools are used and where the data that powers them comes from.

During another part of the summit, held May 28-30, California Chief Technology Officer Jonathan Porat echoed this sentiment. California’s intentionally iterative AI implementation process makes room for both experimentation and evaluation, Porat said, so improvements can be made as necessary. It is this experimental process, as he put it, that lets the state ensure AI use aligns with its governance and values.

“So, it’s been critical for us to be able to work with our community partners [and] our labor partners throughout to ensure that our technology and the way that we’re implementing this technology is in alignment with those things that we care the most about,” he said.

He recommended that other organizations start by identifying their values early on and use them as a method of ensuring accountability throughout the planning, experimentation and implementation of AI technologies.

Successful AI implementation, Porat said, will demonstrate to the public not only that government has the capacity to innovate but that the public sector is already doing so.

AI and other emerging technologies give organizations a chance to explore and improve their mission at the foundational level, said speaker Justin Brown*, launch CEO of the nonprofit Center for Public Sector AI and the former Oklahoma secretary of human services: “Because I do believe that disruptions like this are an opportunity to entrench values.”

Such disruptions, Brown explained, should not cause fear, but rather should be used as a tool to accelerate transformation. He also underscored the importance of building metrics that help agencies monitor outcomes as they begin to use these technologies.

During the event’s broad conversation on AI, Code for America announced the launch of its new AI Studio, which aims to help governments implement AI tools with a human-centered approach.

The organization will host workshops this year, both in person and virtually, to help attendees learn how to test and scale AI tools. Those interested can sign up on the Code for America website to learn more about the workshops as they are made available.

*Justin Brown is a senior fellow at the Center for Digital Government, part of e.Republic, Government Technology's parent company.
Julia Edinger is a staff writer for Government Technology. She has a bachelor's degree in English from the University of Toledo and has since worked in publishing and media. She's currently located in Southern California.