National AI Committee: U.S. at 'Critical Crossroads' for AI

A new report advises the White House and Congress on how to push for responsible AI, noting that a public awareness campaign is also important to help residents make informed choices about the evolving technology.

[Image: digital illustration of an AI brain. Shutterstock/cono0430]
Artificial intelligence is a tool that can help or harm the public.

Used to help, it could be equitable and a true benefit to humanity; used to harm, it could be discriminatory and dangerous. Which way it goes depends on governments acting now to create and enforce the right safeguards, said several members of the National Artificial Intelligence Advisory Committee (NAIAC), speaking recently during a Brookings Institution panel.

This issue is one that state governments have been mulling as well, with Colorado, Connecticut and California all taking recent steps to address it. But federal action could be far-reaching, and the NAIAC has been working since May 2022 to develop recommendations for the president and Congress about actions they should take. The committee comprises 26 AI experts from across academia, civil society and private companies. It released its first year’s draft report last week.

NAIAC Chair Miriam Vogel said AI fuels many helpful services, but also presents serious risks of both deliberate and unintentional harms.

“We’re at a critical crossroads, because this tool can also be a weapon,” Vogel said during the panel. “And what's so important about this weapon is not just that it can be misused, but that it can be scaling discrimination — that lines of code and iterations of code can undo decades of progress and the perpetrator may not know it.”

Vogel is also president and CEO of EqualAI, a nonprofit aimed at reducing unconscious bias in AI.

The pressure to act is urgent because AI technologies are evolving rapidly, becoming more powerful and more deeply infused into society.

AI is a “technology that requires immediate, significant and sustained government attention,” the report said. “The U.S. government must ensure AI-driven systems are safe and responsible, while also fueling innovation and opportunity at the public and private levels.”
[Image: Miriam Vogel, NAIAC chair and president and CEO of EqualAI, discusses the risks from AI use during a Brookings Institution panel. Screenshot]

TRUSTWORTHY AI


NAIAC spent the past year focused on four core ideas: trustworthy AI, research and development, workforce and opportunity, and international collaboration.

Four NAIAC members convened for the Brookings panel recently, and several pointed to a need to prevent AI use from disproportionately harming marginalized communities. Reggie Townsend is the vice president of Data Ethics Practice at the analytics software company SAS Institute. He said he got involved in ethical AI work to prevent the technology from leading to “Jim Crow 3.0.”

Among the issues: the ways that machine learning systems arrive at their conclusions are often opaque to those impacted and potentially even to those who created the systems, said Swami Sivasubramanian, vice president of Data and Machine Learning Services for Amazon Web Services (AWS). That’s especially true for large language models: “I don't think even people who build that [system] can explain to you why it generated that response,” he said.
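
For a concrete sense of the gap Sivasubramanian describes, here is a minimal Python sketch (not from the panel or the report; the data and feature names are invented for illustration) contrasting a simple model whose reasoning can be read off directly with the situation in large neural systems:

```python
# Minimal illustration of the interpretability gap Sivasubramanian describes.
# Synthetic data and feature names are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed"]
X = rng.normal(size=(500, 3))
# Ground-truth rule: approval driven mostly by income, hurt by debt.
y = (1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# A linear model's reasoning can be read straight off its coefficients...
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>15}: weight {coef:+.2f}")

# ...whereas a large neural network (or an LLM) has millions or billions of
# parameters with no comparable per-feature readout, which is why even its
# builders often cannot say exactly why it produced a given response.
```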

That makes it important to raise awareness of automated systems’ limitations, and to distinguish between situations where AI use is likely innocuous (video streaming sites recommending films to watch, for example) and those too high stakes to trust to the systems alone, such as making medical diagnoses.

[Image: Swami Sivasubramanian, vice president of Data and Machine Learning Services for Amazon Web Services, discusses the understandability of AI. Screenshot]

Law enforcement’s use of AI also deserves deeper examination, and NAIAC intends to soon launch a subcommittee focused on the matter, Vogel said.

Fostering ethical AI can require incorporating more diverse perspectives and voices in the development and testing of the tools, said Vogel. Every point in an AI system’s life cycle is an opportunity for biases to become embedded, and that’s more likely to happen when the people creating the tools all share similar viewpoints.
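
One concrete way development teams probe for such embedded bias is to compare a model’s outcomes across demographic groups. The sketch below is hypothetical (the group labels, rates and threshold are invented, not drawn from the report) and applies the “four-fifths rule,” a common disparate-impact screen in U.S. employment contexts:

```python
# One simple bias check a development team might run at the testing stage:
# compare a model's selection (approval) rates across demographic groups.
# Group labels and outcome counts here are hypothetical.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, approved_bool) pairs -> rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

outcomes = [("A", True)] * 80 + [("A", False)] * 20 \
         + [("B", True)] * 55 + [("B", False)] * 45

rates = selection_rates(outcomes)
best = max(rates.values())
for group, rate in rates.items():
    # The "four-fifths rule" flags groups selected at < 80% of the top rate.
    flag = "POTENTIAL DISPARATE IMPACT" if rate < 0.8 * best else "ok"
    print(f"group {group}: rate {rate:.2f} -> {flag}")
```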

Safeguarding AI use may not require a full set of new laws: In some cases, existing civil rights and anti-discrimination laws can be applied to respond to harms stemming from use of predictive and decision-making algorithms, but federal enforcement units need more resources to help them do so, the report said.

The report also called for establishing a federal chief responsible AI officer to oversee “implementation and advancement of trustworthy AI principles across [federal agencies],” as well as for the filling of vacant leadership positions like that of the director of the National Artificial Intelligence Initiative Office.

INTERNATIONAL PICTURE


AI technologies developed in one country get used in others, and so responsible AI governance practices “must be workable and understandable for users across the globe, operating in the wide landscape of legal jurisdictions,” per the report.

Townsend said it’s important for countries to settle on a set of international standards that are at least loosely aligned. That would be akin to international electricity standards, which ensure that devices can charge in different countries’ power sockets, even if converters are needed. Government or industry could act to push for this commonality, he said.

Joint research and development efforts are one opportunity for like-minded nations to come together and develop guardrails that support shared values, Sivasubramanian said. And outreach shouldn’t end with ideologically aligned nations, either, Townsend said. He recommended using the offer of collaboration as an “opportunity for the extension of olive branches to those with whom we don’t completely share 100 percent of our values.”

[Image: Left to right: Brookings Institution moderator Cameron Kerry; EqualAI’s Miriam Vogel; AWS’s Swami Sivasubramanian; AIandYou’s Susan Gonzales; SAS Institute’s Reggie Townsend; and co-moderator Jessica Brandt of the Brookings Institution’s AI and Emerging Technology Initiative. Screenshot]

AWARENESS AND DAILY LIFE


AI is being used in ways that affect daily life, meaning the U.S. should not leave residents in the dark about how the technology works. People don’t have to become AI specialists, but steps should be taken to ensure everyone gains a general understanding of how the technology works and how it affects them, Townsend said.

“As a society, there's a baseline of understanding that we all have to have now,” he said.

Similar comments came from Susan Gonzales, the founder and CEO of AIandYou, a nonprofit that works to keep marginalized communities informed about new technologies like AI. For example, people ought to know that when they apply for loans online, AI tools can track how often they make mistakes on the application and judge them for it.
[Image: Susan Gonzales, founder and CEO of AIandYou. Screenshot]

“You’re probably going to make more mistakes [if you’re applying] on your phone, and you might get declined for a loan because of that,” Gonzales said. “There are fundamental points regarding AI tools that people are interacting with every day.”
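
To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of the kind of scoring Gonzales describes; the function, feature names and weights are all invented for illustration:

```python
# Hypothetical sketch of the behavior Gonzales describes: an online loan
# application logging how often an applicant corrects the form, then letting
# that count quietly nudge the score. Feature names and weights are invented.
def loan_score(income: float, debt_ratio: float, form_corrections: int) -> float:
    score = 600.0
    score += min(income / 1000.0, 150.0)     # capped income contribution
    score -= 200.0 * debt_ratio              # penalty for existing debt
    score -= 5.0 * form_corrections          # the quiet behavioral penalty
    return score

# Same finances, but one applicant fat-fingers the form on a phone:
desktop = loan_score(income=60_000, debt_ratio=0.3, form_corrections=1)
phone   = loan_score(income=60_000, debt_ratio=0.3, form_corrections=12)
print(desktop, phone)  # the phone applicant scores 55 points lower
```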

She and Townsend called for public education and awareness-raising campaigns.

WORKFORCE IMPACT


There’s been plenty of consternation over whether these technologies are coming for people’s jobs, and large language models like ChatGPT have spread that worry to more sectors, including knowledge work, said Townsend.

“We have to be honest about this as well, which is to say, there will be displacement,” Townsend said.

[Image: Reggie Townsend, vice president of Data Ethics Practice at SAS Institute. Screenshot]

Technology has changed, and continues to change, the nature of work; ATMs replacing bank tellers is one historical example, Townsend said. AI will affect jobs too, and the technology and its uses are changing more rapidly than previous tools did. AI’s rise has implications for the kinds of skills people will need to operate in work and society shaped by this technology.

But decisions made now can steer the technology toward supporting workers and changing the kinds of tasks they do, rather than fully replacing them. Developers, for example, could use AI to handle rudimentary testing, freeing them up to tackle more involved or creative tasks.
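
As a small illustration of that division of labor (the function and tests below are invented examples, not from the panel), a developer might write the logic while an AI assistant drafts the rudimentary tests:

```python
# Illustration of the division of labor described above: a developer writes
# the function; an AI assistant drafts the boilerplate tests. The function
# and tests here are invented examples, runnable with pytest.
def normalize_phone(raw: str) -> str:
    """Strip punctuation/spacing from a 10-digit U.S. phone number."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]
    if len(digits) != 10:
        raise ValueError(f"expected 10 digits, got {len(digits)}")
    return digits

# The sort of rudimentary cases an assistant could generate automatically:
def test_strips_formatting():
    assert normalize_phone("(555) 123-4567") == "5551234567"

def test_drops_country_code():
    assert normalize_phone("+1 555 123 4567") == "5551234567"

def test_rejects_short_numbers():
    import pytest
    with pytest.raises(ValueError):
        normalize_phone("12345")
```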

“Over time we will start to weave this in — or integrate it — with our lives much like we have with the ATM … but this moment requires us to be intentional about it,” Townsend said. “ … This technology can come for all of us, but it doesn't have to substitute us.”

NAIAC’s next efforts will focus on education, workforce, human rights, inclusivity, international collaboration and generative AI, Vogel said. She acknowledged that the fast pace of change in this space means the White House and Congress cannot wait on the year-two report before taking next steps, and said the NAIAC will therefore meet more frequently.
Jule Pattison-Gordon is a senior staff writer for Governing and a former senior staff writer for Government Technology, where she specialized in cybersecurity. Jule previously wrote for PYMNTS and The Bay State Banner, and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.