If used to help, artificial intelligence could be equitable, a true benefit for humanity; if used to harm, it could be discriminatory and dangerous. Which way it goes depends on governments acting now to create and enforce the right safeguards, said several members of the National Artificial Intelligence Advisory Committee (NAIAC), speaking recently during a Brookings Institution panel.
This issue is one that state governments have been mulling as well, with Colorado, Connecticut and California all taking recent steps to address it. But federal action could be far-reaching, and the NAIAC has been working since May 2022 to develop recommendations for the president and Congress about actions they should take. The committee comprises 26 AI experts from across academia, civil society and private companies. It released its first year’s draft report last week.
NAIAC Chair Miriam Vogel said AI fuels many helpful services, but also presents serious risks of both deliberate and unintentional harms.
“We’re at a critical crossroads, because this tool can also be a weapon,” Vogel said during the panel. “And what's so important about this weapon is not just that it can be misused, but that it can be scaling discrimination — that lines of code and iterations of code can undo decades of progress and the perpetrator may not know it.”
Vogel is also president and CEO of EqualAI, a nonprofit aimed at reducing unconscious bias in AI.
Pressure is on to act urgently because AI technologies are evolving rapidly, becoming more powerful and more deeply infused into society.
AI is a “technology that requires immediate, significant and sustained government attention,” the report said. “The U.S. government must ensure AI-driven systems are safe and responsible, while also fueling innovation and opportunity at the public and private levels.”
TRUSTWORTHY AI
NAIAC spent the past year focused on four core ideas: trustworthy AI, research and development, workforce and opportunity, and international collaboration.
Four NAIAC members convened for the Brookings panel recently, and several pointed to a need to prevent AI use from disproportionately harming marginalized communities. Reggie Townsend is the vice president of Data Ethics Practice at the analytics software company SAS Institute. He said he got involved in ethical AI work to prevent the technology from leading to “Jim Crow 3.0.”
Among the issues: the ways that machine learning systems arrive at their conclusions are often opaque to those impacted and potentially even to those who created the systems, said Swami Sivasubramanian, vice president of Data and Machine Learning Services for Amazon Web Services (AWS). That’s especially true for large language models: “I don't think even people who build that [system] can explain to you why it generated that response,” he said.
That makes it important to raise awareness about automated systems’ limitations, and to distinguish between situations where AI use is likely innocuous — video streaming sites using such technology to recommend other films to watch, for example — and those that are too high stakes to trust to the systems alone, such as making medical diagnoses.
Fostering ethical AI can require incorporating more diverse perspectives and voices in the development and testing of the tools, said Vogel. Biases can become embedded at any point in an AI system’s life cycle, and that’s more likely to happen when the people creating the tools all share similar viewpoints.
Safeguarding AI use may not require a full set of new laws: In some cases, existing civil rights and anti-discrimination laws can be applied to respond to harms stemming from use of predictive and decision-making algorithms, but federal enforcement units need more resources to help them do so, the report said.
The report also called for establishing a federal chief responsible AI officer to oversee “implementation and advancement of trustworthy AI principles across [federal agencies],” as well as for the filling of vacant leadership positions like that of the director of the National Artificial Intelligence Initiative Office.
INTERNATIONAL PICTURE
AI technologies developed in one country get used in others, and so responsible AI governance practices “must be workable and understandable for users across the globe, operating in the wide landscape of legal jurisdictions,” per the report.
Townsend said it’s important for countries to settle on a set of international standards that are at least loosely aligned. That would be akin to international electricity standards, which ensure that devices can charge in different countries’ power sockets, even if converters are needed. Government or industry could act to push for this commonality, he said.
Joint research and development efforts are one opportunity for like-minded nations to come together and develop guardrails that support shared values, Sivasubramanian said. And outreach shouldn’t end with ideologically aligned nations, either, Townsend said. He recommended using the offer of collaboration as an “opportunity for the extension of olive branches to those with whom we don’t completely share 100 percent of our values.”
AWARENESS AND DAILY LIFE
AI is being used in ways that impact daily lives, meaning that the U.S. should not let residents remain in the dark about how the technology works. People don’t have to become AI specialists, but efforts should be made to ensure everyone obtains a general level of knowledge about the technology’s workings and how it affects them, Townsend said.
“As a society, there's a baseline of understanding that we all have to have now,” he said.
Similar comments came from Susan Gonzales, the founder and CEO of AIandYou, a nonprofit that works to keep marginalized communities informed about new technologies like AI. For example, people ought to know that when they apply for loans online, AI tools can track how often they make mistakes in the application and judge them for it.
She and Townsend called for public education and awareness-raising campaigns.
WORKFORCE IMPACT
There’s been plenty of consternation over whether these technologies are coming for people’s jobs, and large language models like ChatGPT have spread that worry to more sectors, including knowledge work, said Townsend.
“We have to be honest about this as well, which is to say, there will be displacement,” Townsend said.
But decisions made now can see the technology more often support workers and change the kinds of tasks they do, rather than fully replace them. Developers, for example, could use AI to handle rudimentary testing, freeing them up to tackle more involved or creative tasks.
“Over time we will start to weave this in — or integrate it — with our lives much like we have with the ATM … but this moment requires us to be intentional about it,” Townsend said. “ … This technology can come for all of us, but it doesn't have to substitute us.”
NAIAC’s next efforts will focus on education, workforce, human rights, inclusivity, international collaboration and generative AI, Vogel said. She acknowledged that the fast pace of change in this space means the White House and Congress cannot wait on the year two report before taking next steps, and said the NAIAC will meet more frequently as a result.