As AI gains ground in American society, the public reaction is mixed. And while the federal government has taken several steps to mitigate risks, including the creation of an AI task force, questions remain around how Congress can and should regulate these tools. In fact, AI regulation was the topic of discussion among 60 senators and tech representatives last week during a closed-door forum.
As U.S. Rep. Ted Lieu discussed during the fireside chat portion of the Brookings event Sept. 14, the many different AI tools and applications can be separated into two categories: an ocean and a lake. The ocean, he said, is full of AI tools about which the public and regulators do not need to be concerned. As an example of a harmless use case, he pointed to an AI-driven smart toaster that learns its user’s toast preferences.
The lake, on the other hand, holds the AI that could cause harm. Lieu divided those risks into three broad categories.
The first and most significant category is AI that “can destroy the world,” which includes AI used in the creation of nuclear or biological weapons. The second category is AI that may not threaten to destroy the world but could still kill one or more people; Lieu pointed to autonomous vehicles and the use of AI in transportation systems like planes and trains as possible examples.
Finally, the third category is AI that may not directly threaten lives but can still cause widespread harm. This could include the use of AI tools related to hiring or facial recognition, areas in which biased systems can create injustice on a broad scale.
Lieu believes Congress has a responsibility to implement regulatory guidance to manage these risks, but he also noted that over-regulation may carry costs of its own.
As such, Lieu has introduced a bipartisan, bicameral bill with several other members of Congress to address these concerns. The legislation, known as the National AI Commission Act, would create a national commission to focus on AI regulation. The commission would be made up of experts from civil society, government, industry and labor.
Lieu explained that there is precedent for such a commission: a similar body, the National Security Commission on Artificial Intelligence, was already created to make recommendations on the military and defense side. The commission described in Lieu’s legislation would focus instead on the civilian side.
Lieu argued that such a commission could advise Congress on how to define and understand AI, identify the technology’s potential harms, and draw on the expertise of those who know it best.
“It's very clear to me that it will be impossible for Congress to keep trying to pass individual laws on every single possible harmful AI application,” he said.
Anton Korinek, a nonresident fellow at the Brookings Institution and a professor at the University of Virginia, said he has advocated for years for the establishment of a new agency focused on AI oversight.
With a dedicated agency, Lieu explained, regulators could focus on AI oversight every day, allowing Congress to step in as needed. He also noted that he is neutral on whether to have one overarching agency, similar to the Food and Drug Administration, or multiple agencies dedicated to specific sectors of AI use; on that question, he believes the input of national experts would help inform the best approach.
While AI does have potential risks and harms, there are also many benefits, Lieu noted. This is especially clear in the medical field, where AI is unlocking increased efficiency for medical researchers, enabling what he described as “tremendous advances.”
“These innovations signal enormous potential for society: the promise of greater productivity, more dynamic and fulfilling work, and technological progress that improves lives,” said Korinek of generative AI systems like ChatGPT. “There are no easy answers, but through collaboration and open exchange of ideas, we can pave a responsible path forward.”
Korinek argued that there must be a balance between regulations that increase safety and restrictions that could stifle innovation.
During a panel discussion, Dan Hendrycks, executive director of the Center for AI Safety, explained that there is pressure to rapidly develop and deploy AI tools to stay competitive internationally. He underlined the structural risk this creates for maintaining control over rapidly advancing AI systems.
“No one’s to blame, but it’s a collective action problem fueled by AI race dynamics,” Hendrycks said, arguing that such competitive pressures need to be weighed against a more cautious approach.
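The “collective action problem” Hendrycks describes has a familiar game-theoretic shape. The minimal sketch below, written in Python, models it as a two-player prisoner’s dilemma between two hypothetical AI labs; the labels and payoff values are illustrative assumptions chosen for this sketch, not figures from the event.

```python
# Illustrative sketch of the collective action problem in AI race
# dynamics, modeled as a two-player prisoner's dilemma.
# Payoff numbers are assumptions for illustration only.

# Each lab chooses to develop "cautiously" or "rapidly".
# Payoffs are (lab_a, lab_b): racing ahead alone wins market share,
# while mutual racing erodes shared safety, leaving both labs worse
# off than mutual caution.
PAYOFFS = {
    ("cautious", "cautious"): (3, 3),  # safe, shared benefits
    ("cautious", "rapid"):    (0, 5),  # the cautious lab falls behind
    ("rapid",    "cautious"): (5, 0),
    ("rapid",    "rapid"):    (1, 1),  # race dynamics: safety lost on both sides
}

def best_response(opponent_choice: str) -> str:
    """Return the choice that maximizes a lab's own payoff,
    holding the other lab's choice fixed."""
    return max(
        ("cautious", "rapid"),
        key=lambda my_choice: PAYOFFS[(my_choice, opponent_choice)][0],
    )

if __name__ == "__main__":
    # Whatever the other lab does, racing is individually rational...
    for other in ("cautious", "rapid"):
        print(f"If the other lab is {other}, best response: {best_response(other)}")
    # ...yet mutual racing (1, 1) is worse for both than mutual caution (3, 3).
    print("Mutual rapid payoff:   ", PAYOFFS[("rapid", "rapid")])
    print("Mutual cautious payoff:", PAYOFFS[("cautious", "cautious")])
```

Running the sketch shows that “rapid” is each lab’s best response no matter what the other does, even though both would prefer the mutual-caution outcome — the structure that makes this a collective action problem rather than any single actor’s fault.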