Universities to Train AI to Outmaneuver Cyber Threats

A consortium of major universities will research AI's cybersecurity applications as part of the National Science Foundation's new AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION).

A group of universities led by the University of California at Santa Barbara (UCSB) is using $20 million in funding from the U.S. National Science Foundation (NSF) to research how artificial intelligence (AI) might detect and respond to cybersecurity breaches at scale.

According to a recent news release, the newly formed AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION) will assess the ways IT professionals can use advances in AI technology to combat cyber threats, as well as what new threats can be expected as the field continues to advance. In addition to UCSB, the effort will involve researchers from 10 other institutions: the University of California at Berkeley, Purdue University, Georgia Tech, the University of Chicago, the University of Washington, the University of Illinois Chicago, Rutgers University, Norfolk State University (NSU), the University of Illinois and the University of Virginia.

The institute, which began operations on June 1, is one of seven National Artificial Intelligence Research Institutes that received a total of $140 million from the NSF and other federal agencies to study how recent advances in AI could enhance work in sectors such as climate science, cybersecurity, education and public health.

ACTION’s principal investigator Giovanni Vigna, a computer science professor at UCSB, said in an email to Government Technology that researchers will focus on “developing intelligent security agents that can operate both autonomously and in cooperation with humans to protect computer networks.”

“While a substantial corpus of research has been developed in the field of machine learning — with generative models, like large language models, receiving a lot of attention right now — the mission of the ACTION Institute is to develop novel AI capabilities, like domain knowledge collection, logic reasoning, planning and collaboration, that can be used to create ‘intelligent’ programs, or agents, that can carry out cybersecurity tasks at the scale and speed necessary to protect our critical infrastructure from sophisticated threat actors,” he wrote. “One of the challenges in developing these new AI concepts is that they need to work in an adversarial setting, in which the opponent might change their behavior to adapt to the countermeasures. This is why autonomous behavior is important: Being able to detect changes in the strategy and tactics of a threat actor and automatically adjust the security posture of a computer network requires ‘intelligence.’”
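
To make the idea concrete, here is a purely illustrative sketch (not ACTION’s design) of the kind of detect-and-adapt loop Vigna describes: an agent watches a stream of event scores, flags a shift in attacker behavior against a rolling baseline, and escalates its defensive posture in response. Every name, value and threshold here is a hypothetical placeholder.

```python
# Toy "security agent": detect a change in threat behavior and harden the
# network's posture automatically. Illustrative only; all values are invented.
from collections import deque
from statistics import mean, stdev
import random

POSTURES = ["monitor", "rate-limit", "isolate"]

def run_agent(events, window=50, threshold=3.0):
    """Escalate posture when recent activity deviates from the baseline."""
    baseline = deque(maxlen=window)
    posture = 0
    for score in events:
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            # A sustained z-score spike suggests the attacker changed
            # tactics; respond by stepping up the defensive posture.
            if sigma > 0 and (score - mu) / sigma > threshold:
                posture = min(posture + 1, len(POSTURES) - 1)
        baseline.append(score)
    return POSTURES[posture]

if __name__ == "__main__":
    random.seed(0)
    quiet = [random.gauss(1.0, 0.2) for _ in range(200)]
    attack = [random.gauss(3.0, 0.5) for _ in range(20)]  # simulated tactic change
    print("final posture:", run_agent(quiet + attack))
```

A real agent of the sort the institute envisions would reason over far richer telemetry and coordinate with human analysts, but the loop of detecting a strategy change and adjusting posture is the core idea.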

According to David Evans, a computer science professor and one of ACTION’s co-principal investigators from the University of Virginia, adversarial nations and cyber criminals have access to powerful tools that can probe systems and develop new types of attacks to circumvent AI-based network protections. He said researchers hope to gain a deeper understanding of the AI-based cybersecurity tools available to U.S. agencies and sophisticated cyber criminals in order to develop more secure systems.

“[The availability of AI technologies] presents an arms race between the people defending systems and those attacking them, and our research goal is to understand that arms race and develop sound, principled approaches that can end the arms race by producing systems that are secure, even against creative attackers with strong capabilities,” he said. “We are also interested in how machines and humans work together, which includes developing ways to share information between a human analyst and automated system and to enable human operators to control the behavior of the automated system.”

Evans added that, luckily, “machine learning is very well-suited” to monitoring network traffic and finding patterns in large volumes of data, making AI tools essential for bolstering network security.

“We’ve seen tremendous advances in AI over the last few years. There are ways that adversaries trying to attack computer systems can take advantage of that, and there’s a lot of fear about using AI tools to automate and scale attacks that used to take a lot of manual effort,” he said. “But there are also a lot of opportunities and interest in using these advances to defend systems better and build systems that are resilient to attack.”
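
As a minimal sketch of the pattern-finding Evans describes, an unsupervised anomaly detector can be trained on ordinary traffic and then flag flows that deviate from it. The flow features and parameters below are assumptions made for illustration, not anything from the institute's research:

```python
# Unsupervised anomaly detection over synthetic network-flow features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic flows: columns = [bytes sent, duration (s), distinct ports touched]
normal = rng.normal(loc=[5_000, 2.0, 3], scale=[1_500, 0.5, 1], size=(1_000, 3))
scans = rng.normal(loc=[300, 0.1, 60], scale=[100, 0.05, 10], size=(20, 3))

# Fit on benign traffic only; predict() returns -1 for anomalies, 1 for normal.
model = IsolationForest(contamination=0.02, random_state=0).fit(normal)
flagged = model.predict(scans)
print(f"flagged {np.sum(flagged == -1)} of {len(scans)} scan-like flows")
```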

ACTION’s research will combine resources from the institutions involved, such as NSU’s Cybersecurity Complex, which is equipped with two data centers and a high-performance computing test bed. According to a joint email from NSU co-principal investigators Mary Ann Hoppa, a computer science professor, and Tonya Fields, a cybersecurity researcher and operations manager for NSU’s Cybersecurity Complex, researchers there hope to play a key role in creating AI-enabled intelligent security agents that are trained to reason and learn over time about cyber threats, attacks, defenses and new vulnerabilities.

“These agents will cooperate with each other and with humans to jointly improve the security posture of complex computer systems. Other areas where NSU may contribute include how to represent the knowledge needed and created by these agents as they learn and reason, and potential uses for game theory to help with agent training,” the email read. “ACTION’s multiyear research collaboration, involving a large group of top researchers across multiple institutions and disciplines, will revolutionize the ways AI can be used to protect digital assets including the nation’s critical infrastructure.”
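
The game-theoretic angle NSU mentions can be illustrated with a toy two-player, zero-sum game between an attacker and a defender; the payoff numbers below are invented purely for illustration. A minimax defender picks the posture whose worst-case damage, against an attacker's best response, is smallest:

```python
# Toy attacker-defender game: pure-strategy minimax over an invented payoff matrix.
import numpy as np

# Rows = defender postures, columns = attacker strategies.
# Entries = damage to the defender (the defender minimizes the worst case).
damage = np.array([
    [2, 9, 4],   # patch aggressively
    [6, 1, 7],   # segment the network
    [5, 5, 3],   # deploy deception/honeypots
])

worst_case = damage.max(axis=1)   # attacker's best response to each posture
best = int(worst_case.argmin())   # minimax choice
print(f"minimax posture: row {best}, guaranteed damage <= {worst_case[best]}")
```

Training agents in this setting would involve mixed strategies and repeated play rather than a single fixed matrix, but worst-case reasoning of this kind is the basic building block.
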
Brandon Paykamian is a staff writer for Government Technology. He has a bachelor's degree in journalism from East Tennessee State University and years of experience as a multimedia reporter, mainly focusing on public education and higher ed.