Federal Task Force to Identify Implications of Evolving AI

The multiagency group will facilitate the research and testing of advanced artificial intelligence models in vital areas of national security and public safety. Its membership is expected to expand.

Advanced AI has revolutionized data analysis and threat detection, but it’s also opened the door to new dangers. Now, a federal agency is helping confront these real and potential threats.

The U.S. Artificial Intelligence Safety Institute, part of the National Institute of Standards and Technology (NIST), recently introduced the Testing Risks of AI for National Security (TRAINS) Taskforce, charged with identifying and managing the emergent “national security and public safety implications of rapidly evolving AI technology.”

The TRAINS Taskforce will be responsible for researching and testing AI models, focusing on several key areas of national security and public safety, including cybersecurity; critical infrastructure; radiological, chemical and biological security; and conventional military capabilities.

“Every corner of the country is impacted by the rapid progress in AI, which is why establishing the TRAINS Taskforce is such an important step to unite our federal resources,” U.S. Secretary of Commerce Gina Raimondo said in a news release.

The TRAINS Taskforce will draw expertise from several federal agencies, including the Chief Digital and Artificial Intelligence Office and the National Security Agency at the Department of Defense; the Cybersecurity and Infrastructure Security Agency at the Department of Homeland Security; and the National Institutes of Health (NIH) at the Department of Health and Human Services. Its membership is expected to expand across the federal government. The group will be chaired by the U.S. AI Safety Institute.

The task force's objectives include creating new methods and benchmarks for evaluating AI technologies, and conducting joint national security risk assessments and red-teaming exercises. These simulations will help identify weaknesses in AI systems before they can be exploited.

“Enabling safe, secure, and trustworthy AI innovation is not just an economic priority — it's a public safety and national security imperative,” Raimondo said.