Last week, U.S. President Joe Biden signed a broad executive order on AI that has drawn mixed reactions from experts. Since then, the Office of Management and Budget has released draft implementation guidance for the order.
NIST’s announcement builds on the federal government’s efforts to ensure AI advances responsibly. Participants in the new consortium will support the development of evaluation methods for AI systems.
The consortium is part of NIST’s new U.S. AI Safety Institute (USAISI), which was announced at the U.K.’s AI Safety Summit 2023.
To help advance the responsible use of AI, the consortium will act as a convening space where experts can share information and insights, and it will support collaborative research and discussion through shared projects. The goal is for the consortium’s work to inform future measurements of the safety and effectiveness of AI systems.
At the beginning of this year, NIST released a voluntary AI Risk Management Framework, which aims to help organizations manage the risks of using AI. The president’s executive order tasks NIST with developing a companion resource to that framework.
That resource will focus on generative AI and offer guidance for authenticating content created by humans. It will also include a new initiative to create guidance for evaluating and auditing AI capabilities, as well as the creation of test environments for AI systems. NIST will engage with industry experts and other stakeholders toward this end.
“Participation in the consortium is open to all organizations interested in AI safety that can contribute through combinations of expertise, products, data and models,” said Jacob Taylor, NIST’s senior adviser for critical and emerging technologies, in the announcement.
The agency is soliciting responses from organizations with the expertise and capabilities to enter into a consortium cooperative research and development agreement.
Members would be expected to contribute expertise in one or more specific subject areas: AI metrology, responsible AI, AI system design and development, human-AI teaming and interaction, socio-technical methodologies, AI explainability and interpretability, and economic analysis.
Members would also be expected to contribute models, data or products to support pathways toward safe AI systems under the AI Risk Management Framework; infrastructure support for consortium projects; and facility space for hosting consortium researchers, workshops and conferences.
NIST will hold a workshop on Nov. 17 for those interested in learning more about the consortium or engaging in the national conversation on AI innovation and safety.
Organizations with the relevant expertise and capabilities interested in participating in the consortium should submit a letter of interest by Dec. 2.
“We want the U.S. AI Safety Institute to be highly interactive because the technology is emerging so quickly, and the consortium can help ensure that the community’s approach to safety evolves alongside,” Taylor said.