
As Local Governments Have Done, Fed Stays Open on AI

Like many cities and states, the National Telecommunications and Information Administration recommends the government embrace openness in artificial intelligence, while calling for risk monitoring of the largest AI models.

New federal actions advance work toward a comprehensive artificial intelligence (AI) policy, but the federal governance process is proving slower than that of state and local governments.

Many state and local governments have implemented their own policies and guidelines to address AI, in part due to a lack of such policy at the federal level. The federal government has, in fact, taken action on AI through executive order, but industry reactions have been mixed.

Last week, the White House announced progress on the work laid out in the executive order: Federal agencies have completed all required 270-day actions on schedule, work with longer timelines is moving forward, and the AI Safety Institute is seeking public comment on technical guidelines for AI developers.

And, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) has released policy recommendations that embrace open models for AI — something it was directed to do in the executive order.

NTIA’s Report on Dual-Use Foundation Models with Widely Available Model Weights urges the U.S. government not to restrict the wide availability of open model weights in the largest AI systems at this time, and instead to develop new capabilities to monitor for risks.

“Open-weight models,” according to a news release Tuesday, enable developers to build on existing work. This makes AI tools and their development more widely available to smaller companies, researchers and even individuals.

The report calls for the U.S. government to develop an ongoing program to collect and evaluate further evidence on risks and benefits.

Such evidence, the report said, should include research into the safety of AI models, and into the present and future capabilities of dual-use foundation models. It also recommends using this evidence to develop risk-specific indicators and benchmarks for when action should be taken. Such action, the report said, could include restricting access to models or other risk-mitigation measures.

A specific focus on open models is uncommon in AI policy at the state and local government level. In the private sector, however, IBM has advocated for an open source approach to AI advancement.

And, notably, states like Indiana have implemented AI policies that can evolve as needs change. Local governments like Seattle and Boston, too, have set short-term policies that position the cities to evolve as AI technology capabilities do.

Also this week at the federal level, a bipartisan bill setting guardrails for AI procurement advanced in the Senate. The legislation would require agencies to assess and address AI risks before buying the technology.

“The bipartisan PREPARED for AI Act lays a strong foundation by codifying transparency, risk evaluation and other safeguards that will help agencies make smarter and more informed procurement decisions,” Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, said in a statement.

The PREPARED for AI Act closely resembles California’s GenAI Guidelines for Public Sector Procurement, Uses and Training. These guidelines include mandates on the purchase and implementation of GenAI tools across state government, including continuous monitoring and annual review.