
The Impacts of a Fragmented AI Legislative Landscape

In the absence of comprehensive federal legislation on artificial intelligence, states have taken policymaking into their own hands, producing a varied legislative landscape. Within each state, however, those policies can clarify the rules of the road.

Differing state-level policies on artificial intelligence (AI) make for a complex legislative landscape to navigate, but they can also create opportunity, according to experts.

The federal government has not yet implemented comprehensive policy to address the development and use of AI, leaving that responsibility largely to states. Many pieces of AI-related legislation are under consideration in Congress, but states are not waiting to enact AI governance.

For example, Nevada’s 2025 legislative session will include a focus on AI technology, as Nevada CIO Timothy Galluzi told Government Technology.

“Right now, we are at the beginning of our legislative session, so we are actively tracking multiple [bills] from our hardworking lawmakers regarding AI — and generative AI, specifically,” Galluzi said.

Nevada released an AI policy in November to guide the technology’s use in the state. Many other states have now enacted their own policies targeting AI, including California, Indiana, New Jersey and Ohio.

“While we see differences between states, when a state adopts an AI use policy or legislation on public-sector AI, this creates a lot of clarity and clear rules of the road for every public agency within the state and for every government service provider,” said Quinn Anex-Ries, senior policy analyst on the Center for Democracy and Technology (CDT)’s Equity in Civic Technology team.

Having varied policies at the state level creates a learning opportunity through which state and local entities can work together to share knowledge and best practices, Anex-Ries said, citing the GovAI Coalition as one such collaborative model.

Some state-level policies, like that of Arizona, are malleable by design to support the rapid evolution of AI. This approach can be advantageous in many ways, Anex-Ries said. Tech-agnostic policies allow agencies to be agile as AI technologies change.

On the government business side, states already have procurement rules and regulations in place, and these often apply to AI as written, Anex-Ries said. In some cases, additional requirements may supplement an existing framework.

“Government service providers have already figured out how to navigate varied state landscapes,” Anex-Ries said. He argued that the rise of state-level AI policies will improve the procurement process — both for public agencies and private companies — because expectations will be clear from the outset.

Aaron Poynton, chair of the board of directors for the American Society for AI, cautioned that while AI presents a lot of opportunity, statehouses approaching policy individually can create a fragmented patchwork of legislation. This can push companies to leave a state if they do not like its policies, he said. Poynton said he believes there is an informal competition among states to attract AI startup companies through incentives, philosophies and principles.

FEDERAL POLICY’S IMPACT ON STATES

At the federal level, a January executive order revoked the October 2023 executive order on AI in an effort to position the United States as a leader in the AI sector. This federal action has the potential to impact state governments.

“I think what we’ve already seen is that states are learning from different approaches at the federal level,” Anex-Ries said, referencing memos from the Office of Management and Budget, federal-level agency approaches and federal AI inventories.

State-level policy work thus far has largely called for AI studies and the creation of AI inventories, the latter of which Anex-Ries indicated may be a result of the success and bipartisan support of federal-level AI inventories. The creation of such resources lays the groundwork for state lawmakers to take informed actions based on the current use cases of AI in their state, he noted.

Tara Wisniewski, executive vice president of advocacy, global markets and member engagement at ISC2, pointed out that ISC2 member organizations are seeking further guidance on AI use.

“One of the things that we advocate for on a regular basis is for policymakers to understand and appreciate the need for harmonization of regulation,” Wisniewski said, indicating that the absence of such harmonization can create a complex compliance landscape from a cybersecurity perspective. This challenge is more likely to impact smaller businesses, which often lack the robust cyber teams available to larger corporations, she said.

In the 2025 legislative session, Anex-Ries said he expects states to continue recognizing AI risks in the public sector and crafting legislative proposals for government; this may include more inventory requirements, risk management guidance, and the appointment of staff members to coordinate AI activities.

CDT will continue to release analysis as state lawmakers advance AI legislation. The work to implement effective AI governance will go on, he said: “It’s the start of a process for states and locals, and there’s a lot of learning opportunity and collaboration in the future that’s going to require lawmakers, civil society, academics, to continue digging in as we see the results of these new policies.”
Julia Edinger is a staff writer for Government Technology. She has a bachelor's degree in English from the University of Toledo and has since worked in publishing and media. She's currently located in Southern California.