That’s the question many government leaders are tasked with answering, and so far there doesn’t seem to be a breakout tactic among states or local governments.
A Government Technology analysis of state and local government strategies toward AI revealed a few trends.
GOVERNORS ARE USING EXECUTIVE POWERS TO FORCE AI POLICY
Since August, governors in California, Virginia, Wisconsin, Oklahoma, Pennsylvania and New Jersey have issued executive orders centered on exploring AI.
Bypassing legislators is a move usually reserved for public health emergencies or disasters. In this case, however, most governors have used their executive powers to mandate that the state form a task force to harness AI technology and develop recommendations for its ethical use.
LOCAL GOVERNMENTS ARE MAKING THEIR OWN AI RULES
The governments of Seattle, New York City, San Jose, Calif., and Santa Cruz County have all issued independent policies or guidelines for how their employees should use AI on the job.
These frameworks center on responsible use of AI: avoiding the sharing of sensitive information and the introduction of risks that could jeopardize government operations or cause unintended harm to constituents.
Most of the agencies that enacted their own policies are in states that had not yet created statewide mandates or guidelines at the time.
SOME AGENCIES ARE TAKING A CONSERVATIVE APPROACH TO AI
While many states have created task forces and research groups to study AI and expand its ethical use in government functions, at least one is taking a “wait and see” approach that bars employees from experimenting with AI on the job.
In June, Maine Information Technology (MaineIT) directed all executive branch state agencies not to use generative AI on any device connected to the state’s network for at least six months. The ban exempts chatbot technology already approved for use by MaineIT and instead targets ChatGPT and any other software that generates images, music, computer code, voice simulation and art.
According to the moratorium, “This will allow for a holistic risk assessment to be conducted, as well as the development of policies and responsible frameworks governing the potential use of this technology.”
North Dakota was one of the first states to pass AI-related legislation at the start of the year, but its law differs from what other states have experimented with since then: the emergency measure stipulates that AI is not a person.
A handful of states have attempted to introduce new laws governing government agencies’ use of AI but have yet to finalize and enact them. Several bills that would have created AI task forces or research groups didn’t make it much further than their initial introduction in the legislature.