ChatGPT, Generative AI Gets 6-Month Ban in Maine Government

As of June 21, Maine’s executive branch entities are barred from using generative AI. This moratorium is intended to give the state time to research and evaluate risks posed by the technology.

Governments are wrestling with how to approach game-changing novel technologies like generative AI and control for risks that many are still working to fully understand.

For Maine’s IT department, the answer is an “at least” six-month ban on state employees using the tools.

That ban went into effect last Wednesday, June 21, and prohibits any executive branch entities from using or adopting generative AI on devices that connect to the state network or for any state business. Entities can request an exemption by filing a waiver request, per the new directive.

State CISO Nate Willigar said generative AI is particularly worrying.

“It can deliver information independently of structured input, meaning it can produce uncontrolled results, which can lead to potential regulatory, legal, privacy, financial and reputational risks,” Willigar said in an email.

The temporary ban is intended to give the state time to more thoroughly assess the concerns posed by generative AI, including threats of misinformation, bias and privacy, and cybersecurity challenges.

There hasn’t yet been enough research into the risks for the state to decide where or how generative AI could be used, Willigar said.

“Numerous stakeholders need to be engaged for an informed decision to be made. Many stakeholders have not examined the security and privacy risks associated with the use of generative AI in state government,” he wrote.

The directive detailed several key generative AI risks: “These systems lack transparency in their design, raising significant data privacy and security concerns,” it said. “Their use often involves the intentional or inadvertent collection and/or dissemination of business or personal data. In addition, generative AI technologies are known to have concerning and exploitable security weaknesses, including the ability to generate credible-seeming misinformation, disseminate malware, and execute sophisticated phishing techniques.”

Willigar didn’t respond to a question about how extensively the impacted executive branch entities had been using generative AI tools, if at all, prior to the ban.

According to the directive, executive branch organizations can continue using chatbots that have already been approved by MaineIT, but are prohibited from using “large language models that generate text like ChatGPT, as well as software that generates images, music, computer code, voice simulation, and art.”

The state is “very early” in its investigation into the risks. Thus far it has largely been using existing research from vendors with which it has established relationships, Willigar said. Later, the state may look to other outside entities for further support and take insights from any federal or state regulatory frameworks that emerge.

STATES TEST OUT CHATGPT


Maine’s approach isn’t a universal one, and some peers have been making use of generative AI tools.

Massachusetts CIO Jason Snyder told GovTech in May that the state had “implemented ChatGPT on a relatively small scale,” and with controls. That primarily included using the tool’s language capabilities for natural-language queries and for translating content into different languages.

While the state hadn’t used ChatGPT to generate new content, Snyder said he saw “real value” in leveraging it to produce routine content for HR and finance. “But we’re not there yet,” he said.

Vermont also makes some use of ChatGPT. Agency of Digital Services deputy secretary Denise Reilly-Hughes will become interim state CIO in July, and she told GovTech that the state currently uses ChatGPT within certain guardrails intended to protect privacy and security. That use has largely been internal, although generative AI does support some bots on public-facing state websites that direct users to resources.

“We have recommended standards around the use of ChatGPT and AI models such as that. We’re embracing the use with those guardrails,” Reilly-Hughes said. “We want to make sure that they are used in ways that will allow people to be better informed, but also not rely on it for confidential information. … It is an evolving conversation.”

TWO CITIES PICK GUIDELINES OVER BANS


Several cities have eschewed flat-out bans like Maine’s and instead issued interim guidelines and policies allowing employees to use the tools while advising them on precautions to take.

Boston released interim guidelines in May 2023, which encourage staff to “embrace the spirit of responsible experimentation,” CIO Santiago Garces told GovTech. Similarly, in April 2023 Seattle issued its own interim policy.

“[The policy says,] ‘Go ahead and use this stuff, but here are the ways in which you need to use it cautiously, carefully and responsibly,’” the city’s interim CTO Jim Loter previously told GovTech.

Both approaches are intended to be replaced once the cities have time to finish crafting more long-lasting policies.

FEDERAL PICTURE


Federal legislators are also sorting out their approach, with Sens. Ed Markey and Gary Peters last Thursday asking the Government Accountability Office to assess potential harms posed by the tools and strategies for mitigating them.

On Monday, House Chief Administrative Officer Catherine Szpindor told staff to avoid using any large language models for work purposes, with one exception: the paid, $20-a-month version of ChatGPT called ChatGPT Plus, which she said has better privacy features, Axios reports. Per the notice sent to staff, they are only allowed to use this tool for research and evaluation — not for “regular workflow” — and only with non-sensitive data. They also must enable the privacy settings.
Jule Pattison-Gordon is a senior staff writer for Governing and former senior staff writer for Government Technology, where she specialized in cybersecurity. Jule also previously wrote for PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.