
Want Gov Tech Innovation? Consider the Legal Issues Too

At the recent NASCIO 2024 Annual Conference, CIOs discussed the legal concerns that will help guide the development of AI and other technologies. Freedom of Information issues around public data are at the forefront.

Katy Ruckle, Washington state chief privacy officer, explains at the NASCIO 2024 Annual Conference how AI prompts and chatbot conversations, like meeting records, could be subject to public information requests. (Photo: Government Technology/David Kidd)
Every innovation and change births new questions about laws and legal liability — that’s certainly the case when it comes to government technology, and especially with AI.

That was one of the messages spread earlier this week at the National Association of State Chief Information Officers (NASCIO) 2024 Annual Conference in New Orleans.

The coming year promises to bring even more activity around generative artificial intelligence (GenAI) and other forms of AI, as public agencies embrace ChatGPT and other tools. But officials responsible for the use of AI would do well to consider their legal risks and obligations, according to several speakers at the event.

For starters, those concerns should include public records, said Katy Ruckle, chief privacy officer for the state of Washington, during a session devoted to the legal implications of AI in the public sector.

“A public record is not just a piece of paper,” she said, “but anything owned, used or retained under some laws.”

That can include the prompts, inputs and outputs employed by ChatGPT, along with chatbot conversations and meeting records.

Though state public record laws can vary widely, it is unlikely that work related to AI will fall outside of those regulations — and that means officials and other state employees need to take care about how they approach AI.

For instance, Ruckle advised attendees to “keep it professional” when typing in team chats, as they could eventually see the light of day via a Freedom of Information request filed by a resident.

As AI tools move into the mainstream in the public sector and elsewhere, legal issues are becoming a greater concern. Privacy, bias, data protection and ethical use of AI are the main issues under discussion.

Those are not the only worries about AI that promise to grow with time, even as use accelerates. More than one state CIO at the conference talked about the challenge of harnessing the massive power required to run AI, a point brought home recently by news that Microsoft wants to reopen the Three Mile Island nuclear power plant to power an AI data center.

And as AI spreads its wings, public-sector technology leaders face legal pressures to make their digital offerings more accessible to the wider population, said Joshua Jones, program lead for the Virginia IT Agency’s website modernization effort.

Public agencies face fresh and increasing pressure to widen digital and mobile access for the 1 in 4 adults in the U.S., more than 70 million people, who have disabilities.

“Accessibility matters to everyone,” Jones said, noting that includes people who are elderly and may simply need larger text or better voice features on websites.

Manual testing of government sites and a push for vendor compliance are among the main ways to ensure those digital properties conform to the Americans with Disabilities Act and other standards.

“This is the cost of doing business,” Jones said during his own panel. “If you are not meeting the needs of 1 in 4 [people], is your solution working? I’d say it’s not.”
Thad Rueter writes about the business of government technology. He covered local and state governments for newspapers in the Chicago area and Florida, as well as e-commerce, digital payments and related topics for various publications. He lives in Wisconsin.