The city is actively encouraging its staff to test out the tools, while taking precautions. It announced the move in a May 18 email and interim guidelines sent to employees. Boston may be one of the first cities to take such an approach.
Boston CIO Santiago Garces told Government Technology that he wanted the city to “embrace the spirit of responsible experimentation.”
Rather than wait, he said the city should start learning about the potential benefits and risks.
“Whenever there’s an opportunity of delivering government services better, I think that it is our obligation to also learn about it, and if there’s risks, understand those risks,” Garces said. That could only happen if the city first established a framework guiding safe exploration.
Plus, a lack of guidance or an official stance doesn’t mean employees will necessarily hold off on engaging with the tools.
“We started to think that there’s a number of people that were probably already using it either for work [or in personal lives] and we figured it was better for us to be ahead and provide guidelines, rather than wait and ignore the fact that there’s this kind of revolutionary technology that was at the disposal of a broad set of people,” Garces said.
Boston isn’t the only city thinking this way. Seattle interim CTO Jim Loter sent an interim generative AI policy to city staff in April.
That document takes a more cautious tone than Boston’s: it does not recommend specific use cases, focusing instead on outlining concerns and ways to reduce risks.
“[Boston’s] provides more guidelines and direction for staff when they are using these technologies and experimenting with the technologies,” Loter told GovTech. “The risks and the considerations in the Boston policy are very, very similar to ours. So, I think we each independently cogitated on this and came up with the same list of concerns.”
Seattle’s interim policy lasts through October, after which it will need to be extended or replaced. But the technology’s quick uptake made it important to provide some guidance now, rather than wait until a more permanent policymaking process could be completed. At the same time, Seattle is forming an advisory team to help develop a more formal, long-lasting policy.
“We’ve seen the generative AI technologies like ChatGPT and other tools just achieve such rapid adoption over such a short amount of time that it felt like the responsible thing to do to address it head on, very quickly assess risks, consider the opportunities and offer direction to city employees,” Loter said. “[The interim policy says,] ‘Go ahead and use this stuff, but here are the ways in which you need to use it cautiously, carefully and responsibly.’”
BOSTON SEES EFFICIENCY, EQUITY RISKS & BENEFITS
Boston’s guidelines say generative AI could help with tasks like summarizing documents, drafting job descriptions and translating materials into other languages. Such tools can help staff produce drafts with clear, simple phrasing and can be instructed to write materials tailored to different reading levels.
But the document also advises staff to use their best judgment and take responsibility for correcting the technology’s mistakes. The guidelines warn that generative AI can produce incorrect, biased or offensive results, and that it cannot be expected to keep information shared with it private. Staff should also be transparent about their AI use, disclosing when they’ve used generative AI tools, which model and version they used and who edited the output.
“Think about how racial and ethnic minorities, women, non-binary, people with disabilities or others could be portrayed or impacted by the content,” the guidelines note.
Alongside such warnings, Boston’s guide also suggests that staff could use the tools to consider a topic from different perspectives. Users can ask a generative AI system to respond from a particular point of view, which might help them see an issue in a new light. As an example, Garces said he tried asking for John Adams’ perspective on immigration reform.
“We think that the tool can also help people be considerate of certain groups,” Garces said. “It doesn’t replace community engagement, but it is a very low-cost, quick way of getting a different perspective when we’re trying to get an idea around reactions.”
The guidelines are an early-stage effort and should ultimately be replaced by firmer policies and standards, per the document. And while the city broadly recommends that staff experiment, it advised the public school system to hold off and wait for more tailored guidance.
One reason for caution is that the city’s guidelines expect users to vet the accuracy of generative AI’s output using their professional judgment, but school kids are still developing such expertise, Garces said. Still, the city will need to address this context, because “the reality is that these kids are going to grow up in an environment where these tools are going to be available.”
A CODING AID?
Boston’s Garces envisions generative AI as a timesaver for IT staff. For example, he said it could help developers translate code into programming languages with which they have less familiarity, by recommending code snippets.
“One of the things that we struggle [with] in government technology is, usually we have a few employees that are supposed to know a lot about a number of different languages. Here, the tool can help you translate code from one language into another,” Garces said.
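As a rough illustration of the workflow Garces describes, the sketch below uses OpenAI’s Python SDK to ask a chat model to translate a small Python function into Go. The model name, prompt wording and code snippet are hypothetical stand-ins rather than Boston’s actual tooling, and, as Garces notes, the output would still need review by someone who understands both languages.

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY in the environment. The model, prompt and snippet
# are illustrative only, not any city's actual tooling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A small utility written in a language the team knows well...
python_snippet = '''
def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
'''

# ...handed to the model with a request to translate it into a language
# the team knows less well. The result still needs human review for
# correctness and security before it goes anywhere near production.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any capable chat model works
    messages=[
        {"role": "system", "content": "You translate code between programming languages."},
        {"role": "user", "content": f"Translate this Python function into idiomatic Go:\n{python_snippet}"},
    ],
)
print(response.choices[0].message.content)
```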
The technology could also help developers think through their higher-level strategy for tackling a problem by offering pseudocode outlining the steps to take, Garces said.
“You would typically go and have to read a number of textbooks, or you’d have to have a lot of experience. And here you’re kind of getting this average result of tons of information around how you would approach a problem,” he explained.
But this doesn’t mean just anyone can pull up ChatGPT or Bard and start coding.
“This is where having some expertise and some backgrounds in the actual subject matter is helpful, so that you are aware of the usefulness of what’s being created,” Garces said. Plus, staff need to understand enough to vet the code for any potential privacy or security risks.
Loter pointed to several security challenges in applying generative AI to coding. For one, generative AI tools don’t show how they developed their code snippets, which makes the code harder to vet, he said.
“It’s one thing to go to Stack Overflow and browse the forums and grab code snippets that are obviously written by people — people that you can reach out to, people you can talk to and ask questions about the code,” Loter said. “[But] to just plug something into a bot, and have it spit out code, I think we’re concerned about the ability to track the provenance of that code.”
Theoretically, malicious actors could exploit this lack of transparency by feeding false information into the data a generative AI system draws on, with the goal of distorting the code snippets it produces.
“We’re concerned about the ways in which the very source data that the technology is relying on could be skewed. …You can do a lot of damage by injecting malicious stuff into computer code, much more so than you can by injecting it into something that’s intended for a report, just because computer code has a tendency to propagate,” Loter said.
SEATTLE, KING COUNTY EYE BETTER SEARCH
Seattle is currently focused on simpler uses of the technology. There’s particular interest in automating certain routine internal processes and in using natural language search to give government personnel and the public quicker, more accurate search results, Loter said.
Keyword-based search tools can fail to narrow down results effectively, returning thousands of potentially relevant documents, Loter said. Generative AI-powered search tools, by contrast, can often parse questions written the way people would speak them and give more precise, accurate results.
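A minimal sketch of the retrieval step behind that kind of natural language search, assuming the open source sentence-transformers library; the documents and question here are invented for illustration, not drawn from Seattle’s systems. In a full system, a generative layer would then compose an answer from the top-ranked results.

```python
# A minimal sketch, assuming sentence-transformers
# (pip install sentence-transformers). Documents and query are invented;
# this shows only the retrieval step a generative answer layer would sit on.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# A tiny stand-in for a city's pages of service information.
documents = [
    "Apply for a residential parking permit online or by mail.",
    "Schedule a curbside pickup for bulky items such as furniture.",
    "Report a pothole or damaged sidewalk to the transportation department.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

# A question phrased the way a resident would actually ask it.
query = "How do I get rid of an old couch?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank by semantic similarity rather than keyword overlap: "couch" matches
# the bulky-item pickup page even though the word never appears in it.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = int(scores.argmax())
print(documents[best])
```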
That’s a use case that’s interesting to King County, Wash., too, where CIO Megan Clarke said the technology could potentially be used to better field residents’ online queries, provide more personalized experiences and reduce administrative burdens on staff.
“Somebody could start typing, ‘I am building a garage and I am not sure what I am supposed to do.’ Generative AI can give you a more detailed response of, ‘Oh, you need a permit, and you need to go here to get it, and here’s how much it will cost,’” Clarke said.
For Seattle’s IT department, one internal use case looms large: answering finance-related questions. The department spends considerable time spinning up new data dashboards and analytics to help provide answers each time a different stakeholder has a new question. Instead, providing stakeholders with a generative AI-powered search tool that can search through the data and provide answers might save the department time while still meeting stakeholders’ needs.
But first, more testing is needed to ensure the tools’ answers will be accurate, Loter said. The city wants to find ways to help every employee fact-check generative AI’s answers, not just employees with specialized knowledge. Seattle is trying to come up with scalable approaches and determine the trainings, guidelines or other methods necessary.
While staff test out the tools under the interim policy, Seattle is gathering an array of perspectives to help inform a more permanent policy. University of Washington academics and community members represented by the city’s Community Technology Advisory Board will help advise on a longer-lasting policy, expected out later this year. The group will discuss guidelines and training, as well as broadly acceptable or unacceptable use cases and overarching principles. Establishing principles will help keep the policy relevant as the technology’s use evolves and new use cases emerge.
TRANSPARENCY & FOURTH-PARTY RISKS
Generative AI systems are particularly tricky to vet because governments often cannot see how their algorithms work or all the data they draw on, Loter said.
As the city considers risks and its ability to minimize them, it needs to focus on its vendor relationships. Governments often use enterprise software from major providers, some of which are now starting to license tools from generative AI companies and infuse those into their software suites. That means that if government employees keep using the tools they’re used to, they’ll now be interacting with this emerging technology, Loter said.
When licensing software from a third-party provider, government customers have leverage to hold that company accountable for how its software works, Loter said. But governments lack a direct legal relationship with the fourth-party AI tool providers serving those software companies, and thus have little sway over them.
“We have to rely even more on our partners — on the big tech companies, on the big productivity application suite makers. … We’re relying on them to enforce a particular code of ethics or standards on to their partners who are providing this AI technology,” Loter said. “That introduces another set of concerns. What if OpenAI decides to go in a particular way with their technology that runs counter to the city’s interests and we are heavily invested in a suite of applications in which the OpenAI technology is becoming increasingly interwoven?”
THE COUNTY SCENE
Some counties are also taking a look at generative AI. In an early May conversation with GovTech, National Association of Counties (NACo) CIO Rita Reynolds said generative AI is too useful for government to give up, but that staff should adopt safeguards when engaging with it. NACo has published advice about use cases and precautions, including warnings against using the tool in place of critical thinking.
NACo recently announced an AI Exploratory Committee, which will work to identify possible benefits and risks for county government and, among other things, “develop a preliminary policy and practice toolkit with sample guidelines and standards for AI.”
In King County, Clarke sees potential for the technology to help, such as letting residents search for information more easily, but isn’t ready to start applying it just yet. The county doesn’t currently have formal guidelines or prohibitions around generative AI use, although Clarke said, “If someone were to say to me, ‘Hey Megan, what do you think?’ I would say, ‘Don't use it yet.’” She’s not aware of staff currently using it.
Internal conversations are just starting up, and Clarke said she wants to bring all county agencies together to discuss potential beneficial uses, risks and necessary regulations.
“There needs to be regulation and control in place. … But, like any technology, I think there are some tremendous applications of generative AI, and especially in government, and, for me, first and foremost, citizen experience,” Clarke said. “… I don’t want us to say, ‘Oh, let’s be scared of the new technology.’ Absolutely not. But let’s see where it makes most sense, and where we can be the most responsible in its use.”