- Implement AI use policies to minimize risks. These policies define practices for deploying AI tools, ensuring data privacy and security, and identifying and mitigating disparate impacts the technologies may have on different users or communities.
- Update procurement rules to require responsible vendors. Look for AI solutions from vendors that adopt the NIST AI Risk Management Framework (AI RMF) as part of their development processes.
- Encourage responsible innovation. Forward-thinking governments are implementing sandboxes to allow pilots and testing of new AI capabilities in compliance with use policies and other safeguards.
- Accelerate cloud migrations. Modern AI capabilities would not exist without cloud computing, and benefiting from AI likewise requires data to live in the cloud. Moving to the cloud also provides access to ongoing technology innovation, lower costs and stronger cybersecurity.
- Upskill the workforce. Agencies will need to train employees on how to use AI technology effectively and responsibly.
Overall, the goal is to create a strategy that embraces the positive impact of AI while setting up the necessary guardrails, says Microsoft Government Strategy Leader Vidhu Shekar.1
“How do we implement and build safety frameworks? And what are the safety brakes?” Shekar says.
As governments assess AI tools, they must understand the difference between what’s available on public AI platforms versus what capabilities might be baked into the solutions agencies already use.
Organizations that use Microsoft’s suite of tools, such as Microsoft 365 and Azure cloud services, already have access to some of the most widely used generative AI capabilities. And because these solutions are walled off from the public internet, agencies can use them safely and securely, without compromising sensitive public data.
“Your data is your data in the Microsoft cloud,” Shekar says. “It is not used to train any of the public foundational models. And it remains protected by the comprehensive enterprise-grade compliance and security that we have at Azure.”
Start Exploring Generative AI
Generative AI has transformative potential for governments.
- Constituent experience. Generative AI can create a next-generation constituent experience by leveraging vast amounts of unstructured or structured government data. “Imagine having a ChatGPT-like experience for your state, for your city, for your county, with your data,” says Keith Bauer, Microsoft’s national data and AI leader for state and local government. A resident looking for information on parking, for example, could access that information in an easy, intuitive way, without having to search for it.
- Employee experience. Generative AI can optimize workforce productivity. Rather than navigating complex internal sites or documents to retrieve information, employees can use generative AI to simply ask a question and receive a cogent and comprehensive response. “Knowledge workers who might need to create their next emergency response plan, or who need to establish a priority for appeals cases, or analyze documents to detect them for fraud — there’s a ton of internal generative AI use cases,” Bauer says.
Most agencies are just trying to understand how to get started, Bauer says.
“I get asked this question a lot: ‘How do I use this? I don’t have everything set up yet in my environment. How do I start exploring generative AI?’”
One immediate way to start using generative AI, Bauer says, is Bing Chat, an interactive AI component of Microsoft’s Bing search engine. Additionally, employees using eligible Microsoft 365 licenses can leverage Bing Chat Enterprise, which lets them use generative AI while protecting their organization’s data.
For example, someone might use Bing Chat Enterprise to help craft an agency-wide email. Of course, Bauer notes, those kinds of AI-generated messages should always be reviewed by a human before being distributed. “Just like all generative AI, it’s meant to be your assistant, your copilot,” he says. “It gives you that starting point, that template to improve the productivity for whatever you’re trying to achieve.”
In other words, he says, “Think of Bing Chat as your copilot for the Web.”
In fact, Microsoft has incorporated that “copilot experience” into all its most relied-upon tools, including Microsoft 365, Dynamics 365, Windows, Power Platform, GitHub, Power BI, Fabric and Security solutions.
“It creates an easy, intuitive experience by having generative AI built into all those offerings,” Bauer says.
Some agencies may be ready to begin tailoring custom generative AI solutions for their organizations. Microsoft’s Azure OpenAI Studio lets agencies build and fine-tune their own models to create unique generative AI experiences. Using “retrieval augmented generation,” agencies can layer their own data on top of OpenAI’s large language models. Like Bing Chat Enterprise, this gives public organizations a way to index their own internal data and documents without feeding that information into the public OpenAI models.
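For teams wondering what that pattern looks like in practice, here is a minimal sketch of retrieval augmented generation in Python. It assumes the agency’s documents are already loaded into an Azure Cognitive Search index and that a chat model has been deployed in Azure OpenAI; the endpoint URLs, keys, index name, “content” field and deployment name are placeholders, not values from this article.

```python
# Minimal retrieval augmented generation (RAG) sketch:
# 1) retrieve relevant agency documents from a search index,
# 2) pass them to an Azure OpenAI chat deployment as grounding context.
# All endpoints, keys, field names and deployment names below are placeholders.

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

# Hypothetical search index of agency documents.
search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="agency-documents",
    credential=AzureKeyCredential("<search-api-key>"),
)

openai_client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<azure-openai-key>",
    api_version="2024-02-01",
)

def answer_question(question: str) -> str:
    # Step 1: retrieve the top-matching passages from the agency's own index.
    results = search_client.search(search_text=question, top=3)
    context = "\n\n".join(doc["content"] for doc in results)  # "content" field is assumed

    # Step 2: ask the model to answer using only the retrieved agency context.
    response = openai_client.chat.completions.create(
        model="<chat-deployment-name>",  # name of your Azure OpenAI deployment
        messages=[
            {"role": "system",
             "content": "Answer using only the provided agency documents.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_question("Where can residents pay a parking ticket?"))
```

Because the retrieval step runs against the agency’s own index, the documents stay within the organization’s Azure environment rather than becoming training data for public models, consistent with the data protections Shekar describes above.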
Moving Ahead with Responsible AI
Embracing generative AI can feel daunting. Agencies may feel frozen because their data estate is immature and not yet in shape to support an AI model.
Bauer and Shekar advise organizations to use generative AI initially only on discrete data sets they have high confidence in — such as manuals, statutes, regulations and agency websites — while they undertake a more comprehensive data journey.
And by using proven solutions they’re already familiar with, including Microsoft 365, agencies can trust that the generative AI solutions they adopt and build will incorporate the highest standards of ethical, responsible and secure use.