These are also the kinds of tasks that lend themselves to automation. And increasingly steering that automation is artificial intelligence, the latest iteration of which is agentic AI. But what is agentic AI?
“In general … agentic AI can be described as the kind of AI that’s doing autonomous decision-making and goal-oriented behaviors,” said Kate O’Neill, author of the book What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast and founder and CEO of KO Insights, a strategic advisory firm that works with the technology.
“It’s making at least one decision, or taking at least one action, on your behalf,” she added.
HOW IS AGENTIC AI DIFFERENT FROM GENERATIVE AI?
Much of what government officials — and the public — know of AI has to do with generative AI. The best-known system is probably ChatGPT, in which a user asks for information and the tool generates content in response. Agentic AI systems — the term derives from "agent" — are focused on making decisions without the need for human prompts, instead relying on a complex choreography of machine learning, automation technologies, language processing and more to perform a task and achieve a desired outcome.
Jumbi Edulbehram, director of smart spaces and local government at NVIDIA, said agentic AI “is not that different from the core AI applications people have been using,” adding that different functions of the city will have their own AI “agents.”
There will be an agent for the finance and treasury department, another in health and human services, and others that will all require "different kinds of training to do their jobs, to be those agents,” said Edulbehram, responding to questions from Government Technology during a March 26 webinar on smart city systems, an event hosted by TechConnect.
Generative AI vs. Agentic AI
- Generative AI creates content based on prompts or input data.
- Agentic AI uses a complex choreography of machine learning, automation technologies, language processing and more to perform tasks without human prompts.
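The distinction above can be sketched as a loop: a generative model returns one piece of content per prompt, while an agent repeatedly observes, decides and acts toward a goal without further human input. A minimal illustration, with all function names hypothetical:

```python
def generative(prompt):
    # Generative AI: one prompt in, one piece of content out.
    return f"content for: {prompt}"

def agentic(goal, observe, act, done, max_steps=10):
    # Agentic AI: no per-step human prompt; the agent loops on its
    # own, choosing actions until the goal is met (or it gives up).
    for _ in range(max_steps):
        state = observe()
        if done(state, goal):
            return state
        act(state)
    return observe()
```

The point of the sketch is only the shape of the interaction: the human sets a goal once, and the agent makes the intervening decisions itself.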
For now, government tech officials are cautiously easing their way into agentic AI systems, starting small, but with the understanding that these could bring about operational efficiencies. Dan Fruchey, director of the Department of Information Systems in Sonoma County, Calif., offered an example: as a holiday approaches, agentic AI tools could generate closure notices and populate them across agency websites.
“Once the holiday passes, the system removes all of those notifications, saving website developers and account managers tremendous manual effort,” Fruchey wrote in an email.
Or, when a worker calls in sick, that action could trigger a series of automated processes in which an AI agent begins rearranging appointments to ensure clients still receive services as scheduled.
“If the system can’t rearrange appointments successfully, it notifies clients of what has occurred and offers alternative appointments for a future date,” said Fruchey, who also noted Sonoma County is not currently using agentic AI but offered examples of how these tools could help to smooth government operations and efficiencies.
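The holiday-notice workflow Fruchey describes can be sketched as a single agent pass that posts closure notices ahead of a holiday and removes them once it passes. This is a hypothetical illustration of the idea, not a real county system; the calendar, lead time and notice text are all invented for the example:

```python
from datetime import date, timedelta

# Hypothetical holiday calendar and posting window.
HOLIDAYS = {date(2025, 7, 4): "Independence Day"}
LEAD_DAYS = 3

def run_agent(today, notices):
    """One pass of the agent: add or remove closure notices as needed."""
    for holiday, name in HOLIDAYS.items():
        if today <= holiday <= today + timedelta(days=LEAD_DAYS):
            # Holiday is coming up: post the closure notice.
            notices[holiday] = f"County offices closed for {name}"
        elif holiday < today and holiday in notices:
            # Holiday has passed: remove the notice automatically,
            # the cleanup step Fruchey says saves manual effort.
            del notices[holiday]
    return notices
```

Run daily, the same pass handles both halves of the job — posting before the holiday and cleaning up after — with no human prompt in between.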
HOW WILL AGENTIC AI IMPACT THE GOVERNMENT WORKFORCE?
Many of the tasks connected to agentic AI are largely done today by people, underscoring how the technology stands to recalibrate workforces. Advocates say the trajectory of this tech will not hollow out the ranks of government workers, but will instead shift jobs toward skills requiring more nuance, where critical thinking and emotional intelligence become central.
“Soft skills will be more important,” said Steve Mills, chief AI ethics officer at Boston Consulting Group.
“As AI automates many of the more routine or analytical tasks, the human edge will come from the ability to lead, interpret nuance, and think creatively,” he said in an email. “Agencies should start investing in upskilling programs now — not just in technical AI literacy, but in areas like emotional intelligence, adaptability, and collaboration.”
Preparing the workforce for this shift is about more than training, said Mills, adding, “it’s also about building a culture that embraces change and continuous learning.”
How much automation comes into play is often a question of leadership, and who’s in charge.
Government at the regional, state and local levels will likely look to AI to take a more central role in handling the public’s interactions and requests, but that won't necessarily mean a reduction in staff.
“As agentic systems take on more repetitive and transactional work, workers will need greater creativity, empathy and emotional intelligence to focus on citizen service delivery,” Mills said, underscoring the shifts in workplace culture as AI becomes more ubiquitous.
AI in the workplace will likely mean a shift from “doing work to managing work,” said Dara Wheeler, division chief of research, innovation and system information at the California Department of Transportation and the department's senior lead for AI.
“There’s no doubt that AI is going to reshape the workforce over the next five years by automating routine tasks, by augmenting human capabilities and creating new job opportunities,” Wheeler said during a webinar panel in March for the Transportation Research Board, exploring the strategic management of AI in transportation.
“The bottom line is, AI won’t replace human workers entirely. But it will transform our jobs, requiring employees to adapt and learn new skills,” said Wheeler.
WHAT'S HAPPENING NOW WITH AGENTIC AI?
A shift toward more data-driven decision-making is certainly already part of the cultural change seen in workplaces, in both the public and private sectors. And automation will continue this shift, said Jonathan Behnke, chief information officer for the city of San Diego.
“Professionally, the next generation of AI technology can empower employees to develop new skills and find new ways to innovate and add value to the services that they provide,” he said. “Guardrails and clear policies will be necessary to maintain transparency, trust, and appropriate governance and controls.”
It’s time for government IT leaders to begin thinking about policy directions to ensure these complex AI systems are trained to generate positive outcomes for residents and government agencies. That’s why “caution” seems to be the through-line in the thinking among public-sector IT leaders.
“Government leaders are likely approaching agentic AI with cautious optimism,” said Behnke.
“Agentic AI is evolving quickly and may offer the potential to increase the level of automation for functions and processes, but data quality, transparency and appropriate human oversight will play an important role in reducing risk and fully realizing the benefits of the technology,” he added in an email. “I envision the first use cases involving less critical decisions that still add value and speed processes to build organizational confidence and trust in agentic AI.”
Fruchey, in Sonoma County, said "many government agencies are acting cautiously to ensure that humans stay in control of decisions."
“Most people have already experienced the frustration of dealing with fully automated ‘client service’ systems where they can’t reach a human to help them with their unique needs,” he continued. “These systems are primarily put in place to improve the profits of a company by hiring fewer people. That doesn’t apply to government. We are people driven, not profit driven, and so we need to find ways of adopting agentic AI to improve service to our communities.”
What’s important, said O’Neill, is not that cities or states have a specific agentic AI strategy, but that they have a strategy at all.
“You have a strategy, of which agentic AI may or may not be a part, in how you deploy, in how you scale, how you solve the problems that you’re looking to solve that are part of the strategy that you are rolling out,” she said. “It’s really good to reframe, and remind ourselves, the tools are very exciting. They can do incredible things. But the things that they are doing that are incredible should be part of what we’re trying to do.”
Regardless of the type of AI an organization is putting to use — generative or agentic — human oversight needs to be part of the system, said Mills.
However, “opacity of reasoning and autonomous feedback loops make this challenging,” he added. “It’s critical to assess the level of risk and autonomy in each use case to tailor oversight accordingly.”
How much agentic AI becomes a part of the technology structure of an organization is still an open question, and one each organization will ask and answer for itself. That level of adoption could depend on the particular anxieties these tools raise among workers and residents, experts say.
“We’ve been rolling automated tools into workplaces for some years now. This is just a new flavor of automated tool,” said O’Neill.
“And it’s the newest one to become very scary,” she added. “But I don’t think that it is inherently any scarier than any of the preceding tools. As long as we’re starting from the right framing, and we’re answering those questions in the right way.”