We are now seven decades into the computer revolution — the origins of AI itself date back to 1956 — and three decades into the rise of the commodity Internet. Our experiments in digital government are 27 years old. In that time, as I noted in my March column, government has made significant strides in turning to face the citizen. The confluence of digital government and generative AI marks the beginning of synthetic government, characterized by an inevitable seismic shift in the way public services are delivered, for good or for ill.
My colleague Dustin Haisler, chief strategy and innovation officer of Government Technology’s parent company, e.Republic, has made an ultimately optimistic but measured assessment of the impacts generative AI will have on state and local government over the next two to five years and beyond. Using the Center for Digital Government’s* Tech Radar methodology, he places workforce augmentation on the near horizon — one to two years out — with AI helping fill the void in the state and local government workforce, where, according to the U.S. Bureau of Labor Statistics, the number of job openings exceeded the number of hires by 195 percent in late 2022.
Sound familiar? The promise at the dawn of digital government in the mid-1990s was to shift large volumes of routine information and transactions from people to the web, a tacit recognition that the demand for public services had outstripped the manual and mechanical processes for delivering them. To borrow a popular phrase from the era, we needed something better, faster, cheaper. We still do.
The web that we knew through about 2020 pulled a friendly, citizen-facing pixel layer over all the old, tired processes of government, presenting stored content to those who navigated or searched government websites. With what the tech industry has dubbed the “generative web,” AI can and will use that same old content to create unique, real-time content and experiences.
And that is where the bureaucracy meets the Borg. Neither engenders trust. The former is big, slow-moving and impersonal; the latter is faster and smarter than we are. Both bring with them the vibe that wherever the nexus of control is, it is not with you.
It is not hard to imagine scenarios in which cash-strapped public agencies defer to tools such as ChatGPT to write FAQs or call center scripts without being informed by original human thinking and the humanity that comes with it. Relying solely on mass AI-produced content to increase capacity is not the answer. What has always made public service work is the deceptively simple notion of people helping people. That has been true of eras past and will be a key determinant of how our next era goes.
All this elicits polarized reactions. You have probably seen references to an open letter asking “all AI labs to immediately pause for at least six months” signed by technology ethicists (Tristan Harris), technology entrepreneurs (Steve Wozniak), a celebrity mogul (Elon Musk) and more than 1,100 other notables. The letter opens with a chilling line: “AI systems with human-competitive intelligence can pose profound risks to society and humanity.”
The contrary view is articulated by James Currier, a partner at venture capital firm NFX, who argues that people may be uncomfortable with generative AI for the next two to three years, “but eventually, they will realize that the technology lets them do more and better. They will adapt and thrive, as they have in so many technology waves of the past.”
My mentor in the public-sector IT community once observed that sometimes we focus on the shadows created by opportunities rather than the opportunities themselves. This generative AI moment demands that policymakers do both.
This story originally appeared in the June issue of Government Technology magazine.
*The Center for Digital Government is part of e.Republic, Government Technology's parent company.