
Is AI Ready to Replace Human Policy Advisers?

A 50-state data journalism experiment suggests the answer is not yet. The AI agent was insightful on a number of fronts and, while it never descended into hallucination, it drifted from its instructions as the experiment went on.

The podcast cover image for this episode of The Future in Context (TFIC) features an AI-generated image of a humanoid robot acting as a government policy adviser. (DALL-E 3)
Listen to this episode on the player below or subscribe for free on YouTube or the podcast app of your choice — Apple Podcasts, Spotify, Audacy and Audible.



Government Technology Data Reporter Nikki Davidson tasked Google’s AI tool Gemini (formerly Bard) with exploring AI’s perspective on government technology use. Her approach treated the AI as a collaborative partner, asking it to generate insights on its own potential applications in government. Despite Gemini’s occasional inaccuracies and deviations from instructions, the project yielded diverse and unexpected use cases in areas such as mental health, opioid use and climate change.

Gemini’s recommendations extended to climate challenges and infrastructure needs, reflecting a surprisingly deep understanding of regional concerns. Gemini itself suggested it would be five to 10 years before AI is fully integrated into government operations, while insisting that its adoption by government is inevitable.

Believing turnabout is fair play, Davidson asked Gemini for feedback on her work. It gave the article a strong grade on a scale of 1 to 10 but offered suggestions on how to make it better. Human reviewers, including Benjamin Palacio, a senior IT analyst with Placer County, Calif., highlighted both the promise and the challenges of AI applications, particularly in sensitive areas like mental health support. Davidson views AI as a valuable tool but underscores the necessity of human oversight and awareness of its limitations.


SHOW NOTES


Takeaways:
  • AI can be used as a tool to explore the best uses of technology in government.
  • Surprising use cases of AI in government include mental health and opioid abuse awareness.
  • Ethical concerns arise when AI suggests analyzing sensitive data like social media and medical records.
  • Human intervention is necessary to ensure the accuracy of AI.

Chapters:

00:00 Introduction: AI as a Policy Advisor
01:14 Exploring the 50-State Experiment
05:52 The Limitations and Tendencies of AI Tools
08:18 Addressing Societal Issues with AI
10:40 AI Solutions for Infrastructure and Climate Challenges
12:31 Realistic Timelines for AI Implementation
14:26 The Challenges of Working with AI
16:23 Human Reviewers' Perspectives
17:00 Conclusion and Future Possibilities

Related Links to items referenced in the episode:


Our editors used ChatGPT 4.0 to summarize the episode in bullet form to help create the show notes. The main image for this story was created using DALL-E 3.
Paul W. Taylor is Programming and Media Manager at TVW, Washington's Public Affairs Network. He is the former Chief Content Officer and Executive Editor at e.Republic Editorial and of its flagship titles, Governing and Government Technology. He can be reached on X at @pwtaylor or on Bluesky at @pwtaylor.bsky.social.
Nikki Davidson is a data reporter for Government Technology. She’s covered government and technology news as a video, newspaper, magazine and digital journalist for media outlets across the country. She’s based in Monterey, Calif.
Ashley Silver is a staff writer for Government Technology. She holds an undergraduate degree in journalism from the University of Montevallo and a graduate degree in public relations from Kent State University. Silver is also a published author with a wide range of experience in editing, communications and public relations.