Meet Indiana’s New GenAI Chatbot: A Cautious Introduction

Indiana is one of the first states to deploy a beta-version generative AI chatbot on its official website. Mindful of unintended consequences, the state is focusing on transparency, user feedback and iteration.

Indiana has brought the power of generative AI to the public on its IN.gov state website through a chatbot. Serving as a central hub of public information for the state, it can answer almost any question a user might have in seconds.

But there’s a catch — before users start interacting with it, they’ll have to agree to a disclaimer: “Use the website at your own risk.”

To access the new tool, visitors are first greeted with a lengthy six-point disclaimer they must accept. It explains that the chatbot is in beta, that it may make mistakes, that information may be incorrect and should be verified with professional sources, that users are solely responsible for what they do with the information, and that the state is not liable for consequential or any other damages that might arise from using the tool.
The new GenAI Indiana chatbot comes with a disclaimer agreement. (Screenshot: Indiana Office of Technology)
The chatbot, currently in version three of its beta phase, is a product of the Indiana Office of Technology’s (IOT) collaboration with vendor Tyler Technologies.

The contract resulting from the RFP was awarded in December 2023, internal testing began in May, and the tool was quietly launched on the state website in June. Rather than being featured prominently on the main page at this time, a link to the tool appears under the main search bar at IN.gov, encouraging users to “try the new IN.gov artificial intelligence chatbot.”

As of Sept. 16, it has had 5,295 interactions, with an average of eight questions per user, and early, limited data shows a positive user feedback rate of 83 percent.

It's an upgrade from the state’s previous chatbot, which was simply trained to deliver responses in a Q&A format. The state’s goal was to refocus the website around how residents want services delivered, treating the state as a single entity rather than a collection of departments. The GenAI chatbot, trained on public-facing content from each agency, aims to be a one-stop resource for questions about state services.

However, innovation often comes with risks. That is why the state has emphasized transparency and planned for potential failure, especially after watching other public agency chatbots produce disastrous results, like when New York City’s chatbot for business owners advised companies to violate the law.

“It’s still a journey for us right now,” said IOT Chief Technology Officer Dave Fox. “We’re not going to sit there and go live publicly and say, ‘Everything’s going to be perfect,’ because I’ve seen some states that went that direction and it didn’t turn out very well for them. I thought this was the best approach to get it out there, and we’re able to actually get some testing out and let the public know that there may be some mistakes along the way.”

PRESSING FOR FAILURE

Before launching the beta version to the public, IOT proactively sought out testers with the explicit intention of exposing flaws and weaknesses in the GenAI chatbot. A feedback button with a survey sits under the chat window, asking for the user's name and contact information and prompting them to describe what kinds of questions they asked and whether they would use the tool again.

“We want them to find things, we want them to report it, and we had great success finding that out,” said Michael White, IOT’s deputy chief technology officer. “There’s people that wanted to find it falter, and they said they couldn’t. And there are some quirks that we were able to train and modify.”

Additionally, IOT pressed the people with the most knowledge about each agency to test the tool. The chatbot provides sources for each answer it gives, which allows users to easily understand where the information came from.

“We challenged agencies to ask very difficult questions that only the subject matter experts would know,” said Graig Lubsen, communications director for IOT.

That exercise exposed flaws and inconsistencies in the wording used on some state department websites, especially where multiple agencies were involved.

IOT has limited the scope of the chatbot by training it only on information available through the state’s content management system, so it can only access information that has been posted publicly.

“That doesn’t mean somebody didn’t upload a file by mistake that shouldn’t be there, but that’s kind of on the agency to be able to know and understand their content, and we are constantly training on that,” said Lubsen. “What we’re telling agencies is, there’s an onus on you to make sure your content is up to date and right, because it is going to get a little more exposure now with this chatbot than maybe the traditional search, and definitely the traditional chatbot.”
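The article doesn’t describe the underlying implementation, but the pattern it outlines — answering only from publicly published CMS content and citing the pages used — resembles retrieval over a filtered index. The sketch below is a minimal, hypothetical illustration of that idea only; the CmsPage structure, field names and answer helper are assumptions for the example, not Tyler Technologies’ or IOT’s actual system.

```python
# Hypothetical sketch: answer questions only from publicly published CMS
# content, as the article describes. All names and data are illustrative.
from dataclasses import dataclass

@dataclass
class CmsPage:
    agency: str
    url: str
    text: str
    is_published: bool  # only publicly posted pages should be searchable

def build_index(pages):
    """Keep only content the public could already see on the site."""
    return [p for p in pages if p.is_published]

def retrieve(index, question, limit=3):
    """Naive keyword match standing in for a real retriever."""
    terms = set(question.lower().split())
    scored = [(sum(t in p.text.lower() for t in terms), p) for p in index]
    return [p for score, p in sorted(scored, key=lambda x: -x[0]) if score > 0][:limit]

def answer(index, question):
    sources = retrieve(index, question)
    if not sources:
        return "No published state content found for that question.", []
    # A real system would pass `sources` to a language model here;
    # returning the matched text keeps the sketch self-contained.
    return sources[0].text, [p.url for p in sources]

if __name__ == "__main__":
    pages = [
        CmsPage("Health", "https://example.gov/food-trucks",
                "Mobile food vendors need a retail food establishment permit.", True),
        CmsPage("Health", "https://example.internal/draft",
                "Draft guidance, not yet public.", False),
    ]
    text, cited = answer(build_index(pages), "How do I start a food truck?")
    print(text, cited)
```

In this kind of design, the unpublished draft never enters the index, and every answer carries the URLs it drew from — consistent with the chatbot’s practice of providing sources for each answer and with IOT’s point that agencies remain responsible for what they publish.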

WHAT ARE PEOPLE ASKING THE CHATBOT?

The beta phase has also been an opportunity for the department to better understand what people want to know about the state, and what kind of services they’re looking for.

Government Technology requested data on the top 10 questions the chatbot has been asked so far. According to the IOT team, the questions have revealed some trends they weren’t aware of.

“We’ve got a lot of people trying to quit their jobs and go start food trucks out there, I guess,” said Fox. “Definitely the interactions and the questions asked actually surprised me of where they’re focused on.”

Additionally, it’s helped IOT understand which agencies and topics it needs to coordinate with most closely to make sure the information is correct.

“There’s been some change along the way, we’re able to be a little more agile with this technology, but hopefully we get to a point where we all feel very comfortable with what's going on along with the content that we have with agencies that we can launch it and put it back on our front page. I don’t think we have a scheduled timeframe right now, but we’ll get there,” said Fox.

Another decision that hasn’t yet been made is whether or not to name the tool. At this point, the chatbot has no name or avatar.

“People have humanized and added characteristics to technology that really aren’t there, so if you add a name, I think that kind of adds and feeds into it. This is a tool, it doesn’t have a personality,” said White.

“Honestly, I’m not saying I like or dislike names. We haven’t even come to this, we are just focused on the technology and getting it right,” added Fox. “We’ll probably figure out if that’s even something we want to do when we kind of launch it as a real, live version.”

Nikki Davidson is a data reporter for Government Technology. She’s covered government and technology news as a video, newspaper, magazine and digital journalist for media outlets across the country. She’s based in Monterey, Calif.