
Privacy, Responsibility Discussed at Inaugural AI Summit

More than 500 applications of AI are in use across Texas agencies, a state representative said, but individual rights remain paramount. Efficiency must not come at the expense of privacy, panelists said.

CIOs and legislators from across the country joined thought leaders in considering how to leverage AI while minimizing its risks, at the Center for Public Sector AI’s* first AI Summit, Tuesday in Austin.

Texas State Rep. Giovanni Capriglione (R-98) and Wisconsin State Rep. Shannon Zimmerman joined the discussion to address the legislative challenges AI has introduced in both the private and public sectors.

Citing data collected by the Innovation and Technology Caucus of the Texas Legislature (IT Caucus), Capriglione revealed there are more than 500 different applications of the technology in use across Texas state agencies, with cybersecurity use cases accounting for the majority. While Capriglione voiced his optimism about the integration of AI in state agencies, he also emphasized the need for a comprehensive regulatory framework.

Zimmerman noted that the emergence of AI has presented an opportunity to standardize systems across his state's government in the name of efficiency. However, both representatives clarified that efficiency must not come at the expense of privacy, particularly when it comes to public safety issues.

“I think this is a topic that is going to be massive for us as we go forward in terms of what’s right, what’s constitutionally appropriate and so forth,” said Zimmerman. “We do want public safety, no doubt about it, but we have to find a balance between people’s rights and all.”

Capriglione took issue with, for example, using AI to identify criminal suspects.

“It, to me, is a violation of our constitutional rights,” said Capriglione. “Everything’s bigger in Texas, except Big Brother.”

Talk of system automation turned the discussion to data set biases and their impact on trained models. With AI advancing to the point of automated decision-making, who — or what — will be held responsible for those decisions?

According to the representatives, accountability must lie with the entity, not the technology.

“We’re not going to allow individuals to hide behind the fact that, well, it’s in a black box, and I can’t explain to you how it happened; we’re not going to buy it,” said Capriglione. “You wouldn’t do that with any other tool or product that you have.”

“I refuse to allow somebody to hide behind a computer program,” said Zimmerman. “We can’t accept that … When decisions are being made about health care, other major decisions that affect human beings, if it’s denied for any reason, you have to understand why.”

*Note: Government Technology and the Center for Public Sector AI are part of e.Republic, Industry Insider — Texas' parent company.

This story first appeared in Industry Insider — Texas, part of e.Republic, Government Technology's parent company.
Chandler Treon is an Austin-based staff writer. He has a bachelor’s degree in English, a master’s degree in literature and a master’s degree in technical communication, all from Texas State University.