Illinois Lawmakers Considering Several Measures Related to AI

Like other state legislatures around the country, the Illinois General Assembly in the last couple of years has grappled with how to address a rapidly evolving technology that replicates human intelligence.

(Photo: Illinois Capitol. jimmywayne/Flickr CC)
(TNS) — About a year and a half ago, Crystal Lake South High School math teacher Sarah Murmann grew concerned when she heard one of her students say he met his girlfriend through an artificial intelligence-based social media site.

The revelation ignited a larger conversation with her class about what they knew about AI, which made her realize she was not adequately informed about the new technology.

“Despite all the potential that I’ve seen with AI, there’s still a gap in Illinois,” Murmann said last week during a state House committee hearing on legislation that would establish guidelines for teachers and school administrators on how to use AI. “A year and a half after I had to start making my own decisions about AI in the classroom, I still have no official guidance to turn to.”

“We look to the state for support and Illinois has no state guidance for educators on AI,” Murmann said. “That means teachers like me in schools across all of Illinois have to make our best guess every day and hope that it’s right.”

The bill Murmann testified on is one of several measures Illinois lawmakers are considering in the new legislative session in response to the emergence of artificial intelligence.

Like other state legislatures around the country, the Illinois General Assembly in the last couple of years has grappled with how to address a rapidly evolving technology that replicates human intelligence and, everyone seems to agree, has potential benefits as well as the ability to cause significant harm.

“We have to think about how to build systems that bring out the best in us by being very smart as opposed to reducing us by being very smart,” said Kristian Hammond, a computer science professor at Northwestern University who is the director of a master’s level artificial intelligence program. “There will be people who are very mindful about how they use these tools and there’ll be people who are less mindful.”

The Democratic-controlled legislature, in the early stages of a two-year term, is considering bills to address how AI affects residents in areas including education, health care, insurance and elections, picking up on work from the previous General Assembly.

Last year, Gov. JB Pritzker signed legislation that made it a civil rights violation for employers to use AI if it subjects employees to discrimination, as well as a measure prohibiting the use of AI to create child pornography, which made it a felony to be caught with artificially created images. In 2023, he signed a bill to make anyone civilly liable if they alter images of someone else in a sexually explicit manner.

As the technology has become increasingly ubiquitous, lawmakers and advocates say regulations will have to be adjusted regularly.

“Over the last few years, it had sort of exploded, and exploded in the idea that it’s spread to so many parts and so many sectors of our economy,” said state Sen. Robert Peters, a Chicago Democrat who last year co-chaired a legislative task force on AI. “This is going to evolve quickly. Sometimes it’s extremely complex. Sometimes it’s more simple than you think depending on what we’re talking about.”

State Rep. Jeff Keicher, a Republican from Sycamore, said there’s no question the technology has benefits, noting the ways it might be used to provide accurate diagnoses through medical testing. But educators, for example, will need to be careful about any AI used in the classroom.

At the same time, Keicher said he’s concerned about overregulation. “In business, I really worry that we’re going to tamp down innovation and the creative ability of what AI can do for the future if we limit it in any way that we think is in the good judgment, but turns out to act against what would be beneficial for the Illinois consumer,” he said.

In December, a task force made up of lawmakers, Pritzker administration officials and educators issued a report detailing some of the risks presented by AI. The report addressed the emergence of generative AI, a subset of the technology that can create text, code and images. ChatGPT, DALL-E and Gemini are some of the tools being utilized by industries for automated tasks once performed by humans.

The report singled out the use of “deepfakes,” a controversial AI phenomenon in which video or still images of a face, body or voice are digitally altered to appear to be another person, to influence the outcome of elections. The panel cited an audio deepfake last year of then-President Joe Biden that made it sound like he was telling New Hampshire voters in a robocall not to vote.

Legislation regulating the use of deepfakes in elections has been enacted in 20 states, the task force report said. During the two-year Illinois legislative term that ended in early January, three bills addressing the issue were introduced but none passed.

One of those bills, sponsored by state Rep. Abdelnasser Rashid, would have prohibited the distribution of deceitful campaign material if the person doing so knew the information being shared was false, and it was distributed within 90 days of an election. The bill also would have barred a person from sharing the material if it was done “to harm the reputation or electoral prospects of a candidate” and to change the voting behavior of electors by deliberately causing them to believe the misinformation.

The legislation, which had more than 30 Democratic co-sponsors, wouldn’t have banned the deceptive material outright. But it would have required a campaign to include a disclaimer in its media informing the public that it “has been manipulated by technical means and depicts speech or conduct that did not occur.”

Rashid reintroduced the bill, which was never called for a vote in the last term, for the current legislative session.

“We have an obligation to protect people from this kind of deceptive material,” said Rashid, a Democrat from Bridgeview and the other co-chair of the AI task force. “The moment a voter knows this is AI-generated, they (should) immediately have skepticism about its authenticity.”

In a letter to Rashid, TechNet, an organization that advocates for AI-centered businesses and other tech companies, said it supports cracking down on the use of AI to spread “deliberately misleading campaign content.” But the organization also said AI has the potential to help overcome “the greatest challenges of our time” such as predicting weather more accurately, protecting against cyber threats and developing new medical treatments.

TechNet also expressed concerns that Rashid’s bill could unfairly penalize internet service providers and other platforms used to disseminate the campaign material instead of the campaigns that are actually liable for creating the false information.

The American Civil Liberties Union of Illinois raised First Amendment concerns over the bill. ACLU attorney Rebecca Glenberg said there’s not enough clarity about which deepfakes would be required to be disclosed as AI-generated, and the bill could require such speech to be removed before “it’s fully adjudicated to determine that, in fact, it is an impermissible deepfake.”

“That’s problematic because someone could call ‘deepfake’ and there could be a takedown order like that when we don’t even know whether the speech is problematic or not, and this could suppress the speech at a really crucial time before an election,” she said.

A second AI-related bill backed by Rashid would bar state agencies from using any algorithm-based decision-making systems without “continuous meaningful human review” if those systems could have an impact on someone’s civil liberties or their ability to receive public assistance. The bill is meant to protect against algorithmic bias, another threat the task force report sought to address.

“Bias is inherent in GenAI systems for two primary reasons: they are trained on data sets that often reflect historical and societal biases, and the humans responsible for training and designing these systems carry their own implicit biases,” the report said. “These systems are now being used in many high-stakes decision-making areas, such as hiring processes, loan pricing and mortgage approvals—decisions that fundamentally impact individuals’ opportunities and quality of life.

“By automating these processes, GenAI systems risk perpetuating or exacerbating existing biases and discrimination, embedding systemic inequalities more deeply into decision-making frameworks.”

On the health care front, state Rep. Bob Morgan introduced a bill that would prevent insurers doing business in Illinois from denying, reducing or terminating coverage solely because of the use of an AI system.

“It is undeniable that artificial intelligence is already playing a role in our health insurance and we’re totally blind to it,” said Morgan, a Democrat from Deerfield. “This is something that really is a multiyear effort by the state to make sure that a doctor, (a) health care professional, is involved.”

The Illinois Life and Health Insurance Council supports guardrails to ensure AI will not replace human decision-making in health insurance. But the group has concerns about Morgan’s bill, arguing it could apply to all forms of insurance, not just health, and impede positive uses of AI in the insurance business.

“We believe the provisions as introduced could restrict the use of AI for other claims processing functions designed to identify gaps in care and reduce administrative costs for payers,” Insurance Council President Laura Minzer said in a statement.

Morgan has also introduced a bill that would prohibit a person or a business from advertising or offering mental health services unless those services are carried out by licensed professionals. It also limits the use of AI in the work of those professionals, barring them, for instance, from using the technology to make “independent therapeutic decisions.” The measure was meant to prevent AI chatbots from posing as mental health providers for patients in need of therapy, Morgan said.

“It’s really consistent with what we’ve already had in law, which is if somebody is going to hold themselves out as a health care professional they actually have to be a health care professional. But in this situation, AI is stepping in and not disclosing that they’re not a person,” Morgan said. “And they’re advising people on their health care, their behavioral health. And we’re going to put a stop to that.”

Kyle Hillman, director of legislative affairs for the Illinois chapter of the National Association of Social Workers, said that because of the stigma around mental health treatment, along with a shortage of mental health professionals, some people have turned to AI-based mental health services.

“I’m sure this is something that individuals that just aren’t ready to make that call might look to. But it’s just not something that’s safe,” Hillman said of the AI-based treatment option. “We would never consider this as an option for physical health. Like, ‘hey, I have a laceration on my leg. I’m going to call an AI chat doctor on how to put stitches in my leg.’ … It’s not something we would do.”

AI’s role in shaping education policies is also being examined. One piece of proposed legislation would require the state Board of Education to establish an instructional technology board, which would help provide guidance, oversight and evaluation for AI and other tech innovations as they’re integrated in school curricula and other policies. The board would consist of teachers, school board members, experts on the application of AI, principals and other school administrators.

“Teachers, administrators are so focused on teaching the students in front of them, supporting the students in front of them, so you really need some experts in the field to take a look at this issue, which is what this bill seeks to do,” said Democratic state Rep. Laura Faver Dias of Grayslake, the bill’s House sponsor. “What are the pitfalls that we need to be worried about?”

At last week’s House Education Policy Committee hearing, Murmann, the high school math teacher, stressed the need for students, teachers and other school personnel to be educated about AI while also recognizing its potential, pointing to a virtual tutoring program that helps students “engage with subjects that they struggle in where I have no knowledge or background.”

John Sonnenberg, CEO of Carmenta Education and a former director of eLearning for the state Board of Education, agreed with Murmann’s comments about AI’s value, and cautioned that teachers who take the time to learn about it shouldn’t be hemmed in by regulation.

“We are witnessing an unprecedented transformation driven by AI that has the potential to offer accelerated personalized learning for every student,” Sonnenberg testified at the hearing. “Attempting to rigidly regulate AI in education will inevitably fail. Right now, innovative educators are circumventing outdated systems (to) utilize AI tools that they know enhance their teaching and their students’ learning. Meanwhile, others are hesitant to utilize AI without guidance they can trust.”

The December legislative task force report issued a number of recommendations for policymakers on addressing generative AI, including measures to protect workers in various industries from displacement while preparing the workforce for AI innovation.

One of those recommendations called on the state to invest in training programs to help workers transition to new roles created by AI, with an emphasis on digital skills. Another called on policymakers to ensure consumers control their personal data and to protect them from the sale or sharing of their data without their permission.

“Companies deploying GenAI tools should be required to provide robust privacy guarantees, ensuring that sensitive personal information is neither shared with third parties nor sold for profit,” the report said.

Hammond, the Northwestern professor, pointed to a so-called AI incident database, which can be found online and keeps a record of things that have gone wrong with the technology — using news stories about such instances as a source. He said the goal should always be to find ways to make it safer and better.

“We should do what we did with electricity. When electricity came (about) it was incredibly dangerous. … People were dying, left and right, and we decided ‘let’s make it safe.’ No one thought, ‘oh, let’s not do electricity,'” Hammond said. “That’s what we should dedicate ourselves to. People would say that AI is the new electricity.”

© 2025 Chicago Tribune. Distributed by Tribune Content Agency, LLC.