
Educause '23: Teaching About AI Must Include Limits, Biases

As teachers integrate generative artificial intelligence into lesson plans across subjects, doing so responsibly will mean teaching about the limitations and biases of such tools and discouraging over-reliance on them.

While generative AI tools like ChatGPT are becoming increasingly useful as research assistants in both K-12 and higher education, they come with the risk of programmed bias and the potential to reinforce our own prejudices.

That was the message of a Tuesday panel on the pros and cons of generative AI at the annual Educause conference in Chicago, where AI researcher Sasha Luccioni said many of the AI models in use today have a tendency to reduce the world around us to stereotypes. She said the issue is compounded by the difficulty of detecting whether text is AI-generated at all, noting that ChatGPT developer OpenAI’s own tool for detecting AI-generated content is accurate about 26 percent of the time.

Luccioni said these limitations make it important to teach students where AI falls short, to emphasize the importance of fact-checking, and to encourage them to use AI “as a tool, and not as a replacement.” She said being mindful of AI bias, as well as developing filter functions on generative AI tools that could help mitigate those biases, is key to making AI adoption across industries feasible and more ethical for a variety of use cases.
AI researcher Sasha Luccioni speaks to attendees about AI bias Tuesday at the Educause conference. (Photo by Brandon Paykamian)
“A calculator will never fail to do multiplication, whereas ChatGPT will fail to generate some [accurate and representative] content,” she said. “We still can’t get to the level of a human being with AI.”

As tech developers work out issues like AI hallucinations and biases, she said, educators across subjects should look to integrate discussions about AI and AI ethics into their lessons. Many schools in Canada, for example, have worked to teach students more about AI and how it works, she said, adding that dispelling myths about the technology’s reliability as a source of factual information could discourage over-reliance on generative AI tools.

“Part of the way we created the curriculum was [centered] around learning about AI and about how it works and doesn’t work by showing concrete examples and showing students that maybe it’s cool to use ChatGPT for this particular thing, but it actually fails at really basic things,” she said.

Regarding tactics for teaching students about the dangers of AI bias and why plagiarizing from chatbots is generally a bad idea, she said educators can show students answers generated by AI programs and encourage them to critique those answers using their own critical thinking. She said practices like these reinforce the need to build critical-thinking skills, which many educators fear could be eroded by over-reliance on AI tools.

“It helps students think about the output of AI as a tool … and [it] really deconstructs the fallacies they have about AI being able to ‘do anything,’” she said. “There is a way to have AI be a part of our existing systems without dynamiting the whole thing.”
Brandon Paykamian is a former staff writer for the Center for Digital Education.