One year after the release of ChatGPT, a Tuesday morning panel was unanimous in its view that educator attitudes toward GenAI have shifted over the past year from “block it” to “teach students and teachers to use it responsibly.” VCOE’s Director of Education Technology Dana Thompson stressed that not all students have someone at home who can walk them through using ChatGPT, for example, so they need to get that guidance at school.
“Our students who are now currently in the K-12 system, whether they’re graduating in June or they’re graduating in 2036 because they’re kindergarteners, are going into an environment where they are going to have to use AI. They are going to have to be familiar with it, or they’re going to be passed over for positions, and they’re not going to be able to use it responsibly,” she said. “The big push now is media literacy and cybersecurity. All of those, our students need to be aware of, our teachers need to be aware of. So the policy that’s coming out now is ‘how can we use it,’ not ‘how can we ban it.’”
Thompson said few districts in California have released official in-house guidance on AI, so in the meantime, districts should lean on the policies they already have, such as those covering acceptable use and academic dishonesty. She also recommended adding a third layer: classroom AI policy. Privacy laws, such as the Family Educational Rights and Privacy Act (FERPA), restrict the use of some GenAI tools with certain age groups, but she said teachers are begging for clearer guidance from administrators.
“The way you’re going to use AI in the classroom, or not, at the elementary level versus the high school level is much different,” she said. “And if you’re not using it in the classroom, and you’re at the high school or middle school level, your students are using it outside the classroom.”
Elaborating on media literacy, Thompson’s colleague Cathy Reznicek, also a director of education technology at VCOE, said it will be increasingly important for students to know how to evaluate information. Creating a legitimate-looking website is easier than it used to be, so schools should give students hands-on experience creating media themselves, so they can recognize how it is put together and spot when something is suspicious or manipulative. In this way, she said, computer science and media literacy will go hand in hand.
“It’s really important that we all stay aware of what the technology can do, because if we know what it can do, we’re more likely to be able to see past some of the stuff that’s being put out there,” she said. “If you’re not aware of deepfakes and you don’t know anything about it, then you’re not going to be able to say ‘hey, I wonder if that’s a deepfake.’ If you’re not aware of what image generators can do, then you’re not even going to question different media that you [find].”
Thompson was also optimistic about the passage of A.B. 873 in October, which will establish media literacy curricula for K-12 in California, although its requirements will not be realized for some time.
On the computer science side of GenAI, VCOE’s Director of Technology Infrastructure Stephen Meier said part of demystifying the technology is understanding it as a tool, not an agent.
“We have this idea that because we’ve abstracted the person out of the machine, that the machine, the AI, is now infallible. But we have to remember the machine was created by fallible creatures, and we can’t take the fallibility out of the machine,” he said. “The other part that goes along with that is, you have OpenAI, who is arguably driving the AI conversation … is now controlled by six people, or potentially now four people. That is something that really concerns me, as AI becomes pedagogical, wrapped up in what we’re teaching our students.”
Meier said another risk with GenAI lies in the data it was trained on. He drew an analogy to the MOVEit hack earlier this year, in which a foreign actor essentially poisoned a software company’s product and, by extension, infected that company’s clients around the world.
“They’re already doing that today with data set poisoning,” he said. “If you find one of these AI companies that have these large training models, if you get a bad actor in there who poisons that data, you’re now getting bad results.”
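At toy scale, the effect Meier described is easy to reproduce. The following sketch is purely illustrative and was not presented at the panel; it uses scikit-learn on synthetic data to flip a portion of one class’s training labels, the kind of silent corruption a bad actor might introduce, and the model trained on the poisoned set scores measurably worse on clean test data.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a training corpus; features and labels are arbitrary.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    # Train on (possibly poisoned) labels, evaluate on untouched test data.
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

print(f"trained on clean data:    {train_and_score(y_train):.2f}")

# The "bad actor": silently relabel 40 percent of one class before training.
rng = np.random.default_rng(0)
class_one = np.flatnonzero(y_train == 1)
flipped = rng.choice(class_one, size=int(0.4 * len(class_one)), replace=False)
poisoned = y_train.copy()
poisoned[flipped] = 0

print(f"trained on poisoned data: {train_and_score(poisoned):.2f}")

Real attacks on large training sets are subtler, but the principle is the same as in the MOVEit analogy: corrupt something upstream, and everything built on it downstream inherits the damage.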
Looming questions aside, the panelists were broadly optimistic that the challenges of AI will be solved. Reina Bejerano, chief technology officer at Oxnard Union High School District, said parents seem to be taking the evolution of AI in stride. She likened it to social media: parents don’t really know what Snapchat or TikTok are, but they know their kids use them, and they’re generally curious to learn more. She said her district had some success hosting parent nights with dinner and conversations about these emerging apps and tools.
Bejerano cited Khanmigo, Khan Academy’s AI tutoring tool, which can rephrase its answers when a student doesn’t understand them, as an example of one that already seems to be having a positive impact.
“It really is giving students autonomy, it’s giving them that freedom to learn in their own way, and it’s allowing them to be vulnerable,” she said. “In my opinion, I’m seeing more engagement, and higher engagement, than I’ve seen before because students have this autonomy and they’re able to feel vulnerable, and then they end up learning more.”