Students at Kelso middle and high schools received training on the new policies in January and February, following a year of work by the district’s AI Advisory Committee.
Instructional Technology Specialist Brenda Sargent, who led the committee along with Associate Director of Teaching and Learning Lacey DeWeert, said responses have been mixed: many teachers and students are excited about the opportunity to experiment with the new technology, while others have concerns about the ethical implications of using AI.
“Our teachers are really liking how it’s saving them time with some of the basic writing skills and being able to analyze articles,” Sargent said.
In surveys sent out before the new training took place, about one-third of students and staff reported using AI, while two-thirds said they didn't, Sargent said. The district plans to run another survey sometime after spring break to see whether those numbers have changed.
Superintendent Mary Beth Tack said she hopes learning how to use AI responsibly will help students be more competitive in the workforce once they graduate.
“We really wanted it to be a tool for our students to use in an ethical and responsive and educated way,” Tack said. “It’s important for our kids to be competent and confident when they leave our system, so that they have those skills needed for the 21st century.”
NEW GUIDELINES
AI tools like ChatGPT were previously blocked on all school devices, though teachers had the ability to unblock them temporarily for specific assignments. However, the district changed that policy in October 2024, following discussions by the committee and surveys of students, staff and community members.
Now, AI programs are unblocked by default, but teachers can choose to block them in their classrooms. There are some exceptions for AI tools that do not have an educational purpose, such as character.ai, a program where users can interact with chatbots based on fictional characters, Sargent said.
She said the goal was to provide more equitable access to AI for all students. In the past, students whose school-provided Chromebook was their only computer were blocked by the school's firewall from using AI programs even at home, while students with a personal device of their own could use them freely.
“For those kids where their Chromebook is their No. 1 piece of technology, that felt really unfair,” Sargent said.
AI programs were originally blocked because teachers were concerned students would use them to cheat. However, Tack cited research by Stanford University that found that the increased availability of AI has not affected cheating rates in high schools.
Sargent said teachers can combat cheating by giving clear instructions about when AI use is or isn’t allowed, and by learning their students’ writing styles to more easily spot AI-generated work. She also suggested engaging in conversation with students who cheat using AI, rather than simply punishing them and moving on.
Kelso teachers now use a standardized scale to instruct students on what level of AI use is appropriate for each assignment. On one end, AI use is completely forbidden and is considered cheating. On the other, students are encouraged to make heavy use of AI, but are not required to do so.
Sargent said she has found AI to be helpful for tasks like proofreading, organizing ideas or summarizing large amounts of data, but that it is weaker at creative work. Students are encouraged to consider those limitations when deciding how to use it.
“The word choice and the personality doesn’t come through,” she said. “That’s what one of the students said. ‘I don’t want to use it, I don’t want to sound like a robot.’”
The district also follows the state Office of Superintendent of Public Instruction’s guidance on AI, which says that any AI output should be reviewed by a human before it’s turned in.
Much like a phone's autocorrect, generative AI creates content by predicting which words are most likely to come next in a sequence, based on patterns in its training data. Because of this, the information it outputs is not guaranteed to be true.
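To make that prediction idea concrete, here is a minimal, hypothetical Python sketch. The tiny "training corpus" and every name in it are invented for illustration, and real systems like ChatGPT use neural networks trained on vastly larger datasets, but the underlying idea is the same: tally which word most often follows each word, then chain those guesses together, with no notion of whether the result is true.

```python
# Toy sketch of next-word prediction (not the code behind any real
# product): count how often each word follows each other word in a
# tiny made-up corpus, then chain the most frequent guesses together.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# "Training": tally which words follow which.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Repeatedly append the statistically most likely next word."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break  # no data on what follows this word
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the cat sat"
```

The sketch also shows why fluent output can still be wrong: the program only knows word statistics, not facts, so it will happily produce sentences no one ever wrote.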
For example, Google’s AI Overview function has recommended that users glue cheese to pizza to keep it from sliding off, or eat rocks for their health, because it cannot distinguish genuine scientific articles from satire the way a human can, The New York Times reported.
Sargent said students are taught to look out for these mistakes, commonly known as AI hallucinations. They are also taught to be aware of biases that could be introduced by the data used to train AI models.
Elementary school students are not encouraged to use AI because most programs have age restrictions, but teachers may use certain programs in group activities to model responsible practices, Sargent said.
ETHICAL CONCERNS
While some students and teachers were excited to be allowed to use AI more regularly, others were firmly opposed to it because of concerns over its environmental impact or use of copyrighted material, Sargent said.
The data centers that AI uses to process requests require large amounts of fresh water to keep their servers cool. A 2023 study by the University of California, Riverside, reported that Google’s data centers consumed over 23 billion liters of fresh water that year, a 17 percent increase from 2022 that was largely driven by an expansion in AI use. It estimated that AI infrastructure worldwide could consume up to six times as much water as the country of Denmark by 2027.
At the same time, large parts of the U.S. are struggling to get enough water. The National Oceanic and Atmospheric Administration reported that up to 63 percent of the contiguous U.S. experienced drought conditions in 2022, with some major reservoirs dropping to their lowest-ever recorded levels.
The U.S. Drought Monitor reports that most of western Washington is either abnormally dry or experiencing a moderate drought as of March 13.
AI models also require data to train on in order to generate new content, and those datasets often include copyrighted material, which artists say hurts their livelihoods. In 2023, a group of authors including George R.R. Martin and John Grisham filed a class-action lawsuit against OpenAI for using their works to train its models.
OpenAI has argued that this practice falls under fair use, and that access to copyrighted data is needed to keep its models competitive. In a March 13 response to the White House’s request for comment on its proposed AI Action Plan, the company wrote that AI companies should not have to comply with “overly burdensome state laws” and that allowing rights holders to opt out of data mining, as the European Union does, represses AI innovation.
©2025 The Daily News, Longview, Wash. Distributed by Tribune Content Agency, LLC.