The method for better decision-making in this sense, according to Chris Agnew, managing director of Stanford’s AI Hub for Education, is for district leaders to resist a tools-first mindset and instead set goals for what capacities they want students to have by the time they graduate. With that established, they can work backward to identify which learning activities, and then which tools, would support those capacities.
RELY ON EVIDENCE-BASED PRACTICES
Agnew said that when districts allow a tool to define instructional strategy, decisions become reactive and fragmented, and success becomes measured as “did this tool work?” rather than “did learning improve in a meaningful way?” This manner of thinking can fail to connect a tool’s worth to long-term learning goals, he added, potentially leading to short-term experimentation followed by ineffective implementation of the tool.
“It can be tempting to take a tools-first approach of, here’s this shiny new tool, run a pilot, evaluate the pilot in isolation, and then consider scaling it across a district,” he said. “We encourage district leaders to think about, AI for what?”
This reverse approach, in which AI is not the starting point of decision-making, Agnew continued, requires schools to define exactly what students should be equipped with by graduation, examine which learning experiences research shows can build those capacities, and only then determine whether and how AI can contribute to those experiences.
“Pedagogical design matters for these tools, and we need to study them,” he said.
AI LITERACY FOR ALL
Critically, Agnew said, districts also need to prioritize communitywide AI literacy initiatives that extend to the entire school ecosystem of students, staff and caregivers.
“Districts aren’t gonna go wrong with investing in AI literacy for everybody,” he said. “This isn’t only about students.”
He noted that most school-aged kids — and likely adults — are already using AI tools independently, but not always in an informed way. Thus, the goal for districts must be to develop more discerning, critical users of AI across the community, people who understand the technology well enough to question, evaluate and engage with it responsibly.
Teachers in particular are central to effective AI use in classrooms, Agnew said, emphasizing that districts need to invest in giving teachers the knowledge and training to make decisions about these tools and use them effectively. Doing so, he continued, moves the strategy beyond simply giving teachers access to an AI tool, toward strengthening their professional judgment and ability to integrate the technology in a way that truly serves instruction.
Everyone in the school community, he said, needs to “[build] their capacity to be discriminating users [of AI] and build their critical opinion” when engaging with those tools.
MONITOR RESEARCH DEVELOPMENT
As he described in a recent report on the evidence base for AI use in K-12, Agnew said districts should be keenly aware that AI’s presence in schools is expanding much faster than the evidence of its efficacy in instruction and learning.
Leaders need an ongoing cadence of engagement with research, he added, pointing to the AI Hub for Education’s repository. Updated monthly, the database contains approximately 1,300 papers on AI and education that can be filtered by relevance, using tags such as “students as AI users” and “ELA outcomes.”
“Go back there once a month and find out what new findings there are,” he said.