Artificial Intelligence and the Future of Teaching and Learning
The Office of Educational Technology’s (OET’s) own policy report from May 2023 includes seven recommendations, the first being to keep humans in the loop.
“We completely reject the notion that AI should replace teachers,” Isabella Zachariah, a fellow at the OET, told attendees at the conference. “Teachers and educators and administrators should be in the decision-making seat at every critical decision-making point when AI is being used in education, so that they can notice the patterns and the decisions that the AI model is making and how it will impact students and educators.”
Other recommendations include aligning AI governance models to an overall vision for education, designing with modern learning principles and strengthening trust in educational institutions.
Overall, the OET suggests that AI in education should be more like an e-bike than a robot vacuum: A robot vacuum operates with little oversight, while an e-bike is always controlled by a human being, and that model should apply to technology use in education, according to Zachariah and Kevin Johnstun, an education program specialist at OET.
Blueprint for an AI Bill of Rights
At the federal level, the White House released its Blueprint for an AI Bill of Rights. It includes basic tenets like “AI should be safe and effective,” “AI should not discriminate unjustly,” and “You should know when AI is being used and how it affects you,” Johnstun and Zachariah said.
The goal is to bring these ideas into official government policy on AI, but in the meantime, Johnstun and Zachariah say it can be a helpful framework for considering the rights of students and other users in education contexts.
National Educational Technology Plan
Another OET report focuses on bridging digital divides, spanning access and connectivity, training on how to implement technology, and understanding of how technology is used overall.
Ensuring access to Wi-Fi and devices is key, but it can’t stop there, Johnstun and Zachariah said. Some students use technology more passively in the classroom while others are encouraged to bring more creativity to their interactions with tech tools. Allowing every student to use technology in dynamic ways requires professional development opportunities for teachers.
“We're talking about AI and focusing on how, when a new technology comes out, we need to give teachers adequate time, space and support to be able to learn that technology for themselves and integrate it into their instruction, so that, firstly, those educators don't get left behind to face a broader digital divide, but also so that their students can also not face that broader digital divide,” Zachariah said.
Designing for Education with Artificial Intelligence
For tech developers, OET created a guide for AI-based educational tools. It expands on the e-bike-versus-robot-vacuum idea: AI tools should not just be used like e-bikes, with humans in the loop, but should also be designed like e-bikes, with efforts to mitigate risk, collaborate on rules of the road and implement safety precautions.
To design AI tools effectively, the report encourages collaboration with educators.
“It's very important that after, kind of, somebody has a new idea, a way they want to harness this new and powerful technology, you get the right group of people together from the start,” Johnstun said.
NIST AI Risk Management Framework
The National Institute of Standards and Technology (NIST) framework encourages leaders to consider AI risk through three lenses: harm to people, harm to an organization and harm to an ecosystem.
For example, AI can harm a person’s civil rights, safety or economic opportunity; perpetuate discrimination; and undermine democratic participation, the report says.
The NIST framework includes hundreds of ideas for a variety of scenarios, Johnstun said.
“This is high level: Here are the buckets of risk that people should be concerned about, and then it has like 300 practices that are associated with being able to manage those risks,” he said of the NIST framework.
For example, it includes a risk profile for generative AI that considers a breadth of possibilities, including the potential for military secrets to be stored in a tech tool.
“There’s a lot that’s in there that they’re really digging into to help people think about the whole gamut of risk associated with generative AI,” Johnstun said.