AI has been used in ed tech classroom applications for some time. Like smart speakers in homes, AI-powered tools in schools are proliferating rapidly, but their effectiveness in meeting student-learning outcomes remains a topic of much debate. A recent RAND Corporation report takes a long look at the research on AI in education and comes away with an “it depends” answer on effectiveness, and a “not so much” on whether educators need to worry about AI replacing them in classrooms anytime soon. In Education Week’s interview with Robert Murphy, the RAND senior policy researcher behind the report, Murphy explains how AI-based educational programs function:
"Many of the tools are what Murphy described as "rule-based" applications that operate primarily off of if-then logic statements and are programmed in advance. Perceived advantages of such technologies include the ability to let students work at their own pace, advance only when they master requisite skills and concepts, and receive continuous feedback on their performance.
"There's a fair bit of research to show that these systems can be effective in the classroom — but only when it comes to topics and skills that revolve around facts, methods, operations and "procedural skills." The RAND report also describes the current shortcomings of AI-powered instructional applications, due primarily to their focus on developing students’ lower-order skills."
But the report also suggests this limitation represents AI’s greatest current classroom potential. When these tools work in tandem with teachers, taking on some of the more mundane skill-development work while teachers focus on students’ higher-order skills, they can benefit both teachers and students.
However, an issue raised in the RAND report also represents a larger concern: the reinforcement of gender, racial, and other biases when certain groups of individuals are insufficiently represented in the data sets on which the tools are built.
The “intelligence” portion of AI is built on the quality and quantity of the data fed into the application to “teach” it how to respond to any given question or request. When a student enters a wrong answer in an adaptive digital math program, the system draws on what it has “learned” about her from her previous answers, and about other students who gave similar wrong answers, and “knows” that she needs tutoring assistance. Based on that information, it leads her to an area of the program where she can review the math concept she needs to understand in order to answer the missed question correctly.
Yet if the AI application doesn’t have enough data on this particular student to recognize that her issue isn’t mathematical at all, but rather that she’s a non-native English speaker who didn’t understand some of the language in the problem she missed, then the application will have failed her.
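To make that failure mode concrete, here is a hedged Python sketch of a remediation router that, like the scenario above, can reason only over the answer data it has; the function, module names, and skill labels are all hypothetical:

```python
# Hypothetical sketch of the failure mode described above: a remediation
# router that sees only answer history, so it cannot tell a math
# misconception apart from a language-comprehension problem.
# All names and data are illustrative, not from any real product.

from collections import Counter

def recommend_remediation(missed_skill: str, error_history: list[str]) -> str:
    """Route the student to a review module for her most frequent skill gap."""
    gaps = Counter(error_history)
    gaps[missed_skill] += 1
    most_common_gap, _ = gaps.most_common(1)[0]
    return f"review_module/{most_common_gap}"

# A student who merely misread the wording of a word problem still gets
# routed to fraction review: "language comprehension" never appears in the
# data the system can see, so it can never be diagnosed.
print(recommend_remediation("fractions", ["fractions", "decimals"]))
# -> review_module/fractions
```

No amount of if-then branching can surface a cause that the underlying data set never captures.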
It’s these kinds of “insufficient data set” issues that represent AI’s shortcomings, not just in education but across the whole range of fields where AI is currently deployed: health care, financial markets, transportation (including driverless cars), manufacturing, and more. These AI systems will certainly improve as they’re fed more data, becoming “smarter” and better able to provide correct solutions to increasingly difficult questions. But we’re not there yet.
The RAND report identifies growing public distrust around providing the kinds of personal data needed to improve the effectiveness of AI applications. Just as some people are concerned about what their smart speakers may be “hearing” in their homes, and where their AI-powered devices send that information, many parents are disturbed by the data that classroom AI tools gather on their children, and by the entities with which that information is then shared.
Despite Elon Musk’s concern about the potential for AI to unintentionally create killer robots that turn against us, there’s little chance that investment in AI will slow. And for ed tech developers in particular, there are significant opportunities to use these new AI tools for good. But the needs of individual students must not be overlooked. Educators considering the purchase of AI-powered tools need to fully understand how the student data those tools gather is used, and should work only with vendors that are transparent and forthcoming about their intentions.