Nonprofit Shares Pointers for Training AI for Classroom Use

Carefully curated data sets and plenty of teacher testing are required to make artificial intelligence-based ed-tech tools suitable for K-12, experts said in a webinar this week organized by Leanlab Education.

To make effective ed-tech tools with artificial intelligence, developers must pore over data sets and pay teachers to poke holes in their prototypes, experts said in a webinar this week.

The event was hosted by Leanlab Education, a nonprofit that coordinates classroom studies of emerging ed-tech tools. It featured Peter Gault, co-founder and executive director of the nonprofit Quill, and Arman Jaffer, founder and CEO of Brisk Teaching.

“When it comes to AI development, what we’ve seen as do-or-die in this work is how you leverage data sets to customize the AI, so it understands students and their needs,” Gault said in the webinar.

For Quill, which uses AI to help students improve their writing, “leveraging data sets” means seven full-time staff members combing through thousands of examples of real student writing. They rate each example and write feedback on it, then feed that labeled data into the AI so it learns to deliver similar feedback.

These staff members are former educators with deep knowledge of how to assess student writing, Gault said, and they work with teachers to understand the nuts and bolts of their feedback process as well. He said the work requires a constant dialogue with teachers, who must be paid for their time.

“If you can find a way to compensate teachers, to reward them for being co-development partners here, and really get their eyes on a lot of the development process, you end up building much smarter AI, and the AI is much more effective,” Gault said.

Teachers are needed not only to help train AI but to test the ed-tech tools that use it, Gault said. To this end, Quill has an advisory council of 400 teachers that “pokes holes” in new AI components over four cycles of testing.

“You have to build teacher flags and mechanics into your release cycles to do this testing and make it the core of your development process, but it gives you a lot of confidence that what you’re building really works,” Gault said.

For companies without a teacher advisory council to perform such field tests, Leanlab aims to bridge the gap. The nonprofit connects ed-tech developers to a network of paid teachers for research and feedback. Jaffer said Brisk, a company that mainly offers AI tools for teachers, is working with Leanlab now to conduct impact testing.

“You can do a lot of testing beforehand, and that’s amazing, and then you get to launch, but it doesn’t stop there — it’s just the starting point,” Jaffer said in the webinar. “You now have a surface area that you need to continually do research and tests on.”

Brandi Vesco is a staff writer for the Center for Digital Education. She has a bachelor’s degree in journalism from the University of Missouri and has worked as a reporter and editor for magazines and newspapers. She’s located in Northern Nevada.