According to a news release, the Software and Information Industry Association (SIIA) released its “Principles for the Future of AI in Education” at an event on Capitol Hill last week. Among the key points, the principles state that AI technologies in education should account for educational equity, inclusion and civil rights as “key elements of successful learning environments,” and should prioritize protecting student privacy and data. The guidelines also noted that AI tools should be as transparent as possible so that students and teachers can understand what they’re using, and encouraged ed-tech developers to engage with schools and institutions to help explain and “demystify” the opportunities and risks of these new technologies. The guide added that the ed-tech industry should work more closely with the broader education community to build AI literacy among students and teachers.
“With AI being used by many teachers and educational institutions, we determined it was critical to work with the education technology industry to develop a set of principles to guide the future development and deployment of these innovative technologies,” SIIA President Chris Mohr said in a public statement. “Partnering with teachers, parents and students will be critical to improving educational outcomes, protecting privacy and civil rights, and understanding of these technologies. I commend our member companies who embraced this initiative to collaborate and for their commitment to support our children and teachers.”
According to the announcement, the principles were developed by the SIIA AI in Education Steering Committee, which includes ed-tech developers such as AllHere, ClassDojo, Cengage, D2L, EdWeb.net, GoGuardian, InnovateEDU, Instructure, MIND Education, McGraw Hill and Pearson.
“AI and kids’ privacy have dominated the conversation in Congress and in the states this year,” Sara Kloek, vice president of education and children’s policy at SIIA, said in a public statement. “As the trade organization representing the leading companies in ed tech, it is our mission to advance the responsible use of AI to enhance a learner’s educational experience while at the same time protecting their privacy, promoting educational equity, upholding civil rights, and developing important skills for the future.”
In a news release from GoGuardian, Head of Privacy and Data Policy Teddy Hartman stressed that ed-tech companies have an “immense responsibility” to students and instructors to prioritize student safety and well-being when developing AI tools. The release of the guidelines comes as K-12 schools and universities are adopting AI to enhance instruction, and as policymakers at the state and federal levels discuss how to regulate AI tools moving forward.
“AI holds immense promise to help educators personalize instruction to the needs of every individual student at a scale that hasn’t, until now, been possible,” Hartman said in a public statement. “Today’s industry-wide commitments are an important step toward ensuring the responsible use of AI in K-12 classrooms.”