
Ed-Tech Companies Seek AI Guidelines and Standards

Several ed-tech organizations have come out with their own set of artificial intelligence guidelines in recent months as groups try to tackle what's considered best practices for developing AI in education.

(TNS) — School districts and vendors agree: The absence of clear standards for the use of artificial intelligence in education is creating risks for both sides.

As it now stands, education companies seeking to bring AI products into the market must rely on a hodgepodge of guidelines put forward by an assortment of organizations — while also relying on their own judgment to navigate difficult issues around data privacy, the accuracy of information, and transparency.

Yet there is a collective push for clarity. A number of ed-tech organizations are banding together to draft their own guidelines to help providers develop responsible AI products, and districts are becoming increasingly vocal about the standards they require of vendors, in meetings and in their solicitations for products.

"Standards are just beginning to enter into the conversation," said Pete Just, a former longtime school district tech administrator, and past board chair of the Consortium for School Networking, an organization representing K-12 technology officials. Where they exist, he added, "they're very generalized."

"We're seeing the Wild West evolve into something that's a little more civilized, and that's going to be a benefit for students and staff as we move forward."

EdWeek Market Brief spoke with ed-tech company leaders, school system officials, and advocates of stronger AI requirements about where current standards fall short, the potential legal requirements companies should watch for, and the need for guidelines written in a way that keeps pace with a fast-evolving technology.

BEST PRACTICES AND MOVING TARGETS


A large number of organizations have come out with their own sets of artificial intelligence guidelines in recent months, each trying to pin down what counts as best practice for developing AI in education.

One coalition that has grown in recent years is the EdSafe AI Alliance, a group made up of education and technology companies working to shape how AI is developed and used in schools.

Since its formation, the group has issued its SAFE Benchmarks Framework, which serves as a roadmap focusing on AI safety, accountability, fairness, and efficacy. It has also put forward its AI+Education Policy Trackers, a comprehensive collection of state, federal, and international policies touching schools.

A coalition of seven ed-tech organizations (1EdTech, CAST, CoSN, Digital Promise, InnovateEDU, ISTE, and SETDA) also announced at this year's ISTE conference a list of five quality indicators for AI products, focused on ensuring that the tools are safe, evidence-based, inclusive, usable, and interoperable.

Other organizations have drafted their own versions of AI guidelines as well.

The Consortium for School Networking produced the AI Maturity Model, which helps districts determine their readiness for integrating AI technologies. The Software and Information Industry Association, a major organization representing vendors, released Principles for the Future of AI in Education, meant to guide vendors' AI implementation in a way that's purpose-driven, transparent, and equitable.

In January, 1EdTech published a rubric that serves as a supplier self-assessment. The guide helps ed-tech vendors identify what they need to pay attention to if they hope to incorporate generative AI in their tools in a responsible way. It is also designed to help districts get a better idea of the types of questions they should be asking ed-tech companies.

When the assessment was developed, a few of the focus areas were privacy, security, and the safe use of AI applications in the education market, said Beatriz Arnillas, vice president of product management for 1EdTech. But as the technology progressed, her group realized the conversation had to cover much more.

Are users in school districts being told there's AI at work in a product? Do they have the option to opt out of the use of artificial intelligence in the tool, especially when it could be used by young children? Where are they gathering the data for their model? How is the AI platform or tool controlling bias and hallucinations? Who owns the prompt data?

The organization plans to launch a more comprehensive version of the rubric soon, addressing these updated questions and adding features that will make it applicable to reviewing a wider range of AI technologies in schools. Unlike 1EdTech's previous guides, the updated rubric will be built in smaller sections, so that portions of it can be revised quickly as AI evolves, rather than requiring a rewrite of the entire document.

"This speaks to how quickly AI is developing; we're realizing there are more needs out there," Arnillas said.

1EdTech has also put together a list of groups that have published AI guidelines, including advocacy organizations, university systems, and state departments of education. The organization's list identifies the target audience for each of the documents.

"The goal is to establish an "orchestrated effort" that promotes responsible AI use, Arnillas said. The goal should be to "save teachers time [and] provide access to quality education for students that normally wouldn't have it."

FEDERAL POLICY IN PLAY


Some of the standards ed-tech companies are likely to be held to regarding AI will come not from school districts or advocacy groups, but from federal mandates.

There are several efforts vendors should be paying attention to, said Erin Mote, CEO and founder of the innovation-focused nonprofit InnovateEDU. One is the potential signing into law of the Kids Online Safety Act and the Children and Teens' Online Privacy Protection Act, known as COPPA 2.0, federal legislation that would significantly change the way students are protected online and that is likely to have implications for the data that AI collects.

Vendors should also be aware of the Federal Trade Commission's crackdown in recent years on children's privacy, which will have implications for how artificial intelligence handles sensitive data. The FTC has also put out a number of guidance documents specifically on AI and its use.

"There's guidance about not making claims that your products actually have AI, when in fact they're not meeting substantiation for claims about whether AI is working in a particular way or whether it's bias-free," said Ben Wiseman, associate director of the FTC's division of privacy and identity protection, in an interview with EdWeek Market Brief last year.

Providers should also be familiar with the recent regulation on web accessibility announced by the U.S. Department of Justice this summer, which states that technology must conform to guidelines that seek to make content available without restrictions to people with disabilities, a requirement that matters as AI developers focus on creating inclusive technologies.

The U.S. Department of Education also released nonregulatory guidelines on AI this summer, but these are still early days for more specific regulations, Mote said.

States have begun taking more initiative in distributing guidelines as well. According to SETDA's annual report, released this month, 23 states have issued guidance on AI thus far, with standards around artificial intelligence ranking as the second-highest priority for state leaders, after cybersecurity.

HOLDING VENDORS ACCOUNTABLE THROUGH RFPS


In the meantime, school districts are toughening their expectations for best practices in AI through the requests for proposals they're putting forward seeking ed-tech products.

"They're no longer asking, 'Do you document all your security processes? Are you securing data?'" Mote said. "They're saying, 'Describe it.' This is a deeper level of sophistication than I've ever seen around the enabling and asking of questions about how data is moving."

Mote said she's seen those sorts of changes in RFPs put out by the Education Technology Joint Powers Authority, representing more than 2 million students across California.

That language asks vendors to "describe their proposed solution to support participants' full access to extract their own user-generated system and usage data."

The RFP also has additional clauses that address artificial intelligence, specifically. It says that if an ed-tech provider uses AI as part of its work with a school system, it "has no rights to reproduce and/or otherwise use the [student data] provided to it in any manner for purposes of training artificial intelligence technologies, or to generate content," without getting the school district's permission first.

The RFP is one example of how districts are going to "get more specific to try to get ahead of the curve, rather than having to clean it up," Mote said. "We're going to see ed-tech solution providers being asked for more specificity and more direct answers — not just a yes-or-no checkbox answer anymore, but, 'Give us examples.'"

Jeremy Davis, vice president of the Education Technology Joint Powers Authority, agrees with Mote: Districts are headed in the direction of conducting their own increasingly detailed reviews when procuring AI.

"We should know exactly what they're doing with our data at all times," he said. "There should never be one ounce of data being used in a way that hasn't been agreed to by the district."

BACK TO BASICS


In the absence of an industry-wide set of standards, education companies looking to develop responsible AI would be wise to adhere to the foundational best practices of building solid ed tech, officials say. Those principles include having a plan for implementation, professional learning, inclusivity, and cybersecurity.

"There's no certification body right now for AI, and I don't know if that's coming or not," said Julia Fallon, executive director of the State Educational Technology Directors Association. "But it comes back to good tech. Is it accessible? Is it interoperable? Is it secure? Is it safe? Is it age-appropriate?"

Jeff Streber, vice president of software product management at education company Savvas Learning, said the end goal of all of the company's AI tools and features is efficacy, as it is for any of its products.

"You have to be able to prove that your product makes a demonstrable difference in the classroom," he said. "Even if [districts] are not as progressive in their AI policy yet ... we keep focused on the goal of improving teaching and learning."

Savvas' internal guidelines for how it approaches AI were influenced by a range of guides from other organizations. The company's AI policy focuses on transparency of implementation, a Socratic style of facilitating responses from students, and trying to answer specific questions about districts' needs beyond the umbrella concerns of guardrails, privacy, and avoidance of bias, Streber said.

"State guidelines and the ones from federal Department of Education are useful for big-picture stuff," Streber said. "But it's important to pulse-check on our own sense more specific questions that generalized documents can't answer."

As AI develops, "standards will have to keep up with that pace of change or else they'll be irrelevant," he said.

It'll also be important to have a detailed understanding of how districts work as AI standards develop, said Ian Zhu, co-founder and CEO of SchoolJoy, an AI-powered education management platform.

Generic AI frameworks around curriculum and safety won't suffice, he said. Standards for AI will have to be developed to account for the contexts of many different kinds of districts, including how they use such technologies for things like strategic planning and finances.

"We need to have more constraints on the conversation around AI right now because it's too open-ended," Zhu said. "But we need to consider both guidelines and outcomes, and the standards that we hold ourselves to, to keep our students safe and to use AI in an ethical way."

©2024 Education Week (Bethesda, Md.). Distributed by Tribune Content Agency, LLC.