The nonprofit Internet Safety Labs (ISL) released Part 2 of its K-12 EdTech Safety Benchmark: National Findings report on June 27. The organization conducts independent product safety testing in K-12 schools and reports its findings and evaluations every three to five years. Part 1 of the report, released in January, found that the vast majority of apps used in schools — 96 percent — share student information with third parties. Part 3, to be released at a later date, will specify what types of personally identifiable information (PII) from students and parents are being shared, said ISL Executive Director Lisa LeVasseur.
For the full report, ISL in 2022 evaluated K-12 technology across all 50 states and the District of Columbia, drawing on a sample of 663 schools serving 455,882 students and covering 1,357 different technologies/apps. In doing so, it collected more than 88,000 data points on the apps and 29,000 on schools and their technology habits, according to the report. The report assigns composite scores to surveyed districts as well as to states, indicating the level of security concern: the higher the score, the riskier the overall technology used. Texas had the highest score, at 89.18, while Hawaii had the lowest, at 30.43.
Other key findings:
- Internet privacy laws in 24 states, despite being intended to ban retargeting ads that can follow students across multiple applications, have not been effective at stopping such ads on technologies used in schools.
- Regardless of “safe harbor” certifications or other privacy protection promises on ed-tech products, student data from those tools still made its way to large platforms like Facebook and Twitter.
- Eighty-six percent of the schools surveyed did not have a function for obtaining parental consent for technology that shares student data with third parties.
- More than 70 percent of schools did not vet all technology used by students, and those that did have some sort of vetting process (29 percent) were using more unsafe apps than schools without one.
LeVasseur cautioned that the report isn’t intended to fault school personnel, students or parents for inadequate safeguards. She said technology consumers as a collective — whether individuals who choose free, ad-laden versions of streaming services, or software companies that have assumed for years that there were no consequences to sharing resources — inadvertently played a role in this.
School districts, meanwhile, have relied on “blanket statements” to assure student privacy without communicating with parents, LeVasseur said. She added that companies are building profiles of students based on what ad networks collect, and that the growth of mental health apps in schools is especially concerning.
“Once you start peeling the onion, it gets very disturbing,” she said.
ISL has not established a baseline for acceptable practices at schools because there aren’t any examples yet, LeVasseur said.
The good news, she added, is that federal lawmakers are interested in regulating data brokers. A good first step for schools is to create a list of every technology used. The goal should be to maintain a concise list, but to get there, districts need to limit the number of applications that are made available to students.
“With technology, more isn’t necessarily better,” LeVasseur said. “But the other side of it is, all of this technology can be made safe. That might provide some discomfort to the developers, but it can be done. You shouldn’t have to pay extra to have a reasonably safe product.”
LeVasseur said ISL is developing safety labels that will help schools better evaluate the safety of apps used by students.
Editor's Note: A previous version of this story said 86 percent of the districts surveyed did not have a function for obtaining parental consent for technology that shares student data with third parties. In fact, 86 percent of schools surveyed did not have this function.