The controversial practice, which uses algorithms to crawl and index public profiles on popular sites like Twitter and Facebook, has grown in popularity over the last several years, as administrators look to new and emerging technologies to heighten school safety — and potentially head off the next mass shooting.
Now the Restoring, Enhancing, Securing, and Promoting Our Nation's Safety Efforts Act or RESPONSE Act, introduced by Texas Sen. John Cornyn, advocates for numerous policies aimed at increasing school security, including a "Children's Internet Protection" amendment that encourages districts to invest in programs that detect "online activities of minors who are at risk of committing self-harm or extreme violence against others." Under the bill, almost all federally funded schools would be required to install software of this kind.
The bill comes at a time when schools are already investing more heavily in this technology. Earlier this year, a review by the Brennan Center for Justice of self-reported procurement orders from schools across the country showed that the number of school districts purchasing such software rose from just 6 in 2013 to 63 in 2018.
Companies like Geo Listening, Varsity Monitor and Snaptrends have all made a name for themselves by offering school administrators the ability to monitor and potentially police signs of violent thoughts and behavior, as well as offensive behavior and language. The net cast can be wide: one program recently deployed in Juneau, Alaska, scans for all "mentions of violence, self-harm, drug use, sexual content and cyberbullying" in students' school email accounts.
The relative affordability of these programs — with a median annual expense of $8,417 — makes them appealing additions to district security systems.
These businesses take slightly different approaches, depending on their specific security focus. Geo Listening, for instance, is marketed as a straightforward monitoring system that sends administrators daily reports with screenshots of flagged content from the monitored sites. These reports include the poster's name, the time and location of the posting, and a short explanation of why the post was flagged. Posts are typically "flagged for negative content ... counter to the school code of conduct, evidencing violent threats to other students or the school, cyberbullying, or self-harm," according to a recent study in which the company was profiled.
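At its simplest, this kind of monitoring amounts to matching posts against a watchlist of terms and packaging the hits into a report. The sketch below is a purely hypothetical illustration of that flagging step — the term list, function names, and report fields are invented for this example and do not reflect any vendor's actual code — but it shows both how such a report entry is built and how easily a keyword match can misfire on innocuous language.

```python
# Hypothetical watchlist -- real products use far larger, proprietary term lists.
FLAGGED_TERMS = {"fight", "hurt myself", "weapon"}

def flag_post(author, text, posted_at):
    """Return a report entry if the post matches any watchlist term, else None.

    This is a toy substring match; it has no notion of context, so slang or
    figurative speech ("we'll fight for the championship") still gets flagged.
    """
    lowered = text.lower()
    hits = [term for term in FLAGGED_TERMS if term in lowered]
    if not hits:
        return None
    return {
        "author": author,
        "time": posted_at,
        "excerpt": text[:80],
        "reason": "matched terms: " + ", ".join(sorted(hits)),
    }

# A genuinely harmless post still trips the filter on the word "fight":
entry = flag_post("student123", "We'll fight for the championship!", "2019-11-01 14:02")
```

That false positive on "fight" is exactly the kind of misinterpretation critics point to when machines sift through the slang-saturated conversations of teenagers.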
Varsity Monitor, meanwhile, is geared specifically toward surveilling student athletes at major universities, to ensure the students comply with ethics codes. Schools pay upward of $7,000 a year to track "obscenities, offensive commentary or words like 'free,' which could indicate that a player has accepted a gift in violation of N.C.A.A. rules," according to the New York Times.
However, while there is a growing demand for these services, researchers are largely split on whether these programs actually prevent violence.
"Aside from anecdotes promoted by the companies that sell this software, there is no proof that these surveillance tools work," the Brennan Center report concludes, also critiquing the software's propensity for error, and the kinds of misinterpretation that can take place when machines sift through the slang-saturated conversations of teenagers.
When paired with other forms of surveillance, the software has drawn criticism for its broad reach and lack of public oversight.
The Aspen Institute, for instance, recently released a brief report on data collection at school systems in Florida, where the state's Department of Education recently launched the controversial Florida Schools Safety Portal (FSSP). The portal mandates that state schools collect large amounts of student data — a process that is augmented by the deployment of social media monitoring programs.
The report argues that "preventing school shootings through data is fraught with ethical and technical risks, including a lack of data quality and the potential for biases across multiple levels of predictive algorithms." The report doesn't outright indict the practice itself, arguing instead that policies should be developed to increase process transparency and accountability, while deploying experts to ensure data quality.
The RESPONSE Act has seen significant support among a constellation of mental health and law enforcement organizations. In October, the bill was referred to the Senate Committee on the Judiciary where it is awaiting further review.