Popular software tools that scan students’ online activity and flag children at risk of self-harm and mental-health crises are “unproven” and come with significant risks, a new report warns.
“No independent research or evidence has established that these monitoring systems can accurately identify students experiencing suicidal ideation, considering self-harm, or experiencing mental-health crises,” according to the Future of Privacy Forum, a Washington-based think tank. “Self-harm monitoring systems introduce greater privacy risks and unintended consequences for students.”
The report comes on the heels of numerous news media investigations of such tools. In 2019, for example, Education Week published an in-depth look at how digital surveillance systems led schools to flag students for sending files containing the word “gay” and for the content of personal photos accidentally uploaded to their school-issued devices.
The reach of such systems continues to grow, thanks in large measure to the COVID-19 pandemic, which forced more students online and sparked an apparent rise in student suicides and mental-health crises. Popular ed-tech company Gaggle, for example, now claims 1,500 school district clients and counting.
“In a school setting—whether virtual or in person—adults have a legal obligation to keep kids safe,” Gaggle CEO Jeff Patterson said in a statement. “Gaggle believes firmly in the importance of protecting student privacy and is a long-standing supporter of the Future of Privacy Forum, and would welcome opportunities to continue to collaborate with FPF.”
Amelia Vance of the Future of Privacy Forum stopped short of saying K-12 leaders should forgo such systems altogether but warned educators to do extensive due diligence before adopting them.
“Schools should not employ self-harm monitoring unless they have robust mental-health resources established and common-sense data protections in place,” said Vance, the director of youth and education privacy for the group.
Self-harm monitoring systems raise privacy, equity, legal concerns
The new report describes self-harm monitoring systems as “computerized programs that can monitor students’ online activity on school-issued devices, school networks, and school accounts to identify whether students are at risk of dangerous mental-health crises.”
Such systems typically collect and scan digital information ranging from students’ web-browsing histories to the contents of their documents and email messages, using algorithms and sometimes human reviewers to search for keywords that might indicate trouble. When content is flagged, alerts are typically sent to school or district administrators, who sometimes take the information to third parties such as law enforcement.
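To make those mechanics concrete, here is a minimal, hypothetical sketch of how keyword-based flagging with a human-review step could work in principle. The watch-list terms, function names, and structure are illustrative assumptions, not drawn from Gaggle, GoGuardian, or any other vendor’s actual product.

```python
# Hypothetical sketch of keyword-based content flagging -- not any vendor's actual code.
# It scans student-generated text for terms on a watch list and routes matches
# to a queue for human review before any alert reaches school officials.

WATCH_TERMS = {"hurt myself", "end it all", "suicide"}  # illustrative terms only


def flag_content(document_text: str) -> list[str]:
    """Return the watch-list terms found in a piece of student-generated text."""
    lowered = document_text.lower()
    return [term for term in WATCH_TERMS if term in lowered]


def review_queue(documents: dict[str, str]) -> dict[str, list[str]]:
    """Map document IDs to matched terms so flagged items can go to a human reviewer."""
    return {doc_id: hits for doc_id, text in documents.items()
            if (hits := flag_content(text))}


if __name__ == "__main__":
    sample = {
        "essay-001": "My weekend plans...",
        "chat-042": "sometimes I want to hurt myself",
    }
    print(review_queue(sample))  # only 'chat-042' would be queued for review
```

Even this toy example hints at the accuracy problem the report raises: a naive substring match will miss crisis language it has never seen and will flag benign text, as happened with the files containing the word “gay” in the 2019 investigation.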
The companies that make such tools regularly tout hundreds or thousands of lives saved and catastrophes averted.
A Gaggle spokeswoman, for example, said in a statement that the company saved 1,408 lives last year alone. That number is based on reports back from district clients, flagged content that contained a “clear and definitive” suicide plan, or both. Gaggle is among the companies that use trained human reviewers to determine which flagged content merits an alert to school officials.
Still, the Future of Privacy Forum suggests it’s unclear whether self-harm monitoring systems can accurately identify a high percentage of at-risk students while avoiding “false flags” of children who are not really considering harming themselves or others.
And even when self-harm monitoring systems do work as advertised, it’s not clear that merely flagging students’ digital content reliably leads to an appropriate mental-health intervention.
The group’s new report also details a range of other potential problems:
- Legal violations: While schools are required by the Children’s Internet Protection Act to block obscene or harmful content on their networks and devices, it remains unclear whether the federal law clears the way for self-harm monitoring technologies as filters, the Future of Privacy Forum says. It’s also unclear how the Family Educational Rights and Privacy Act applies to the information such technologies gather on students, and surveilling and flagging students’ off-campus online activity may in some circumstances violate Fourth Amendment protections against unlawful searches and seizures.
- Equity concerns: Vulnerable children and students from “systematically marginalized groups” may face an elevated risk of harm from monitoring technologies, the new report maintains. Poor students who lack their own personal devices may have more of their online activity surveilled because they’re forced to rely on school-issued computers, for example. Students who are gay, lesbian, bisexual, or transgender may also be targeted for harassment and stigmatization based on how their online activity is scanned and flagged.
- Privacy concerns: Overcollection and oversharing of information on students’ mental-health status could expose students to sanction by school staff or law-enforcement personnel who are not properly trained to interpret the information in context, the group warns. Sensitive student data that are not deleted in a timely manner also pose a risk.
- Curtailing intellectual freedom: Some researchers also warn of a “chilling effect,” in which students are hesitant to search for needed information or resources for fear of being watched.
Among those sharing such concerns is the National Association of School Psychologists, which has not taken an official position on whether schools should use self-harm monitoring technologies.
“We would raise cautions about the possibility of wrongly identifying students or misuse of data,” a spokeswoman for the group said via email.
Monitoring systems not a substitute for mental-health services
NASP and the Future of Privacy Forum were also aligned in recommending that K-12 districts ensure they have an adequate number of school psychologists, counselors, and social workers to support the needs of students who are at risk.
“Monitoring systems cannot serve as a substitute for robust mental-health supports provided in school or a comprehensive self-harm prevention strategy rooted in well-developed medical evidence,” the report says.
Other recommendations include working with parents and community members to develop a shared understanding of values and priorities before adopting monitoring technology; developing clear policies about what information is collected, who has access to it, and how long it is stored; and clearly communicating those policies to school staff and parents alike.
“It is imperative that school districts approach any self-harm monitoring system holistically, taking into account the totality of harms that could arise from hastily adopting technology without well-developed implementation policies and the necessary accompanying school-based mental-health resources,” the report concludes.
GoGuardian, maker of filtering and monitoring services used by roughly 14,000 schools and districts nationwide, applauded the recommendations as “thoughtful.”
“We recognize the important role that school leaders play in balancing student privacy and safety in the digital age and are committed to building solutions that support that balance,” a company spokesman said in a statement.