A recent AI-generated "deepfake" audio recording of a principal making hateful comments has laid bare an uncertain landscape for educators: a bleak one that could include costly investigations, ruined reputations, and potential defamation cases.
On April 25, the Baltimore County Police Department charged Dazhon Darien, 31, the athletic director at Pikesville High School in Baltimore, Md., with theft, stalking, and disrupting school operations. Darien had created and circulated a faked audio clip of Pikesville's principal, Eric Eiswert, making racist and antisemitic remarks against students and colleagues. The audio clip, which surfaced in January, quickly went viral and divided the school community over its veracity.
For more than a year, U.S. schools and districts have grappled with the wide-reaching implications of AI technology for teaching and learning. What happened to Eiswert, who has since been absolved of wrongdoing, shows that AI can also be weaponized against school officials, and that most districts are ill-equipped to handle that threat.
School leaders have noticed, and they believe something similar could just as easily happen to them or their staff.
"I was very alarmed to see that AI could be used in this way. As someone who is considered by law to be a public figure, you are always open to criticism of this nature," said Kimberly Winterbottom, principal of Marley Middle School in Glen Burnie, Md. "Someone could click a picture across a parking lot and put it up on social media. But this is a whole new level."
A lack of policy
After the deepfake audio recording came out, Eiswert was placed on administrative leave from January to April while the county police and the school district investigated. He isn't coming back to Pikesville High this school year, said Myriam Rogers, the superintendent of Baltimore County Public Schools, in a statement. The district is "taking appropriate action regarding the arrested employee's conduct, up to and including a recommendation for termination."
Rogers did not clarify whether Eiswert will return for the new school year.
The faked audio clip, played over 27,000 times, roiled the Baltimore County school community, with demands that Eiswert be removed as principal. Eiswert received phone calls and messages threatening his physical safety and that of his family.
Long before the incident, though, there were growing undercurrents in schools of AI tools being misused to target students and educators alike.
Male students have used apps to fake pornographic images and videos of female students. In March 2023, a group of high school students in Carmel, N.Y., created a deepfake video of a middle school principal in the district shouting angry, racist slurs and threatening violence against his Black students.
Such cases have shone a light on the yawning gap between a rapidly evolving technology and the policies needed to govern it.
"We definitely need some adaptation to bring the laws up to date with the technology being used. For instance, the charge of disrupting school activities only carries a maximum sentence of six months," said Scott Shellenberger, the Baltimore County state's attorney, at a press conference held after Darien's arrest.
Principals are vulnerable because of their positions
It's still unclear what specific tool Darien used to create the deepfake. A report by the Baltimore Banner said that Darien had used the school's internet to search for OpenAI's tools and large language models that could process data to produce conversational results.
As authority figures who must take disciplinary action from time to time, principals contend that they are more susceptible to backlash and vengeful reactions, which can now easily take the form of believable-yet-fake video and audio clips. They fear that the technology will progress to a point where it will be difficult to distinguish between real and fake.
The relative ease with which Darien faked the audio has principals thinking carefully about how they communicate with their staff, students, and the parent community.
For one thing, it doesn鈥檛 take a lot of data for an AI tool to be able to replicate a voice from an audio clip.
"I have a colleague who sends out a voice message to her student community. I told her she should stop that," said Melissa Shindel, the principal of Glenwood Middle School in Glenwood, Md. "It could need less than a two-minute audio clip. And you can't always trace the origin."
Shindel said she鈥檚 been cautioning other school leaders about their unbridled support of AI in their schools.
"People are in denial about the harm it could do," she said. "Deepfakes are more damaging than negative social media posts. You believe what you see or hear over what you read."
A troubling notion, exemplified in the Eiswert case, is that a grievance could spin into AI-fueled revenge: a parent unhappy about how a child was disciplined, a student or staff member angry about a decision. Shortly before Darien created and spread the fake audio clip, Eiswert had been investigating Darien's alleged misuse of school funds.
"We are vulnerable. Everything that happens in the school funnels to me. Credit and blame," Shindel added.
Winterbottom, the principal from Glen Burnie, said Darien's extreme actions made her revisit the impact she has on people, especially when disciplinary issues are involved. But Darien's arrest has made Winterbottom hopeful that the case sends the right message.
"I'm ecstatic that they were able to trace it [the audio]. The precedent is that you're going to get caught," she said.
She hopes it will make people think twice before they jump to conclusions when they encounter a fabricated accusation.
Districts can be proactive on AI use, but can't prevent misuse
Eiswert, though pilloried on social media and put on administrative leave by his district, had the support of the Council of Advisory and Supervisory Employees, which represents school administrators. CASE's executive director, William Burke, said in an email that CASE has maintained the audio was AI-generated from the time it surfaced.
CASE engaged AI experts to assess whether the audio was real and put Eiswert through a polygraph test, the results of which, Burke said, showed conclusively that Eiswert had not "made the statements on the audio." The evidence from the AI experts and the results of the polygraph were shared with the police, Burke said.
Eiswert did not respond to several requests for comment sent via CASE, which is handling media inquiries related to the incident.
Such investigations can prove expensive and time-consuming for unions, schools, and district administrations, especially if they have to handle multiple cases.
"School and district administrators will be given what looks like real evidence [of wrongdoing]. If they don't know the risks associated with a tool like AI, they may believe the evidence even if it's falsified," said Adam Blaylock, a lawyer who works with school districts in Michigan.
Blaylock fears that the lack of awareness about AI could put districts at risk of lawsuits. "If you end a young administrator's career based on something that's not true, it opens up the district to a huge risk of litigation," he said.
As for victims of AI fraud, states' defamation laws typically put the burden of proof on the victim, who would have to engage experts to prove it's not them on the audio or video, an expensive proposition.
"A principal, as in Pikesville's case, may feel personally harmed. But if there was no firing or demotion, they will be hard-pressed to show damages," Blaylock said.
Blaylock is keen to help school districts avoid the pitfalls of AI-generated deepfakes. His advice is to build defenses that help identify AI-generated content.
Updating student and employee handbooks with specific clauses about AI is one idea.
"We have one districtwide policy about technology misuse, which covers cellphones. It should now have language specific to AI," said Winterbottom.
School and district teams should have one member who either is an AI expert or constantly updates their knowledge of the technology's rapid development. "There are some telltale signs of AI-generated content. People in AI videos will sometimes have six fingers. The expert on the team will be familiar with these indicators," said Blaylock.
Ultimately, there is no Turnitin for AI deepfakes yet, Blaylock said, referring to the popular tool used to detect plagiarism in student work. District administrators can only hedge the risk: maintain a list of approved generative-AI tools, train individuals on the appropriate use of AI, and comply with data-protection standards when dealing with any contractor who will use the school's data.
Blaylock encourages individual school leaders to maintain a healthy skepticism about media that strains credulity, while not backing away from muscular leadership.
"The risk is going to be focused on how we review information. But I can't ask school leaders not to do what's best for their kids because they're scared of a deepfake."