As ChatGPT and similar technologies have gained prominence in middle and high school classrooms, so, too, have AI-detection tools. The majority of teachers have used an AI-detection program to assess whether a student's work was completed with the assistance of generative AI, according to a new survey. And students are increasingly getting disciplined for using generative AI.
But while detection software can help overwhelmed teachers feel like they are staying one step ahead of their students, there is a catch: AI detection tools are imperfect, said Victor Lee, an associate professor of learning sciences and technology design and STEM education at the Stanford Graduate School of Education.
"They are fallible, you can work around them," he said. "And there is a serious harm risk associated in that an incorrect accusation is a very serious accusation to make."
A false positive from an AI-detection tool is a scary prospect for many students, said Soumil Goyal, a senior at an International Baccalaureate high school in Houston.
"For example, my teacher might say, 'In my previous class I had six students come up through the AI-detection test,'" he said, although he's unsure if this is true or if his teachers might be using this as a scare tactic. "If I was ever faced with a teacher, and in his mind he is 100 percent certain that I did use AI even though I didn't, that's a tough scenario. [...] It can be very harmful to the student."
Schools are adapting to growing AI use but concerns remain
In general, the survey by the Center for Democracy & Technology, a nonprofit organization that aims to shape technology policy with an emphasis on protecting consumer rights, finds that generative AI products are becoming a bigger part of teachers' and students' daily lives, and that schools are adjusting to that new reality. The survey, conducted in December of last year, included a nationally representative sample of 460 6th through 12th grade public school teachers.
Most teachers (59 percent) believe their students are using generative AI products for school purposes. Meanwhile, 83 percent of teachers say they have used ChatGPT or similar products for personal or school use, representing a 32 percentage point increase since the Center for Democracy & Technology surveyed teachers last year.
The survey also found that schools are adapting to this new technology. More than 8 in 10 teachers say their schools now have policies outlining whether generative AI tools are permitted or banned, and that they have received training on those policies, a drastic change from last year, when many schools were still scrambling to figure out a response to a technology that can write essays and solve complex math problems for students.
And nearly three-quarters of teachers say their schools have asked them for input on developing policies and procedures around students' use of generative AI.
Overall, teachers gave their schools good marks when it comes to responding to the challenges created by students using generative AI: 73 percent of teachers said their school and district are doing a good job.
That鈥檚 the good news, but the survey data reveals some troubling trends as well.
Far fewer teachers report receiving training on appropriate student use of AI and on how to respond if they think students are abusing the technology:
- Twenty-eight percent of teachers said they have received guidance on how to respond if they think a student is using ChatGPT;
- Thirty-seven percent said they have received guidance on what responsible student use of generative AI technologies looks like;
- Thirty-seven percent also say they have received guidance on how to detect whether students are using generative AI in their school assignments;
- And 78 percent said their school permits the use of AI-detection tools.
Only a quarter of teachers said they are "very effective" at discerning whether assignments were written by their students or by an AI tool. Half of teachers say generative AI has made them more distrustful that students' schoolwork is actually their own.
A lack of training coupled with a lack of faith in students鈥 work products may explain why teachers are reporting that students are increasingly being punished for using generative AI in their assignments, even as schools are permitting more student use of AI, the report said.
Taken together, these findings make the widespread use of AI-detection software (68 percent of teachers, up substantially from last year) concerning, the report said.
"Teachers are becoming reliant on AI content-detection tools, which is problematic given that research shows these tools are not consistently effective at differentiating between AI-generated and human-written text," the report said. "This is especially concerning given the concurrent increase in student disciplinary action."
Simply confronting students with the accusation that they used AI can lead to punishment, the report found. Forty percent of teachers said that a student got in trouble for how they reacted when a teacher or principal approached them about misusing AI.
What role should AI detectors play in schools' fight against cheating?
Schools should critically examine the role of AI-detection software in policing students鈥 use of generative AI, said Lee, the professor from Stanford.
"The comfort level we have about what is an acceptable error rate is a loaded question: would we accept one percent of students being incorrectly labeled or accused? That's still a lot of students," he said.
A false accusation could carry wide-ranging consequences.
"It could put a label on a student that could have longer-term effects on the student's standing or disciplinary record," he said. "It could also alienate them from school, because if it was not AI-produced text, and they wrote it and were told it's bad, that is not a very affirming message."
Additionally, some research has found that AI detection tools are more likely to falsely identify English learners' writing as produced by AI.
Low-income students may also be more likely to get in trouble for using AI, the CDT report said, because they are more likely to use school-issued devices. Nearly half the teachers in the survey agree that students who use school-provided devices are more likely to get in trouble for using generative AI.
The report notes that students in special education use generative AI more often than their peers, and that special education teachers are more likely to say they use AI-detection tools regularly.
Research is also finding that there are ways to trick AI-detection systems, Lee said. And schools need to think about the tradeoffs in time and resources of keeping abreast of inevitable developments in AI, in AI-detection tools, and in students' skills at getting around those tools.
Lee said he sees why detection tools would be attractive to overwhelmed teachers. But he doesn't think AI-detection tools alone should determine whether a student is improperly using AI to do their schoolwork. A detector's output could be one data point among several used to determine whether students are breaking rules that should be clearly defined.
Shawn Vincent is the principal of Bruce Whittier Middle School in Poland, Maine, which serves about 200 students. He said he hasn't had too many problems with students using generative AI programs to cheat. Teachers have used AI-detection tools as a check on their gut instincts when they suspect a student has improperly used generative AI.
"For example, we had a teacher recently who had students writing paragraphs about Supreme Court cases, and a student used AI to generate answers to the questions," he said. "For her, it did not match what she had seen from the student in the past, so she went online to use one of the tools that are available to check for AI usage. That's what she used as her decider."
When the teacher approached the student, Vincent said, the student admitted to using a generative AI tool to write the answers.
Teachers are also meeting the challenge by changing their approaches to assigning schoolwork, such as requiring students to write essays by hand in class, Vincent said. And although he's unsure about how to formulate policies to address students' AI use, he wants to approach the issue first as a learning opportunity.
"These are middle school kids. They are learning about a lot of things this time in their life. So we try to use it as an educational opportunity," he said. "I think we are all learning about AI together."
Speaking from a robotics competition in Houston, Goyal said that he and his friends sometimes trade ideas for tricking AI-detection systems, although he doesn't use ChatGPT to do the bulk of his assignments. When he uses it, it's to generate ideas or check grammar, he said.
Goyal, who wants to work in robotics when he graduates from college, worries that some of his teachers don't really understand how AI detection tools work and that they may be putting too much trust in the technology.
"The school systems should educate their teachers that their AI-detection tool is not a plagiarism detector [...] that can give you a direct link to what was plagiarized from," he said. "It's also a little bit like a hypocrisy: The teachers will say, 'Don't use AI because it is very inaccurate and it will make up things.' But then they use AI to detect AI."