The parents of a Massachusetts teenager are suing his high school after they say he was unfairly punished for using generative artificial intelligence on an assignment.
The student used a generative AI tool to prepare an outline and conduct research for his project, and when the teacher found out, he was given detention, received a lower grade, and was excluded from the National Honor Society, according to the lawsuit filed in September in U.S. District Court.
But Hingham High School did not have any AI policies in place during the 2023-24 school year when the incident took place, much less a policy related to cheating and plagiarism using AI tools, the lawsuit said. Plus, neither the teacher nor the assignment materials mentioned at any point that using AI was prohibited, according to the lawsuit.
On Oct. 22, the court heard the plaintiffs' request for a preliminary injunction, a temporary measure to maintain the status quo until a trial can be held, said Peter Farrell, the lawyer representing the parents and student in the case. The court is deciding whether to issue that injunction, which, if granted, would restore the student's grade in social studies and remove any record of discipline related to this incident, so that he can apply to colleges without those "blemishes" on his transcript, Farrell said.
In addition, the parents and student are asking the school to provide AI training to its staff. The lawsuit originally also asked for the student to be admitted into the National Honor Society, but the school had already granted that request before the Oct. 22 hearing, Farrell said.
The district declined to comment on the matter, citing ongoing litigation.
The lawsuit is one of the first in the country to highlight the benefits and challenges of generative AI use in the classroom, and it comes as districts and states continue to navigate the complexities of AI implementation and confront questions about the extent to which students can use AI before it's considered cheating.
"I'm dismayed that this is happening," said Pat Yongpradit, the chief academic officer for Code.org and a leader of TeachAI, an initiative to support schools in using and teaching about AI. "It's not good for the district, the school, the family, the kid, but I hope it spawns deeper conversations about AI than just the superficial conversations we've been having."
Conversations about AI in K-12 need to move beyond cheating
Since the release of ChatGPT two years ago, the conversations around generative AI in K-12 education have focused mostly on cheating. Survey results show AI-fueled cheating is a top concern for educators, even though data show students aren't cheating more now that they have AI tools.
It鈥檚 time to move beyond those conversations, according to experts.
"A lot of people in my field, the AI and education field, don't want us to talk about cheating too much because it almost highlights fear, and it doesn't get us in the mode of thinking about how to use [AI] to better education," Yongpradit said.
But because cheating is a top concern for educators, Yongpradit said they should use this moment to talk about the nuances of using AI in education and to have broader discussions about why students cheat in the first place and what educators can do to rethink assignments.
Jamie Nunez, the western regional manager for Common Sense Media, a nonprofit that examines the impact of technology on young people, agreed. This lawsuit "might be a chance for school leaders to address those misconceptions about how AI is being used," he said.
Policies should evolve with our understanding of AI
The lawsuit underscores the need for districts and schools to provide clear guidelines on acceptable uses of generative AI and educate teachers, students, and families about what the policies are, according to experts.
At least 24 states have released guidance for K-12 districts on creating generative AI policies, according to TeachAI. Massachusetts is among the states that have yet to release guidance.
More than a quarter of teachers (28 percent) say their district hasn't outlined an AI policy, according to a nationally representative EdWeek Research Center survey of 731 teachers conducted in October.
One of the challenges with creating policies about AI is that the technology, and our understanding of it, are constantly evolving, Yongpradit said.
"Usually, when people create policies, we know everything we need to know," he said. With generative AI, "the consequences are so high that people are rightly putting something into place early, even when they don't fully understand something."
This school year, Hingham High School's student handbook mentions that "cheating consists of ... unauthorized use of technology, including Artificial Intelligence (AI)," and "Plagiarism consists of the unauthorized use or close imitation of the language and thoughts of another author, including Artificial Intelligence." This language was added after the project in question prompted the lawsuit.
But an outright ban on using AI tools is not helpful for students and staff, especially when its use is becoming more prevalent in the workplace, experts say.
Policies need to be more "nuanced," Yongpradit said. "What exactly can you do and should you not do with AI and in what context? It could even be subject-dependent."
Another big challenge for schools is the lack of AI expertise among staff; these are skills every teacher needs to be trained in and comfortable with. That's why there should also be a strong foundation of AI literacy, Yongpradit said, "so that even in situations that we haven't thought of before, people have the framework" they need to assess the situation.
One example of a more comprehensive policy is that of Uxbridge High School in Massachusetts. Its policy says that students can use AI tools as long as the use is not "intrusive" and doesn't "interfere" with the "educational objectives" of the submitted work. It also says that students and teachers must cite when and how AI was used on an assignment.
The Uxbridge policy acknowledges the need for AI literacy for students and professional development for staff, and it notes that the policy will be reviewed periodically to ensure relevance and effectiveness.
"We believe that if students are given the guardrails and the parameters by which AI can be used, it becomes more of a recognizable tool," said Mike Rubin, principal of Uxbridge High School. With those clear parameters, educators can "more readily guard against malfeasance, because we provide students the context and the structure by which it can be used."
Even though AI is moving really fast, "taking things slow is OK," he said.