Most experts recommend that schools and districts steer clear of banning artificial intelligence, instead letting students learn about AI by permitting them to use AI-powered tools like ChatGPT to complete some assignments.
That opens up a host of questions for teachers and school leaders:
- When is it OK to use AI and when is it not?
- How much can a student rely on AI?
- How should students cite information they learned or sourced from AI tools?
- And what's the best way for teachers to communicate their expectations for students on how to use AI appropriately?
Enter North Carolina, which is one of at least five states that have released AI guidance for schools. The Tar Heel State sought to give teachers a roadmap for setting parameters around AI's use.
North Carolina knew early on that it did not want its schools to ban ChatGPT and other AI tools, said Catherine Truitt, North Carolina鈥檚 superintendent of public instruction. Rather, she said it wanted to teach students how to understand and use those tools appropriately, in part to prepare them for a future job market in which AI skills and knowledge are likely to be valued.
Prohibiting the use of these tools is like "sticking your head in the sand," Truitt said. "This is not something that we can pretend doesn't exist and think that we can ban it and then it won't be an issue."
In crafting the guidance, the state sought to "create a step-by-step playbook that makes it so easy for a school district or even a school by itself to embrace AI and feel that they're doing it in the right way," said Vanessa Wrenn, the chief information officer for the North Carolina education agency. "Our students are either going to use it on the down-low, or they can use it when we give them guidance on how to use it right, how to be safe, and how to use it well."
In an effort to make its guidance as user-friendly and practical as possible, North Carolina included a chart that outlines different possibilities for using AI on assignments without encouraging cheating or plagiarism.
The graphic has five different levels based on the colors red, yellow, or green. The first level, shown in red and labeled level "0," communicates the expectation that students complete an assignment the old-fashioned way, without any help from AI.
The second level, shown in yellow and labeled level "1," means students are allowed to brainstorm ideas or figure out how to structure their writing with assistance from AI. Students must disclose that they used AI and submit a link to their interactions with chatbots. The third level, also shown in yellow and labeled level "2," allows students to use AI in editing, but not to create new content. Again, students must disclose its use and share links to chats.
If an assignment corresponds with the fourth level, shown in green and labeled level "3," students are permitted to use AI to complete certain elements of a task, as specified by the teacher. And on the fifth level, also shown in green and labeled level "4," students are allowed to use AI tools in any way that helps them complete the assignment, as long as the student is responsible for evaluating and overseeing the technology's work. In those instances, AI is supposed to serve as a partner or "co-pilot," not as a solo content creator. At both the fourth and fifth levels, students are also required to submit citations explaining how they used AI.
"There has to be a scale for when some AI is acceptable versus when all AI is, and it's always going to depend on the assignment itself," Truitt said.
If that graphic of the different levels of AI use looks like it was created for a teacher to print out and post on their wall, that's because it was, Wrenn said.
"I wanted something that if a teacher wanted to use this graphic in the classroom, it'll be very easy for teachers, [and] for students, to understand when they could or could not use AI on an assignment," Wrenn said.