Corrected: The original version of this story misspelled Daniel Vargas Campos.
ChatGPT's release in November prompted big worries over how students could use it to cheat on all kinds of assignments.
But that concern, while valid, has overshadowed other important questions educators should be asking about artificial intelligence, such as how it will affect their jobs and students, said Daniel Vargas Campos, a curriculum program manager with Common Sense Media, a nonprofit research organization that develops curricula and reviews digital media.
One big question: How will artificial intelligence change the teaching of media literacy skills that help students determine the intent and accuracy of the media they consume?
Education Week spoke with Vargas Campos about how media literacy education is at a critical moment as educators grapple with the implications of AI-driven technologies. This interview was edited for length and clarity.
In what ways could you see AI changing media literacy education?
There are layers to it. We are concerned that with the rise of artificial intelligence, misinformation is going to proliferate a lot more in online spaces. That's one layer. Another layer, and something that's a little bit less talked about, is how even just the artificial intelligence hype is already challenging how we think about media literacy, before we even see examples of AI being used explicitly for misinformation.
There was a term that the World Health Organization came up with about two years ago, in the middle of the pandemic: the "infodemic." There's too much information out there, and that makes it difficult to sort what's real from what's fake. That is what's happening right now with artificial intelligence. The real challenge is that even just talking about the potential negative impacts artificial intelligence can have on misinformation creates an environment where it's harder for people to trust what they see online.
To give you an example: A [few] weeks ago there was a video that went viral of a drag show, and there were babies in the video. It was trying to stoke emotions, like, "Oh, that shouldn't be allowed." But what was interesting is that people's response to it was immediately, "Oh, this is a deep fake." Turns out, the video was real; it was just an example of the most common type of misinformation, which is real information taken out of context.
Now, the challenge is that when we just label that automatically as a deep fake, then we don't go through that extra step of putting into practice our media literacy skills. You're bypassing the critical thinking that you need to do to actually consider, what are the impacts? What is this information trying to do?
How do educators need to change their approach?
It does require a shift. And this is a shift that's not necessarily just because of AI; it's because the information-seeking pattern of young people is different. In terms of how we teach media literacy, we need to update our approach to meet students' actual experiences before we even dive into AI. We have to understand that most kids get their news from social media, and a lot of the information-seeking behaviors and habits that they develop are developed as part of an online community.
Now, when it comes down to artificial intelligence, a big part of this conversation is to just talk to young people about this issue, but really from the perspective of what they're worried about. Because AI is already having lots of negative impacts in kids' lives.
So, this is a question about how do we update media literacy for the next five, 10 years? And part of it is integrating or adding these conversations around AI literacy into how we talk about media literacy.
Do you see a disconnect between adults and kids regarding their biggest worries about AI?
Especially in education, we went straight to: "Kids are going to use this to write essays, and it's going to be plagiarism." And we kind of just jumped way ahead to this very unique use case. I do think that there's a disconnect, because kids are engaging with this sort of AI in all sorts of different realms of their digital lives.
[For example, the social media site] Discord has a summarizing AI. So, if you're in an online forum, keeping up with the conversation could be super hard, especially if you have a thousand people commenting on something. Now there's AI that gets used to summarize the conversation.
These are deeper questions that are less about plagiarism and more about, within your social life, your community, how can you identify bias? How can you identify whether the text being used to share information with you is giving you an accurate representation of what's happening?
A big component to this, just general advice for teachers, is how do we create more meaningful connections between media literacy and social-emotional learning? That's a space that's underdeveloped. Social-emotional learning is about self-awareness and social awareness.
We want kids to also consider not just how is [media] making you feel or how is it making you react, but what can you notice about the general impact that this type of information or this conversation is having on people鈥檚 behavior?