
Opinion

What Makes Students (and the Rest of Us) Fall for AI Misinformation?

Studies show that students can become more savvy at evaluating online information
By Sam Wineburg & Nadav Ziv | October 25, 2024 | 4 min read

Four years ago, during the 2020 election, we wrote in the Los Angeles Times that young people were struggling to spot disinformation because of outdated lessons on navigating the internet. Today, educators risk making the same mistakes with artificial intelligence. With the election at our doorstep, the stakes couldn't be higher.

Studies by our research team, the Digital Inquiry Group (formerly the Stanford History Education Group), showed that young people are easily deceived because they judge online content by how it looks and sounds. That's an even bigger problem with AI, which makes information feel persuasive even when it distorts content and ignores context. Educators must show students the limits of AI and teach them the basic skills of internet search for fact-checking what they see.

When it comes to AI, leaders preach "great excitement and appropriate caution," as Washington state Superintendent Chris Reykdal put it in a recent teachers' guide. He writes of a "full embrace of AI" that will put that state's public education system "at the forefront of innovation." New York City schools' former chancellor, David C. Banks, who stepped down amid a federal investigation, said in September that AI can change schools for the better. The "appropriate caution," however, remains a misty disclaimer.

Washington state's guidelines, like those issued by other states and school systems, rightly warn that AI may be biased and inaccurate. Washington state stresses that students shouldn't automatically trust the responses of large language models and should "critically evaluate" responses for bias. But this is like urging students in driver's education to be cautious without teaching them that they need to signal and check blind spots before passing the car ahead of them.

This pattern repeats the mistakes we saw with instruction on spotting unreliable information online: educators wrongly assuming that students can recognize danger and locate content that's reliable.

Massachusetts Institute of Technology professor Hal Abelson advises students that if they come across "something that sounds fishy," they should say, "Well, maybe it's not true." But students are in school precisely because they don't know a lot. They are the least equipped to know if something sounds fishy.

Imagine a history student consulting an AI chatbot to probe the Battle of Lexington, as one of us recently tested. The large language model says this conflagration, which launched the American Revolution, was initiated "by an unknown British soldier." In truth, no one actually knows who fired first. The chatbot also reports that "two or three" British soldiers were killed during the skirmish. Wrong again. None was. Unless you're a history buff, this information doesn't sound "fishy."

A second danger is that AI mimics the tone and cadence of human speech, tapping into an aesthetic of authority. Presenting information with confidence is a trap, but an effective one: Our 2021 study of 3,446 high school students reveals the extraordinary trust students place in information based on a website's superficial features.

When students conflate style with substance and lack background knowledge, the last thing they should do is try to figure out if something "sounds fishy." Instead, the detection of unreliable information and responsible use of AI rests on internet search skills that enable them to fact-check.

Here's the good news: Studies by our team and others show that students can become more savvy at evaluating online information. Without delay, educators should focus on AI literacy that emphasizes why content can't be judged just by looking at it, along with search literacy that gives students the tools to verify information.

On the AI literacy front, educators need to help students understand that large language models can generate misleading information that looks good and pull scientific citations out of thin air. Next, they should explain to students how the chatbots work and how their training data are liable to perpetuate bias. When Purdue University researchers showed people how large language models struggled to recognize the faces of brown and Black people, participants not only grasped this point, they also became more skeptical of other AI responses.

Second, teachers need to make sure their students possess basic online search skills. Expert fact-checkers don't rely on how something "looks." Students, likewise, need to leave an unfamiliar website and see what the wider web says about it. The same advice applies to AI: Students need to go beyond the seemingly credible tone of a chatbot and seek context by searching the broader web.

Once there, they should take advantage of, yes, Wikipedia, which is a remarkably accurate resource with safeguards to weed out errors. Having students compare AI responses to Wikipedia entries highlights the difference between artificial and human intelligence. Whereas AI issues a murky smoothie of ambiguously sourced information, Wikipedia requires that claims be anchored to verifiable sources. The site's "talk" page provides a record of debates by real people, not algorithms, over the evidence that supports a claim.

Our studies have shown the danger of taking information at face value. This threat only increases as AI churns out flawed content with encyclopedic authority. And yet, some educators are telling students to vibe-check AI-produced information. Or to evaluate it without first making sure they know how.

Let鈥檚 pair genuine caution about AI with proven search strategies so that students can avoid falling for misinformation and locate trustworthy sources online.

Resources for Teaching Search Literacy

  • (book)
  • (website)
  • from teachers and students in Civic Online Reasoning classrooms (video, 3 minutes)
  • what a lesson looks like in real classrooms (video, 4 minutes)
  • Take advantage of (website, sign up)


