Artificial intelligence might be able to drive cars, treat disease, and train your front door to recognize your face. But can it crack the toughest nut in literacy: helping kids comprehend what they read?
AI is evolving to meet reading instruction and assessment needs, some experts say. For instance, some believe it won’t be long before tools that use AI’s natural language processing capabilities to measure skills like phonemic awareness are commonplace in schools. Intelligent tutors that can coach students to demonstrate in writing an understanding of a text they’ve read are already emerging.
If AI can improve reading instruction and assessment, it could fill important gaps, educators say.
Reading assessments can serve a variety of purposes: to identify which students need extra help; to diagnose students’ trouble spots; and to monitor progress, which can gauge whether a particular intervention is working.
But at this point, there isn’t a single digital or analog product on the market that can do all those things well, said Matthew Burns, a professor of special education at the University of Missouri and director of the University of Missouri Center for Collaborative Solutions for Kids, Practice, and Policy.
Moreover, the most important reading skill, comprehension, is also the toughest to measure and teach, Burns said. That’s partly because assessing it means gauging what students know and the strength of their vocabulary, not just whether they can sound out words.
“Our assessment of reading comprehension is very surface level,” he said. “We have to figure out a better way to do it. I wouldn’t be surprised if AI was part of the solution to get a really good assessment” of reading comprehension.
But he added a big caveat: “I don’t think technology can replace a teacher.”
Digital adaptive tools get good reviews from teachers but can’t do it all
For now, digital adaptive tools, most of which don’t include an AI component, are among the most widely used technologies to help teachers assess students’ reading abilities.
Adaptive reading software adjusts the level of difficulty based on what students are mastering, advancing them to higher levels or moving them back to more basic instruction depending on how well they perform. Adaptive assessments are also used, though plenty of teachers still measure students’ reading ability without digital tools.
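The leveling logic described above can be illustrated with a minimal sketch. The thresholds and level scale here are hypothetical, chosen only for illustration; real adaptive products use far more sophisticated psychometric models than fixed cutoffs.

```python
def next_level(current_level: int, accuracy: float,
               min_level: int = 1, max_level: int = 10) -> int:
    """Adjust a student's reading level based on the last passage.

    Illustrative thresholds (not from any real product): advance
    above 90 percent accuracy, drop back below 70 percent,
    otherwise stay at the current level.
    """
    if accuracy >= 0.90:
        return min(current_level + 1, max_level)
    if accuracy < 0.70:
        return max(current_level - 1, min_level)
    return current_level

print(next_level(4, 0.95))  # student mastered the passage -> 5
print(next_level(4, 0.60))  # student struggled -> 3
```

The clamping to `min_level` and `max_level` mirrors how such software keeps students within the range of material it actually has.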
Digital adaptive tools can save teachers a lot of time and effort, said Heather Esposito, a technology teacher coach for New Jersey’s Cherry Hill district who previously worked as an English teacher and reading specialist.
In the past, she said, teachers might sit with a student while they read a few passages from a story or article, keeping track of errors, self-correction, and other factors to determine the child鈥檚 fluency level. Then they might ask the student to tell them about what they read, asking different questions to gauge comprehension.
“That takes a lot of time,” Esposito said. “So that’s why software programs came in to try and help with that.”
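The arithmetic behind that by-hand tally is simple. A sketch of the standard running-record calculations teachers use (the article doesn’t endorse any particular formula; these are the conventional ones):

```python
def running_record(total_words: int, errors: int,
                   self_corrections: int):
    """Summarize a running record of a student's oral reading.

    Returns (accuracy, sc_ratio):
      accuracy  -- percentage of words read correctly
      sc_ratio  -- the student self-corrected 1 in every
                   sc_ratio miscues (None if no self-corrections)
    """
    accuracy = (total_words - errors) / total_words * 100
    sc_ratio = ((errors + self_corrections) / self_corrections
                if self_corrections else None)
    return round(accuracy, 1), sc_ratio

print(running_record(100, 5, 1))  # (95.0, 6.0)
```

This is exactly the bookkeeping software automates: counting errors and self-corrections in real time while the student reads aloud.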
That’s likely why Esposito and other educators generally find adaptive reading software useful, according to a survey of 1,058 educators conducted by the EdWeek Research Center from Jan. 26 through Feb. 1. Forty-four percent said they think the tech does a better job of accurately assessing a student’s reading level than non-adaptive software or pen-and-paper methods, including 14 percent who said it does a “much better job.”
That’s compared with just 18 percent who said it does a worse job, including 4 percent who said it is “much worse.” Thirty-eight percent said the effectiveness is about the same.
The current tools have clear limitations, however, Esposito pointed out. Teachers should never rely just on adaptive tools to assess student reading levels, she said.
Teachers need to closely supervise students taking adaptive assessments because they “aren’t foolproof,” said Catherine Snow, a professor at Harvard’s Graduate School of Education who specializes in children’s literacy development.
Students can press a wrong button without realizing it, altering their score, Snow said. Or kids could get a low score on an adaptive assessment because it makes them do a boring task and they disengage. While the tools can often adapt to a kid’s reading level, many are less able to adapt to a student’s particular interests, Snow added.
“Kids have things they want to read about and things they don’t want to read about,” she said. “We sort of ignore that with a seven-year-old and say, ‘He’s not reading this! He’s not getting his practice in!’ Well, it’s some story about dolls and princesses. He doesn’t really care.”
What’s more, students might not take the assessment particularly seriously. “We know as adults that this assessment could be consequential,” Snow said. “The kid just thinks it’s another stupid thing he’s being asked to do.”
There are structural weaknesses too. Some digital reading tools examine reading fluency in part by looking at how quickly students read, Snow said. “Those assessments, in my view, incentivize teachers to push for speed reading, rather than for deep reading, which often means you have to slow down.”
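The speed-based score Snow is describing is typically words correct per minute. A sketch of that common metric (how any specific product computes its fluency score is not specified in the article):

```python
def wcpm(words_attempted: int, errors: int, seconds: float) -> float:
    """Words correct per minute, the usual speed-based fluency score.

    A student who reads 120 words in one minute with 6 errors
    scores 114 WCPM, regardless of how deeply they understood
    the passage, which is precisely Snow's objection.
    """
    return (words_attempted - errors) / (seconds / 60.0)

print(round(wcpm(120, 6, 60), 1))  # 114.0
```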
Like Esposito, Snow finds digital tools particularly lacking when it comes to reading comprehension. “Comprehension is what’s really hard to measure,” Snow said. “There are very few tests that even hint at the deeper comprehension levels that we would really like kids as young as third or fourth grade to be able to get into.”
That’s not a trivial problem, she added. “Comprehension is what it’s all about, right? Really, that’s the reason we’re teaching kids to read.”
Could AI help measure students’ reading comprehension and improve writing?
Some educational technology and literacy organizations are optimistic that adding AI to adaptive reading tools might offer the best opportunity yet to tackle that missing reading comprehension piece.
For instance, Quill, a nonprofit ed-tech literacy organization, has created an AI-powered tool that can read students’ answers to open-ended questions about a passage or article. The tool can then coach students to use evidence from the text, as well as proper grammar, to improve their responses. That can help give students the practice they need to improve both their reading comprehension and writing skills, said the organization’s founder and CEO, Peter Gault.
Building reading comprehension through writing contrasts with the typical approach, Gault said. “Almost every reading tool today uses multiple-choice questions as the main mechanism for then demonstrating your knowledge of the text. Our perspective on multiple choice is that it is a shallower way of learning.”
Khan Academy, a nonprofit digital learning company with more than 145 million registered users, is also considering using AI to help with students鈥 reading comprehension and writing.
“In the next year, there’s going to be ways that you can actually do reading comprehension and writing at the same time, where there’s a passage, and then the AI essentially works with the student to construct essentially a five-paragraph essay, arguing a point anchored in that essay,” said the organization’s founder, Sal Khan, in an interview. “So, it’s both reading comprehension and writing at the same time. Stuff like this never happened before.”
For her part, Esposito has already been experimenting with the latest version of ChatGPT, the AI-powered writing tool that emerged late last year. She’s asked it, for instance, to explain the hero cycle, a common language arts concept, to a 10-year-old who loves video games, or a 15-year-old who reads manga, a popular genre of Japanese graphic novels. The tool produced responses that were much better than she had expected.
“You could take a topic or a concept and ask it to level it” to match a student’s reading level and “to make it more meaningful” given a kid’s personal interests, Esposito said. And she expects the tools will only improve with time.
“AI is a hard trend,” Esposito said, meaning it’s here for the long term. ChatGPT is just an early iteration, she said, likening it to the search engines of the late 1990s.
But even powerful AI technology still needs substantial teacher input, she said. “It’s about striking a really good balance of seeing the potential that’s out there with AI, finding the tools that work best for you and your students, and knowing that you can pivot at any point,” Esposito said.
Snow seconded that sentiment and cautioned teachers to rely on their own judgment even as increasingly complex AI reading tools emerge.
“Teachers should always know that their instincts might be better” than the people who designed the software or the school leaders who purchased it, Snow said. “If they think something is not really working very well, it might be because it’s not really working very well, and they should be cautious about imposing it on students.”