Picture a 3rd grade classroom. A teacher and a child sit side by side, open booklets before them both. The teacher starts a timer. The girl begins to read: “Goldfish make good pets. They are easy to take care of and do not cost much to feed. Goldfish are fun to watch while they are swimming.”
“Now tell me as much as you can about the story you just read. Ready, begin,” the teacher says, starting the timer again.
The girl quickly scans the passage. “Um, he has a pet goldfish. It’s easy to take care of. He likes to watch it swim. It’s a good pet.”
The teacher tallies each word the child says related to the passage, determines that she has provided three meaningfully sequenced details that capture a main idea, and circles a score, the highest one there is: 4.
The teacher restarts the timer and repeats the process with two more passages.
The teacher in this scene is testing the child’s reading using Acadience, one of several literacy screeners that the New York City Department of Education requires elementary schools to administer three times a year. And the child, according to the manual in the teacher’s lap, has just demonstrated excellent reading comprehension.
The department’s mandate was no doubt influenced by the ascendant “science of reading” movement. Its proponents advocate for greater focus on phonics instruction—structured lessons that teach the connections between letters and sounds—in kindergarten through 2nd grade. They recommend screeners like Acadience because they generate useful data on children’s phonics knowledge in these early grades. However, in New York, these screeners are also being used in upper elementary grades, where they offer teachers very little of what they actually need: a nuanced and accurate picture of students’ comprehension abilities.
While “science of reading” proponents see comprehension as the ultimate goal of reading, they don’t prioritize it as a goal or focus of reading instruction. They argue that, as long as readers come to texts with strong decoding skills and a broad knowledge base, comprehension is all but assured. Therefore, the thinking goes, instruction should focus on developing students’ phonics knowledge (which is the foundation of decoding) as well as broad topical knowledge.
A reading assessment can’t be valid if the kind of reading it requires doesn’t match the kind of reading we need to do in real life.
The two of us—a teacher-educator specializing in literacy and a veteran elementary school teacher—argue instead that teachers must actively support students’ comprehension. This means two things. First, we must teach comprehension as a multidimensional experience. We want children to comprehend what’s happening literally in the text (who did what when), but we also want them to be able to analyze how parts of the text (literary devices, figurative language, structural choices) work together to develop ideas. And we want them to interpret the purpose and significance of the text in relation to their lives and to society.
Second, supporting students’ comprehension means nurturing what’s called active self-regulation—the ability to monitor our understanding and adjust our reading when something doesn’t make sense. Readers can do this by simply rereading, by strategically focusing their attention, or by intentionally searching for information to fill in gaps in understanding.
Any tool we use to assess reading must generate information about these two aspects of reading comprehension. In Jessica’s 3rd grade classroom, the Acadience screener did not. Jessica didn’t get a sense of students’ understanding of how characters change, what an author is teaching us, or how details support main ideas, nor did she ascertain students’ ability to evaluate an author’s perspective or analyze how literary devices add meaning to the text. In other words, the assessment didn’t show her whether or not children were engaged in the kind of thinking that enables deep comprehension in realistic reading situations.
The screener took more than two weeks to administer. Multiplied by three administrations a year, that’s six weeks’ worth of lost reading instruction. All Jessica had to show for this investment of time was a set of simple numerical scores based on the words children said in their retells.
The idea of a simple score—the idea that we can quantify reading ability at all—might feel reassuring to educators yearning to tie their teaching to something solid. But screeners like Acadience offer only an illusion of scientific objectivity. After all, a reading assessment can’t be valid if the kind of reading it requires doesn’t match the kind of reading we need to do in real life.
More importantly, how we assess reading shapes how we teach reading. If assessment tools require children to say a certain number of words about a disconnected set of trivial passages, then teachers will be inclined to emphasize recall and disinclined to support children in selecting complex, relevant texts to read.
Our approach to reading instruction is embedded in a broad set of instructional values—values ostensibly shared by New York City’s education department and many other districts across the country. In the summer of 2021, as the department mandated the literacy screener, it also released a “vision statement” for teaching reading that calls for an emphasis on “critical literacy”: instruction meant to “challenge students to be critical thinkers” and “foster critical consciousness.” The statement sees literacy applied to a “culturally relevant curriculum.”
We believe, however, that the screener mandate and the vision statement are in conflict. The mandate undermines the indisputably worthy goals of the vision statement by giving short shrift to the support that students need in constructing meaning from diverse texts and then applying that learning to other pursuits.
What’s happening in New York City reflects a broader trend wherein teachers are expected to negotiate the contradictory pressures to teach reading in a culturally relevant way but assess reading in a way that strips it of all relevance.
What might a relevant assessment look like?
Picture a 3rd grade classroom. A teacher and a child sit side by side, open booklets before them both.
The teacher starts a stopwatch, not a timer. The girl reads a short text about sharks while the teacher notes her decoding errors and tracks her fluency.
“What is the author’s view of sharks?†the teacher asks.
The child replies, “Well, the author wants us to think that sharks are dangerous. Look at this heading: ‘You can run, but you can’t hide.’ That makes a scary feeling. But I disagree! People are probably more dangerous to sharks than sharks are to people. Sharks should be more scared of us.”
There is no numerical score, but the teacher notes that the child knows what’s happening literally in the text and is analyzing and evaluating it.
The child in this scene is reading the way we read in real life. We want our children to read with a critical lens and not take the author’s opinions at face value. We want our children to empathize. And that kind of reading requires instruction, and therefore assessments, that are rich, meaning-based, and authentic.