Corrected: An earlier edition of this story gave an outdated name for an assessment being developed to gauge students' reading comprehension. The correct name for the test is now the Global Integrated Scenario-Based Assessment.
The use of testing in school accountability systems may hamstring the development of tests that can actually transform teaching and learning, experts from a national assessment commission warn.
Members of the Gordon Commission on the Future of Assessment in Education, speaking at the annual meeting of the National Academy of Education here Nov. 1-3, said that technological innovations may soon allow much more in-depth data collection on students, but that current testing policy calls for the same test to fill too many different and often contradictory roles.
The nation's drive to develop standards-based accountability for schools has led to tests that, "with only a few exceptions, systematically overrepresent basic skills and knowledge and omit the complex knowledge and reasoning we are seeking for college and career readiness," the commission writes in one of several interim reports discussed at the Academy of Education meeting.
"We strongly believe that assessment is a primary component of education, ... [part of] the trifecta of teaching, learning, and testing," said Edmund W. Gordon, the chairman of the commission and a professor emeritus of psychology at Yale University and Teachers College, Columbia University.
The two-year study group launched in 2011 with initial funding from the Princeton, N.J.-based Educational Testing Service and a membership that reads like a who's who of education research and policy. Its 32 members include author and education historian Diane Ravitch of New York University; former West Virginia Gov. Bob Wise of the Washington-based Alliance for Excellent Education; and cognitive psychologist Lauren Resnick of the University of Pittsburgh.
The panel is developing recommendations both for research on new assessments, including those for the Common Core State Standards, and for policy guidance to educators on how to use tests appropriately. The final recommendations, expected at the end of the year, will be based on two dozen studies and analyses from experts in testing on issues of methods, student privacy, and other topics.
Stopping Overlap
Education policymakers understandably want to develop multimillion-dollar tests as efficiently as possible, said Lorrie A. Shepard, the education dean at the University of Colorado at Boulder and a member of the commission's executive council. However, she said, they often confuse summative tests, the large-scale snapshots such as the standardized tests states use for accountability, with formative tests, which are used to diagnose specific learning problems in individual students and improve instruction over time.
"This set of misbeliefs is actually fostering worse and worse tests," she said, ones that assess only surface details that can be gathered for quick turnaround rather than more-nuanced measures of deep knowledge, retention, and the ability to transfer knowledge to other subjects.
Because teachers are held accountable, and increasingly evaluated professionally, on the basis of those tests, Ms. Shepard added, "the way math and reading are taught are disabling because they are taught for recognition and taught for memorization, and even comprehension is being postponed. The way those subject matters get presented is the harm of those teaching-to-the-test regimes."
Both Ms. Shepard and Elena Silva, a senior policy analyst at Education Sector, a Washington think tank, said commercial testing companies increasingly offer electronic versions of tests that don't gauge deeper learning. Ms. Shepard said that education needs a Consumer Reports to identify tests being used for purposes for which they were not designed.
Test developers and policymakers alike should think of tests as a framework for creating feedback loops for improvement, argued Robert J. Mislevy, who holds the chair in measurement and statistics at the ETS and serves on the commission's executive council.
"Who needs the information at what time?" Mr. Mislevy said. "Sometimes feedback loops are very tight; when you're playing a learning game, for example, feedback loops are taking place in a second or two. There are other feedback loops that are much bigger, like those used by chief state school officers looking at policy over the course of years."
Those assessments, rather than being used simply to rank students, could help educators identify learning patterns, he said.
For example, the ETS' Global Integrated Scenario-Based Assessment, now in field-testing as part of the federal Reading for Understanding program, uses scenarios to differentiate a student's comprehension ability from his or her background knowledge.
Each scenario in the test is a cycle. Students first are tested on vocabulary and concepts related to a topic. Then they read a passage on the topic and summarize the main idea and key details of the text.
Finally, the students report on how they have incorporated what they read into what they already knew about the topic, for example by completing an interactive graphic, according to Barbara Foorman, an education professor and the director of the Florida Center for Reading Research at Florida State University, in Tallahassee, who spoke in an interview with Education Week.
Nuanced test design would not replace the need for separate formative and summative tests, Mr. Mislevy of the ETS said, but it could help educators and policymakers think differently about what can be learned from tests.