The What Works Clearinghouse, the Department of Education's newly launched enterprise to give Consumer Reports-style ratings on research and educational programs, is getting mixed reviews so far from one key group: the researchers whose work it features.
While a few among that small group of scholars are happy to see their work reaching a wider audience, some also express concerns about the way it is being presented. They say reviewers are misinterpreting and pigeonholing their studies and sometimes inadvertently casting aspersions on potentially useful research.
"When I looked at it, I was just kind of appalled," said G. Michael Pressley, the director of doctoral programs in teacher education at Michigan State University in East Lansing. A study he co-wrote on a practice known as reciprocal teaching is among the handful of studies that got a thumbs-up from clearinghouse reviewers when the site was unveiled last month. ("'What Works' Research Site Unveiled," July 14, 2004.)
For their part, clearinghouse researchers say they are already working to address some of the scholars鈥 concerns. They are drafting text for the Web site, for instance, to explain better why some studies are listed as failing to pass muster, and are working as quickly as possible to add more studies to their online archives.
The reviewers plan for now, though, to continue using the strict methodological criteria that have drawn much of the criticism, on the ground that they weed out useful research. Emulating standards set by medical research, the clearinghouse puts a premium on randomized field trials, in which subjects are randomly assigned to either control or experimental groups. As a result, of 18,000 studies reviewed, only 12 so far have fully met clearinghouse standards.
"Random assignment is a strong design," said Rebecca S. Herman, the project director. "It's always been a strong design, and I think it continues to be."
'Does Not Pass Screen'
Some of the harshest criticism has come from Mr. Pressley, who in letters to the clearinghouse and to Education Week has charged that the reviewers mistakenly categorized his study on reciprocal teaching as a trial of peer tutoring. He says peer tutoring is just one part of that approach.
"There is no way to draw a conclusion about peer tutoring or cooperative learning per se from this study," he writes.
In a written reply to Mr. Pressley, Ms. Herman and Robert F. Boruch, the project's principal investigator, said they included the study because they wanted to offer educators as much information as possible on peer-assisted learning.
But Mr. Pressley, like other researchers, also raised questions about the 173 peer-tutoring studies that did not make the grade. They are listed on the Web site under the heading "does not pass screen," which means that they were either deemed irrelevant or did not meet the clearinghouse's methodological standards. The problem, the researchers said, was that the site does not explain why those studies were rejected.
"Many studies that 'do not pass the screen' may be viewed as not valuable when in fact they may be extremely helpful in understanding or adapting an intervention to a new context," said Anthony J. Gabriele, a researcher at the University of Northern Iowa in Cedar Falls whose own study landed in that category. The label did not surprise him, he said, because the study was never designed to answer whether peer tutoring works. But he became concerned when a friend sent an e-mail expressing condolences on the study's categorization.
"To the extent that this classification discourages stakeholders from looking at these studies, I think we may be providing too narrow a focus given our relative understanding regarding what works," Mr. Gabriele said.
James J. Baker, the developer of a middle school mathematics program known as Expert Mathematician, is also dismayed at the way his research on the program is reported. His study, the only one that fully met the criteria for his topic, used a random-assignment strategy to test whether students could learn as much with his student-driven, computer-based math program as they could from a traditional, teacher-directed curriculum known as Transition Mathematics. The problem, he argues, is that the Web site says his program had no effect without explaining that students made learning gains in both groups.
Ms. Herman said the clearinghouse could not provide that context because it had no research to show that Transition Mathematics works better than other curricula. "Without that, we couldn't report out that it was effective," she said. Ms. Herman said other analyses in the study, showing, for instance, that students' attitudes toward math improved more with Expert Mathematician, did not meet clearinghouse criteria.
In all, she said, the clearinghouse has received about 100 comments on the new site, which took two years to develop. Some suggested that the screening criteria had not gone far enough in weeding out the "best of the best" research in the field. Ms. Herman said some feedback came from the practitioners the clearinghouse was designed to serve.
"They said that we were providing the kind of critical look that helps them figure out whether a study is useful or not," she added.