Of all the grants the Bill & Melinda Gates Foundation has made in teacher quality, observers tend to agree that the single most influential has been the $45 million Measures of Effective Teaching study.
Nearly all the researchers interviewed about the study praised its technical merits. But that hasn’t silenced the criticism aimed at how the project was framed, how the findings were communicated, and whether the many states drawing on them to draft teacher-evaluation policies are doing so appropriately.
The study’s core findings are that “value added” models, observations of teachers keyed to frameworks, and student survey results all, to an extent, predict which teachers help their students learn more. Combined into a single measure, they offer trade-offs of validity, stability, and cost.
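As a rough illustration of what such a composite might look like, the sketch below standardizes three hypothetical components and averages them with fixed weights. The component values and the weights are assumptions chosen for demonstration, not the study’s actual scheme; shifting weight toward the value-added component is, roughly, the kind of trade-off between predictive power and stability the study describes.

```python
# Illustrative sketch (not the MET study's actual method): combining three
# standardized measures of teaching into a single composite score.
# The component values and weights below are assumptions for demonstration only.
import statistics

def zscores(values):
    """Standardize a list of values to mean 0, standard deviation 1."""
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def composite(value_added, observation, student_survey, weights=(0.5, 0.25, 0.25)):
    """Weighted average of standardized components, one entry per teacher."""
    components = [zscores(value_added), zscores(observation), zscores(student_survey)]
    return [
        sum(w * comp[i] for w, comp in zip(weights, components))
        for i in range(len(value_added))
    ]

# Example: three measures for four hypothetical teachers.
print(composite([0.10, -0.05, 0.02, 0.20],   # value-added estimates
                [3.1, 2.8, 3.4, 3.0],         # observation ratings
                [72, 65, 80, 70]))            # student-survey scores
```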
One thread of criticism: The research made students’ test scores paramount, said Bruce D. Baker, a professor in the graduate school of education at Rutgers University in New Brunswick, N.J.
The Gates Foundation and the research team set up “a research framework that really boxed them in,” he said. “Throughout the course of these studies, it was always assumed that the validity check for anything and everything else was next year’s value-added scores.”
Contribution to Learning
Value-added models, which try to determine teachers’ contributions to student learning, are supposed to make the use of test scores fairer to teachers by taking into account students’ performance history and backgrounds. But teachers remain deeply skeptical of those models.
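The core mechanic is easier to see in a stripped-down form. The sketch below, using invented numbers, predicts each student’s score from prior achievement and treats the average surprise among a teacher’s students as that teacher’s effect; the models states actually use add many more controls and statistical safeguards than this.

```python
# Simplified sketch of the value-added idea: predict each student's score from
# prior achievement, then attribute the average unexplained residual to the
# teacher. Real models are far more elaborate; the data below are invented.
import numpy as np

# (prior_score, current_score, teacher) for a handful of hypothetical students
data = [
    (60, 66, "A"), (72, 75, "A"), (55, 63, "A"),
    (68, 66, "B"), (80, 79, "B"), (62, 61, "B"),
]

prior = np.array([d[0] for d in data], dtype=float)
current = np.array([d[1] for d in data], dtype=float)
teachers = [d[2] for d in data]

# Step 1: regress current scores on prior scores (plus an intercept).
X = np.column_stack([np.ones_like(prior), prior])
coef, *_ = np.linalg.lstsq(X, current, rcond=None)
predicted = X @ coef

# Step 2: a teacher's "value added" is the mean residual of his or her students.
residuals = current - predicted
value_added = {
    t: residuals[[i for i, name in enumerate(teachers) if name == t]].mean()
    for t in sorted(set(teachers))
}
print(value_added)   # positive = students beat expectations; negative = fell short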
The summaries of the teaching-effectiveness research released to the public beginning in 2011, meanwhile, emphasized certain conclusions and downplayed other important findings, critics charge.
The Gates Foundation has provided grant support to Editorial Projects in Education, the nonprofit corporation that publishes Education Week. The newspaper retains sole editorial control over coverage.
For one, said Jay P. Greene, a professor of education reform at the University of Arkansas at Fayetteville, the reports stressed the importance of observation tools despite their cost and generally weak correlation with teachers’ future performance. Similarly, one less prominently featured finding was that a teacher-observation framework for English/language arts called PLATO in some cases did better than value-added measures at predicting how a teacher’s students would perform on higher-order, more cognitively challenging tasks in that field.
“It’s difficult to write data up when they’re controversial and you’re not sure what to emphasize,” said Susanna Loeb, a professor of education at Stanford University who was on the project’s technical-advisory committee but didn’t conduct any of the research. “I think there are a lot of interpretations about what the results mean. And the study doesn’t tell you the effect of using any of these measures in teacher evaluation in practice.”
Debates about the study tend to reflect disagreements about the translation of the research into policy. Jesse Rothstein, an associate professor of public policy at the University of California, Berkeley, for instance, described the relationship between various ways of estimating the value-added measures as “shockingly weak,” calling into question their usefulness as a factor in personnel decisions.
State Capacity
But the principal researcher on the study, Thomas Kane of Harvard University, disputes such characterizations. He argues that the correct basis for comparing the strength of the study’s identified measures is the information districts have traditionally used instead.
“It’s not like we can avoid making high-stakes decisions about teachers,” he said. “The right comparison is not to perfection; it’s to experience and master’s degrees and the information we currently have. Relative to that information, do these measures do better? The answer is unequivocally ‘yes.’”
Lawmakers and state education officials digesting the study’s results, meanwhile, are bumping up against the fact that not all states have the capacity to generate value-added data. Many states, such as New Jersey, are using an alternate method of gauging teachers’ impact on test scores, called student-growth percentiles, that the research did not examine, sparking concern from scholars like Mr. Baker.
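Student-growth percentiles compare each student’s progress with that of peers who started from a similar score, then summarize a teacher by the median percentile among his or her students. The sketch below, using an invented roster and a crude binning step in place of the quantile-regression machinery real SGP systems rely on, shows the basic shape of the calculation.

```python
# Rough sketch of the student-growth-percentile idea: rank each student's
# current score against peers who started at a similar level, then summarize
# a teacher by the median of those percentiles. Actual SGP systems use
# quantile regression rather than the simple binning shown here.
from statistics import median

def growth_percentiles(students, bin_width=10):
    """students: list of dicts with 'prior', 'current', 'teacher'.
    Returns a percentile (0-100) per student, computed within prior-score bins."""
    percentiles = {}
    bins = {}
    for i, s in enumerate(students):
        bins.setdefault(s["prior"] // bin_width, []).append(i)
    for members in bins.values():
        scores = sorted(students[i]["current"] for i in members)
        for i in members:
            rank = scores.index(students[i]["current"])  # ties share the lower rank
            percentiles[i] = 100 * rank / max(len(scores) - 1, 1)
    return percentiles

def median_sgp_by_teacher(students):
    """Median growth percentile per teacher."""
    pct = growth_percentiles(students)
    by_teacher = {}
    for i, s in enumerate(students):
        by_teacher.setdefault(s["teacher"], []).append(pct[i])
    return {t: median(v) for t, v in by_teacher.items()}

# Hypothetical example: two teachers, students grouped by similar prior scores.
roster = [
    {"prior": 61, "current": 70, "teacher": "A"},
    {"prior": 63, "current": 64, "teacher": "B"},
    {"prior": 78, "current": 85, "teacher": "A"},
    {"prior": 79, "current": 80, "teacher": "B"},
]
print(median_sgp_by_teacher(roster))
```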
Bill Gates himself has harbored concerns about how some states and districts are putting into practice the ideas his philanthropy has catalyzed. In op-eds in national newspapers, he has opposed the publication of teachers’ evaluation results and the haste to establish tests in every subject to produce teacher-evaluation data.
Melinda Gates added in a recent interview with Education Week that state officials sometimes rushed to institute systems ahead of the teacher-effectiveness findings.
“When we come out with new research and new data, we can’t necessarily control how it spreads, nor should we,” she said. “It would have been nice and neat and tidy if we could have said, ‘Wait until the very last day when [the research] comes out, and this is the way to go.’ But I think some states went a little fast.”