Suddenly, value-added-based accountability policies are everywhere. The wave of these new policies has been astonishingly swift and broad. Except for educators in Tennessee and Dallas, few had ever heard of "value added" a decade ago. Today, it is a key component of the federal Race to the Top competition and is being used in a fast-growing list of states and school districts.
Given the size and speed of this wave, you might expect a great deal of agreement about value-added measures, but nothing could be further from the truth. Several recent reports highlight the surprising fault lines in the debate. In particular, it is hard to miss the grand divide between economists and other educational scholars.
Agreement on Some Key Facts
First, a very brief introduction. Drawing on student-level achievement data across years, linked to individual teachers, statistical techniques can be used to estimate how much each teacher contributed to student scores: the value-added measure of teacher performance. These measures in turn can be given to teachers and school leaders to inform professional development and curriculum decisions, or to make arguably higher-stakes decisions about performance pay, tenure, and dismissal. In Los Angeles, teachers are identified by name alongside their value-added measures on the Los Angeles Times website; the practice may be coming soon to New York City, where a judge has ruled that the city school system can release performance ratings of teachers there to media organizations; the United Federation of Teachers says it plans to appeal the decision.
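For concreteness, here is a minimal sketch of the underlying idea, written in Python with simulated, hypothetical data rather than any district's actual model: predict each student's score from the prior year's score, then average the unexplained gains by teacher. Real systems add more controls, multiple years of data, and statistical adjustments.

```python
# A minimal value-added sketch (illustrative only, with simulated data).
# Idea: regress current scores on prior scores, then average each
# teacher's residuals -- the gains the prior-score model cannot explain.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 40 teachers, 25 students each.
n_teachers, class_size = 40, 25
true_effect = rng.normal(0.0, 0.15, n_teachers)          # simulated "true" teacher contributions
teacher = np.repeat(np.arange(n_teachers), class_size)   # teacher id for each student
prior = rng.normal(0.0, 1.0, n_teachers * class_size)    # prior-year score
score = 0.7 * prior + true_effect[teacher] + rng.normal(0.0, 0.5, prior.size)

# Step 1: fit score = a + b * prior by ordinary least squares.
X = np.column_stack([np.ones_like(prior), prior])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
residual = score - X @ coef

# Step 2: a teacher's estimated value-added is the class's average residual.
value_added = np.array([residual[teacher == j].mean() for j in range(n_teachers)])

print("correlation with true effects:", round(np.corrcoef(value_added, true_effect)[0, 1], 2))
```

Even in this toy version, the estimates correlate only imperfectly with the simulated "true" effects, which previews the precision problem discussed below.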
Unfortunately, value-added measures are poorly understood outside the circle of researchers and statisticians who create and study them. For this reason, I wrote a book to explain what they are and how we might use them.
What I learned in the process was that there is considerable agreement among researchers on many of the central facts. Compared with point-in-time achievement snapshots, value-added measures are less likely to give low ratings to teachers just because they teach disadvantaged students. But, at present, value-added measures can only be calculated for a small fraction of teachers and, even when they can be calculated, they are imprecise and therefore somewhat inaccurate.
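The imprecision is easy to illustrate with another small, purely hypothetical simulation: give the same teachers two separate classes of about 25 students each and compare the two resulting estimates. The specific numbers below are assumptions chosen only to show how much sampling noise a single class introduces.

```python
# Illustration of imprecision (simulated, hypothetical numbers only):
# estimate the same teachers' value-added from two separate classes of
# 25 students and see how much the two estimates disagree.
import numpy as np

rng = np.random.default_rng(1)
n_teachers, class_size = 200, 25
true_effect = rng.normal(0.0, 0.15, n_teachers)   # assumed spread of true teacher effects

def estimate_from_one_class():
    # One class per teacher: class-average gain = true effect + sampling noise.
    student_noise = rng.normal(0.0, 0.5, (n_teachers, class_size))
    return true_effect + student_noise.mean(axis=1)

year1, year2 = estimate_from_one_class(), estimate_from_one_class()
print("year-to-year correlation:", round(np.corrcoef(year1, year2)[0, 1], 2))

# How often does a teacher's quartile ranking move by two or more quartiles?
cuts1 = np.quantile(year1, [0.25, 0.5, 0.75])
cuts2 = np.quantile(year2, [0.25, 0.5, 0.75])
q1, q2 = np.searchsorted(cuts1, year1), np.searchsorted(cuts2, year2)
print("share jumping 2+ quartiles:", round(float(np.mean(np.abs(q1 - q2) >= 2)), 2))
```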
But here the agreement ends.
Three Recent Reports
Eric A. Hanushek, a well-known economist at Stanford University's Hoover Institution, suggests in a recent report that using value-added measures to fire low-performing teachers could generate $100 trillion in national income. This is not a typo: Yes, it's "t," as in trillion, 10 times the total national income in any given year. Hanushek and others have also argued that this approach could eliminate the racial achievement gap. In my view, calculations such as these vastly overstate the benefits of these measures. Value-added measures show promise, but let's not get carried away.
Another view comes in a report from the Economic Policy Institute, titled "Problems With the Use of Student Test Scores to Evaluate Teachers." This was followed by a third report from the Brookings Institution, "Evaluating Teachers: The Important Role of Value-Added."
From the titles, one would think the Brookings and EPI authors are miles apart, but consider the following quotations: "There are good reasons for concern about the current system of teacher evaluation," and "There is an obvious need for teacher-evaluation systems that include a spread of verifiable and comparable teacher evaluations." Can you tell which report each came from? Probably not. The more favorable Brookings report (second quote) points out the flaws of value-added, and the opposing EPI report (first quote) says positive things about value-added and criticizes the current system.
Yet there is a clear disagreement between these groups, mirrored in the larger public debate. If there is agreement on most of the key facts, then why is there so much disagreement about what to do with these measures?
Thinking Like Economists
The answer lies substantially in the backgrounds of the authors: The higher the proportion of authors who are economists, the more aggressive the reports are about the use of value-added. This is obvious in Hanushek's case, but the more favorable Brookings report is also written almost entirely by economists (with two exceptions), while only one of the 10 authors on the EPI report is an economist. The main academic architect behind the Bill & Melinda Gates Foundation's value-added efforts is yet another economist, Thomas J. Kane.
Why is this? Why do economists see the issue so differently?
As an economist myself, let me try to explain. Economists tend to think like well-meaning business people. They focus more on bottom-line results than on processes and pedagogy, care more about preparing students for the workplace than for the ballot box or the art museum, and worry more about U.S. economic competitiveness. Economists also focus on the role financial incentives play in organizations, more so than on the myriad other factors affecting human behavior. From this perspective, if we can get rid of ineffective teachers and provide financial incentives for the remainder to improve, then students will have higher test scores, yielding more productive workers and a more competitive U.S. economy.
This logic makes educators and education scholars cringe: Do economists not see that drill-and-kill has replaced rich, inquiry-based learning? Do they really think test preparation is the path to the nation's economic prosperity? Economists do partly recognize these concerns, as the quotations from the recent reports suggest. But they also see the motivation and goals of human behavior somewhat differently from the way most educators do.
Now, this all might sound like yet another educators-versus-business debate, but the situation is not so simple. While usually focused on efficiency, economists make a strong case that value-added could improve equity.
Resources, particularly teachers themselves, are highly unequally distributed by student background. Today's test-based accountability, created with the intention of "leaving no child behind," also partly reinforces this inequity by punishing schools whose students start with lower scores and by creating never-ending frustration that drives teachers away. Value-added measures can help solve these problems and improve equity.
So, it is not so easy to brush aside economists' views on this as a narrow-minded mania for efficiency. While not all economists agree on value-added, it is a little hard for most of us to get past the fact that the current educational system pays almost no attention to what teachers do in the classroom.
Finding Middle Ground
The divide between economists and others might be more productive if the reports offered specific recommendations about how value-added should, and should not, be used. For example, creating better student assessments and combining value-added with classroom assessments are musts. We also have to avoid letting the tail wag the dog: Some states and districts are trying to expand testing to nontested grades and subjects, and to change test instruments so the scores more clearly reflect student growth for value-added calculations. This thinking is exactly backwards. Policymakers should be thinking about the larger instructional benefits and consequences before they test kindergartners or get rid of open-ended questions for which growth is harder to measure. Value-added should be serving the education system rather than the education system serving value-added.
Another key step is experimenting with and carefully evaluating different options for using value-added. There is almost no evidence to suggest that any use of value-added does or does not improve teaching and learning. And that is the goal, right? Improving teaching and learning? Despite the divide in perspectives, I think we can agree on that. But we have to act on it, too, because the statistical properties of value-added measures are at best loosely related to their influence on teaching and learning.
Economists have won round one of this debate. But winning the larger fight, and telling teachers in Los Angeles and New York City with a straight face that value-added is a good idea, will require showing that these measures can really be useful.