The results of the Program for International Student Assessment, or PISA, the test used to rank students’ academic performance internationally, will be trumpeted later this year, and the responses will undoubtedly echo those of previous years: U.S. officials will wring their hands and lament that American student achievement is stagnant, that it lags woefully behind our economic competitors’, and that we therefore need to import features of schooling from higher-scoring countries.
This edu-masochism—a distinctly American way of focusing on our educational shortcomings—can be traced at least back to the late 1950s and the Soviet Union’s launch of Sputnik. Perhaps the nation’s early trailblazing successes in establishing mass schooling and developing a uniquely excellent system of higher education have left U.S. educators particularly vulnerable to charges that other nations are surpassing us.
These fears unfortunately have led education policymakers astray. They focus on what other nations are achieving, and they fall prey to those urging us to copy Finland’s, Singapore’s, or South Korea’s schooling practices.
But why compare national student performance in the United States with average scores in other countries, when U.S. students attend schools in 51 separate education systems responsible to the states and the District of Columbia, not the federal government? The U.S. education system is a construct that does not exist operationally. What’s more, a well-respected test, the National Assessment of Educational Progress (NAEP), provides a state-by-state picture of our schools that is much more relevant than either PISA or other major international tests, such as the Trends in International Mathematics and Science Study (TIMSS).
President Barack Obama’s signing of the Every Student Succeeds Act in December underscores this reality: Although the federal government retains the ability to measure and compare how different states are doing with NAEP, the new law makes clear that the authority to decide school policy resides with the states.
Our recent study of two decades of PISA, TIMSS, and NAEP scores bears this out.
To begin with, schools in the United States are not doing as poorly as international-test scores suggest. It would make sense to look to high performers in Europe and Asia for new education strategies if we lacked our own success stories, but that is not the case. After we adjusted for socioeconomic differences in the samples of students taking the PISA and TIMSS tests, we found that performance on international math and reading tests in states such as Massachusetts and North Carolina is as high as, or higher than, performance in the highest-scoring countries in Europe. We also showed that gains in TIMSS mathematics scores over the past 12 to 16 years in several states are much higher than gains in other countries. Furthermore, students in these same states and others have made very large, steady gains on NAEP, especially in mathematics, over the last two decades, despite the dip in scores in 2015. That’s in marked contrast with the average U.S. results on PISA, which did not rise between 2000 and 2012.
Policymakers also must consider whether international tests are influenced by factors that are as much social and cultural as they are educational. There is no causal evidence that students in some Asian countries, for example, score higher on international tests mainly because of better schooling. Their achievement is more likely the result of families’ large investments in outside-of-school tutoring and cram courses that hone test-taking skills. These differences in social and educational context between other countries and U.S. states make it difficult to infer relevant educational policy from correlations with student test scores.
By contrast, the conditions and context of education in the high-scoring states are more similar to those in other, lower-performing states than they are to other countries’. Teacher labor markets are not drastically different from state to state. Furthermore, the state systems are regulated under the same federal rules. If students with similar family academic resources attending schools with similar socioeconomic and ethnic composition in some states make much larger gains than in other states, those larger gains are more likely to be related to specific state policies that could be applied elsewhere in the United States.
In our study, we showed that the basis for such fruitful comparisons exists. We found that since 1992, the average annual increase in demographically adjusted NAEP 8th grade math scores among students in the top-gaining 10 states was 1.6 points per year, double the roughly 0.8 points per year among students in the bottom-gaining 10 states. Accumulated over the 21-year period between 1992 and 2013, that annual difference of 0.8 points means 8th graders in the higher-gaining states increased their math performance about 16 points more than students in the lower-gaining states. This represents about one-half of a standard deviation of the individual student variation in NAEP 8th grade math scores, a huge difference in performance gain by typical educational-improvement standards.
States that made large reading gains were not necessarily the same states that made large math gains. For example, Maryland and Florida made relatively larger gains in reading than in mathematics. And Texas and Massachusetts made large gains in math, but not reading.
Another direction for further policy research is to pair off states with different patterns of gains in 8th grade math. We showed in our study, for instance, that 8th grade students in Massachusetts made much larger gains in math after 2003 than their counterparts in neighboring Connecticut; that students in Texas greatly outpaced demographically similar California from the early 1990s until 2013; and that students in New Jersey made larger gains than students in New York after 2003. These and other state comparisons could provide important insights into the kinds of policies that enabled students in some states to make much larger demographically adjusted gains in math scores than students next door.
A case in point is Massachusetts and Connecticut, whose students had the same NAEP math score in 2003. By 2013, Massachusetts students had pulled 17 points ahead of similar students attending similar schools in Connecticut. We need to learn why students in Massachusetts took off in math after 2003 while students in Connecticut did not.
To be sure, international comparisons can be instructive. It is useful to know that teachers in high-scoring Finland are prepared much more thoroughly than teachers in most U.S. states, and that high teacher salaries in Singapore and Taiwan have eliminated shortages of math teachers.
But too much time in the United States is spent fear-mongering and declaring that our economy is about to tank because of how U.S. schools purportedly stack up against schools in other nations. It’s almost as if we want to punish ourselves.
Despite these cries, our economy has grown, and the nation remains the leader in innovation. While improvement is needed, worrying about how schools in other nations are better than ours doesn’t get us much closer to answers.
We can put an end to our edu-masochism: If researchers spend more effort on assessing our own states’ successes and failures in improving student performance and less on trying to draw lessons from countries with very different social and educational contexts, they are sure to spark a much more productive national educational policy debate than we have had in the past decade.