Outside of a handful of Asian nations, the typical 8th grader in most other countries would not reach the “proficient” level on U.S. tests of mathematics and science, according to a reanalysis of international achievement data being published today. Then again, the study also shows, neither do most American students.
Scheduled to be posted today on the Web site of the American Institutes for Research, a Washington-based research organization, the new analysis comes from AIR’s chief scientist, Gary W. Phillips. Mr. Phillips’ idea was to statistically “link” scores from two well-known testing programs: the Trends in International Mathematics and Science Study, or TIMSS, which is given every few years to students in more than three dozen countries, and the National Assessment of Educational Progress, a congressionally mandated program known as the “nation’s report card.”
[Chart] A study designed to predict how other countries would fare on U.S. mathematics and science tests found that the typical 8th grader in most places, including the United States, would score at the “basic” level or below on the National Assessment of Educational Progress math test. (SOURCE: American Institutes for Research)
“I wanted to re-express the TIMSS results in terms of a set of standards that U.S. policymakers would be familiar with,” said Mr. Phillips, who was acting commissioner of the National Center for Education Statistics, the data-gathering arm of the U.S. Education Department, from 1999 to 2002. “It’s a lot like looking at world poverty levels. It’s impossible to get your head around it until you get a common metric.”
Begun in 1969, the NAEP testing program uses three levels to gauge student achievement: basic, which denotes partial mastery of the academic material; proficient, for solid academic performance; and advanced, to describe superior achievement. Students who reach none of those levels are described as performing below basic.
By Mr. Phillips’ estimate, of the 8th graders from 46 nations who took TIMSS mathematics tests in 2003, only students from Hong Kong, Japan, Singapore, South Korea, and Taiwan would reach the proficient level, on average, if they were to take the 8th grade NAEP test in that subject. On the NAEP science tests, the analysis shows, only two of the TIMSS countries—Singapore and Taiwan—would have students who score, on average, at that level.
In most countries, including the United States, the typical 8th grader would score at the basic level on the NAEP tests. On the math test, 22 of the 46 countries were projected to reach that level; in science, the number was 20 of 38.
Another large group of nations, most of them in Africa and the Middle East, would score at the “below basic” level on NAEP’s 8th grade math and science exams, according to the analysis.
Interpretations Mixed
Although Mr. Phillips saw his study as primarily a methodological test, some experts said the findings could shed some light on current debates over whether U.S. policymakers have set the right achievement levels for the NAEP program.
“My conclusion is that we need to take a good look at where all the cut points are for defining basic skills or proficiency,” said Tom Loveless, the director of the Brown Center on Education Policy at the Brookings Institution, a Washington-based research organization.
However, Chester E. Finn, the president of another Washington think tank, the Thomas B. Fordham Foundation, drew the opposite conclusion from Mr. Phillips’ results. “If you have normal students from normal public schools in normal countries and more than half of them are reaching proficient levels, then we have set exactly the right target that everybody should aim for,” said Mr. Finn, who once sat on the governing board that sets policy for the federal testing program.
Mr. Loveless and other experts, though, also warned against reading too much into the findings, since the two tests cover different content. But Mr. Phillips said the coverage areas, though not identical, are similar enough to make his comparisons worthwhile.
To arrive at his results, Mr. Phillips drew on earlier studies that analyzed results for two national samples of U.S. students, one of which took TIMSS tests in 1999 and the other NAEP tests in 2000, to estimate statistical relationships for predicting how students who score at a given level on one test would perform on the other. The purpose of those earlier studies was to determine how students in particular U.S. states would have performed on the international tests. Mr. Phillips extended essentially the same process to estimate how students who took the international tests in 2003 would fare on the U.S. exams.
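The general idea behind such a linking can be sketched in a few lines of code. The sketch below is not Mr. Phillips’ actual procedure, which relied on separate national samples and a more elaborate statistical model; it only illustrates, with made-up numbers, how a linking sample with results on both scales might be used to project a country’s average TIMSS score onto the NAEP scale and compare it with achievement-level cut scores. The cut scores and all data here are placeholder values for illustration, not the official NAEP figures.

```python
# Illustrative sketch of a simple linear (mean/sigma) linking between two test scales.
# All numbers are hypothetical; this is not the study's actual methodology.

from statistics import mean, stdev

def linear_link(link_timss, link_naep):
    """Return a function mapping TIMSS-scale scores onto the NAEP scale,
    estimated from a linking sample with scores on both scales."""
    a = stdev(link_naep) / stdev(link_timss)    # slope: ratio of score spreads
    b = mean(link_naep) - a * mean(link_timss)  # intercept: aligns the two means
    return lambda timss_score: a * timss_score + b

def achievement_level(naep_score, basic, proficient, advanced):
    """Classify a NAEP-scale score against the three achievement-level cut scores."""
    if naep_score >= advanced:
        return "advanced"
    if naep_score >= proficient:
        return "proficient"
    if naep_score >= basic:
        return "basic"
    return "below basic"

# Hypothetical linking sample: U.S. students with scores on both scales.
us_timss = [480, 505, 512, 530, 498, 545, 470, 520]
us_naep = [268, 279, 283, 292, 275, 301, 262, 288]
to_naep_scale = linear_link(us_timss, us_naep)

# Project another country's average TIMSS score onto the NAEP scale and classify it
# against placeholder cut scores.
country_avg_timss = 540
projected = to_naep_scale(country_avg_timss)
print(round(projected), achievement_level(projected, basic=260, proficient=300, advanced=335))
```

The key point the sketch makes is that the projection is only as good as the linking sample: the slope and intercept come entirely from students who took both tests, which is the limitation Mr. Phillips acknowledges below.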
He acknowledged, though, that the method has drawbacks, one being that the statistical relationships underlying the calculations are estimated entirely from U.S. students, because only they took both the NAEP and TIMSS tests. Students in the other countries included in the study took only the international tests.