For the second year in a row, a controversial $14.4 million federal study testing the effectiveness of reading and math software programs has found few significant learning differences between students who used the technology and those taught using other methods.
Of the 10 commercial software programs tested at various grade levels, only one (LeapTrack, a supplemental-reading program for 4th graders published by LeapFrog Schoolhouse, of Emeryville, Calif.) produced significant improvements in students' test scores across both years of the study.
Although not large, the test-score boost that the program provides is considered enough to move a typical student from the 50th percentile to the 54th percentile on a national standardized reading test, according to the report.
The two Algebra 1 products tested, Carnegie Learning's Cognitive Tutor Algebra 1 and Houghton Mifflin Harcourt's Larson Learning Algebra 1, led to similar-size test-score gains, but only among students taught by a subset of teachers who had used the same products for two years in a row.
Publishers, researchers, and federal officials called the findings disappointing, but also raised cautions about relying too heavily on the results to compare effectiveness among products and choose which ones to buy.
"If you already have the hardware in the classroom and you want one of these products, this would not dissuade you," said Mark Dynarski, the lead researcher on the project for Mathematica Policy Research Inc., the Princeton, N.J.-based company that conducted the study.
"If you're quite skeptical of the software and very budget-pinched, I think you would feel this is evidence in favor of your position," he added. "And if you're really right in the middle, I think it comes down to how much you want to move test scores, because you're really not going to see that happen with these products."
Study Draws Criticism
Despite a quiet release in January, the study met with criticism from independent researchers and software publishers.
"There's nothing really here that superintendents or state policymakers or corporations could use that would be a strong basis for decisionmaking," said Christopher J. Dede, a professor of learning technologies, innovation, and education at the Harvard Graduate School of Education and a critic of the study. "I feel the methods used were more flawed in the second year than the first."
Ten computer-based reading and math products were evaluated in the 2005-06 school year as part of a major federal research project.
Grade 1 Early Reading
• Destination Reading, Riverdeep Inc.
• Headsprout, Headsprout Inc.
• The Waterford Early Reading Program, Waterford Institute Inc.
• Plato Focus, Plato Learning
Grade 4 Reading Comprehension
• Academy of Reading, AutoSkill International Inc.
• LeapTrack, LeapFrog Schoolhouse
Grade 6 Prealgebra
• Plato Achieve Now, Plato Learning Inc.
• Larson Prealgebra, Houghton Mifflin Harcourt
Grade 9 Algebra
• Cognitive Tutor, Carnegie Learning Inc.
• Larson Algebra, Houghton Mifflin Harcourt
Source: Mathematica Policy Research Inc.
Note: Some of the developers and companies have since sold their product lines or been involved in corporate acquisitions.
The findings don't mean that products that seem to be ineffective in one school or district won't work better in another, the report concludes, nor should educators and policymakers use the results to make head-to-head comparisons between products. In some cases, Mr. Dynarski said, too few schools were using the individual products studied to make those kinds of comparisons.
Involving roughly 13,000 students, the study was ordered by Congress in the No Child Left Behind Act. The report on the first round of findings, which looked at 16 products, came out in 2007. ("Major Study on Software Stirs Debate," April 11, 2007.)
The new report, the last one for the project, evaluates 10 commercial software programs that are widely used in the 1st, 4th, and 6th grades, as well as in Algebra 1 classes, which can be taught at several grade levels.
Unlike its predecessor, the final report gives product-by-product results for all 10 programs studied. Over the 2005-06 school year, researchers tested the programs in 23 districts and 77 schools around the country; most of the districts served high numbers of low-income students. In each school, and for each product used there, researchers included at least one control classroom and one experimental classroom.
"The control classrooms are generally using only products for Internet browsing or practicing on state assessments," Mr. Dynarski said. "They weren't using the other software products."
A subset of teachers, 115 in all, stuck with the same products for a second year, allowing researchers to see whether the programs became more effective as teachers grew more familiar with them. The additional experience seemed to matter only for the Algebra 1 software, though; for the other programs, students fared about the same in both study years.
The study also found that the average amount of time that students spent using the programs fluctuated from year to year. Yet the researchers could find no correlations between programs' effectiveness and the amount of time that students spent using them.
Questions on Method
Some experts said the study may raise more questions about the usefulness of experimental research designs in education than about the findings themselves. The software study was among the first to reflect the then newly formed Institute of Education Sciences' early emphasis on large-scale randomized studies.
"These studies are intended to wash out all the variation in school environments, teacher quality, resources, all the things that we, in fact, know make a difference when it comes to student learning," said Margaret A. Honey, a technology expert who is the president of the New York Hall of Science.
Mr. Dynarski said such concerns stem from the belief that the study had failed to pick up actual learning gains. "I'm not sure that the right answer isn't zero," he said.