Two blockbuster research findings reported recently in the national press, one from the field of education, the other from medicine, have something important in common: They are the latest cases in which widely used, widely accepted practices have been challenged by scientifically rigorous evaluations. The education study found no difference in academic achievement between students who used leading educational software for reading and math in their classrooms and those taught by other methods. ("Major Study on Software Stirs Debate," April 11, 2007.) The medical study, reported in late March, found that stents, which are widely used to open clogged arteries, unexpectedly do no better than drugs for most heart patients.
These two studies, although similar in methodology, were conducted in very different policy environments. In medicine, such rigorous evaluations are common and often drive policy and practice. By contrast, in education and most other areas of social policy, such studies are relatively rare. In these areas, policy and practice tend to be driven more by advocacy studies and anecdote than by rigorous evidence; the result is billions of dollars spent on programs that often fail to produce meaningful improvements in educational achievement, employment and earnings, or rates of substance abuse and criminal behavior.
There is strong reason to believe education would benefit greatly from a more evidence-based approach, such as that used in medicine.
Both of the studies cited above were well-designed randomized controlled trials, widely considered the gold standard for determining "what works" in medicine. Such trials, in which individuals are randomly assigned to a treatment group or a control group, have stunned the medical community before by overturning both conventional wisdom and the results of less rigorous studies. Examples include hormone-replacement therapy for postmenopausal women (shown to increase the risk of coronary heart disease, stroke, and breast cancer), dietary fiber to prevent colon cancer (shown to have no effect), and an oxygen-rich environment for premature infants (shown to increase the risk of blindness).
Well-designed trials have not only identified medical practices that are ineffective or harmful; they have also provided the conclusive evidence of effectiveness behind most of the major medical advances of the past 50 years. These include, for example, vaccines for polio, measles, and hepatitis B; effective treatments for hypertension and high cholesterol; and cancer treatments that have dramatically improved survival rates for leukemia, Hodgkin's disease, breast cancer, and many other cancers.
In the few cases where rigorous methods have been used in education, they have demonstrated the ability to produce valid, actionable evidence about what works, and the potential to spark rapid progress like that which has transformed medicine. For example, well-designed randomized controlled trials in education have identified a few widely used programs that are ineffective or harmful. The trial of educational software products is one example, but there are others. Drug Abuse Resistance Education, or DARE, a substance-abuse-prevention program used in more than two-thirds of U.S. school districts, has been shown in such trials to have no effect on substance use, and is therefore now being redesigned. The Job Training Partnership Act program for young people, a large federal workforce-training program in the 1980s, was shown in a well-designed trial to have an adverse effect on the earnings of male youths. And a similar trial of federally funded dropout-prevention programs in the 1990s found no effect on school dropout rates.
Well-designed trials have also identified a few highly effective educational programs. One example is Success for All, a comprehensive schoolwide reform program, primarily for high-poverty elementary schools, with a strong emphasis on the prevention of reading problems before they become serious. A recent trial found that the average school implementing the program in grades K-2 scored higher on schoolwide reading achievement at the end of 2nd grade than approximately 60 percent of the schools in the control group.
Another example is the Good Behavior Game, a 1st grade classroom-management strategy that rewards students for positive group behavior. It has been shown in two well-designed trials in Baltimore public schools to produce 25 percent to 60 percent reductions in substance abuse, school suspensions, and serious conduct problems in youths through middle school and into young adulthood. And Check and Connect, a dropout-prevention program for at-risk high school students that assigns them a "monitor" (such as a graduate student) who serves as a year-round mentor and service coordinator, has been shown to be highly effective in two such trials, producing a 40 percent increase in students' staying enrolled in or graduating from high school four years later.
The very existence of a few research-proven educational programs such as these suggests that a concerted government effort to apply the rigorous methods used in medicine to education policy could fundamentally increase the effectiveness of such policy in improving educational and life outcomes for American students. The U.S. Department of Education's Institute of Education Sciences, or IES, has made an excellent start by greatly increasing the number of well-designed randomized controlled trials and other rigorous evaluations it funds. But much more is needed, because the number of educational practices proven effective in such studies is very small, and in some areas of education nonexistent. This leaves schools and districts with few research-proven tools they can use to increase the reading and math proficiency of students, as called for in the federal No Child Left Behind Act, or to improve other key educational outcomes.
An important step to address this problem has been proposed by the Aspen Institute's bipartisan Commission on No Child Left Behind. That national panel has recommended doubling the IES research budget for K-12 education, currently about $250 million a year.
A second, complementary idea has been proposed by the National Board for Education Sciences, which oversees the research agenda of the IES. The board has recommended that Congress require federal education program grantees, as a condition of their grant awards, to participate in evaluations if asked to do so, including by random assignment where appropriate. This would help foster partnerships between program grantees and evaluators in carrying out rigorous evaluations to identify effective models and practices.
A third important step would be to create strong incentives for those who receive federal education grants to adopt research-proven models and practices in areas where such models or practices exist, and to provide grantees with assistance in putting them into widespread use. A recent Education Week Commentary proposed a promising approach to advance this goal: a competitive priority for grant applicants that commit to the use of programs backed by strong evidence of effectiveness. ("Research and Effectiveness," Oct. 18, 2006.)
A concerted effort to develop and use rigorous evidence to improve education policy, through steps such as these, would require a very modest investment of government funds in the context of the $400 billion spent each year on public K-12 education. Doubling the Institute of Education Sciences' research budget, for instance, would add about $250 million a year, only about 0.06 percent of total K-12 spending, and the other two ideas above are budget-neutral. Yet these steps offer the opportunity, based on a compelling precedent from the field of medicine, to supply a critical missing piece needed to improve U.S. education: scientifically valid, actionable knowledge about what works in raising student achievement, preventing educational failure, and producing creative, motivated students who will be contributing members of society.