In my idealistic days 25 years ago, I believed that education research would lead us to the promised land of successful schools and high student achievement.
Many folks still believe that, including the president of the United States, who insists he is determined to make education an "evidence-based field," "a scientifically based practice." (This despite the fact that he has long denied global warming, opposes embryonic-stem-cell research, and wants the teaching of "intelligent design" included alongside evolution in schools.)
As much as I hate to say it (and I truly hope I am wrong), I no longer believe it, and here's why:
Research is not readily accessible, either physically or intellectually, to its potential users. Summaries of major studies appear in periodicals like Education Week, but the detailed results (typically written for other researchers in academic-speak) are available only in separate reports or in relatively low-circulation journals that don't reach those who most need to know.
Even if research findings were widely available and written in clear prose that even a dimwit like me could understand, the reports would not be widely read. Most teachers are not consumers of research, nor are most principals or superintendents.
And even if educators and policymakers did read all the studies in a timely fashion, schools and education practice would not change very much, mainly because making significant changes means altering value structures, disrupting routines, and teaching old dogs new tricks.
Moreover, researchers seem to delight in neutralizing each other. That's easier to do in the social sciences than in the physical sciences because there are so many uncontrollable variables. And the bigger the question addressed, the more vulnerable the findings.
When one study claims small classes boost student achievement, another insists they do not. One study finds social promotion harmful; another says retention hurts children more. Money matters; no, it does not. Vouchers work; no, they do not. And on and on.
This makes it easy for policymakers and practitioners to get off the hook, because they can always find research results to rebut those they don't agree with. And it makes it tougher on foundations trying to decide where their grants will make the most positive difference.
When some entrepreneurial soul proposes trying something different from what we have been doing in traditional schools for a century, naysayers immediately warn that there is not enough research to justify such an experiment and remind us that "it is immoral to use other people's children as guinea pigs."
By some perverted logic, we are told that we do not have enough research to justify trying something, but if we do not try it, how will we ever get any data to assess whether it works?
Research rarely leads to significant change because it is often expensive to apply or is a threat to the status quo. Good professional development may really improve teaching, but it can be terribly costly. Small classes may boost student achievement, but they increase costs.
If a major study found that public charter schools were outperforming traditional public schools by a country mile, the teachers' unions would still fight them to the death and use all of their influence in state legislatures to help snuff them out.
In rare cases where research findings are neither too costly nor too controversial, and are therefore embraced by policymakers, they are often applied so ineptly that they are ineffective or, worse, wind up doing more harm than good.
The textbook example in recent years is then-Gov. Gray Davis of California's proposal to extend to all students the limited class-size-reduction measure enacted by his predecessor, former Gov. Pete Wilson.
I have often tried to picture how the governor and his aides reached that decision. The only uncynical explanation I can come up with is that they must have been smoking something. Was there nobody in the room who raised crucial questions such as whether there were enough teachers or classrooms available, or whether this was the best use of limited resources?
The federal No Child Left Behind Act is a more recent and powerful example. The law is based to a fair degree on research and conventional wisdom, but its good intentions have been undermined by heavy-handed implementation.
I find much education research suspect because it depends so heavily on the flawed measure of standardized-test scores. In most of the important studies I have seen over the years, the findings rest solely on student test scores, and the limitations of that metric devalue the results.
I have listened to the liturgy of psychometricians enough to understand why researchers rely so heavily on test results. But scores on standardized tests are not a true or reliable measure of student learning. They do not measure many of the things we hope schooling will produce in children, like good habits of mind and behavior, and they do not measure Howard Gardner's other "intelligences," like artistic talent, kinesthetic ability, and social skills.
Finally, efforts to apply research findings are not likely to produce the desired outcomes because the educational system, like a combustion engine, will not work efficiently if any of its critical parts are broken. Most would agree, for example, that schools will not succeed without good teachers. But you need good salaries, good working conditions, and radically improved teacher-preparation programs to attract smart students and produce good teachers. You cannot get those conditions, however, without having adequate resources, altering practices in higher education, and making basic changes in the structure and operations of schools. In short, the broken components of the system have to be addressed simultaneously.
Deborah J. Stipek, the dean of Stanford University's graduate school of education, published an essay on education research in these pages several years ago that made some of the points I make here. ("'Scientifically Based Practice'," March 23, 2005.) But one statement in her essay boggled my mind. She wrote: "[B]asing decisions on research and data is a new concept. Both the desire to consult research and the skills to interpret it will need to be developed within the teaching community."
If the dean is correct, and she probably is, one wonders what educators and teacher-preparation programs have been doing for the past century.
It is easier to criticize than to offer remedies, but Dean Stipek's comment suggests at least one: Researchers could do more to create an audience for their work. The people who conduct education research and follow it are often the same people who prepare teachers in education schools and departments. What better context for preparing teachers than the most important and timely research on the field they are about to enter? What better opportunity to cultivate in aspiring teachers an interest in research?
Another improvement might be more emphasis on longitudinal studies. These are expensive and time-consuming, but they also can be powerful. Researchers are still feasting off data from the National Education Longitudinal Study (NELS) and the High/Scope Perry Preschool study. Wouldn't it be helpful to have data on what has happened to the graduates of alternative schools during the past 20 years, and to follow the graduates of charter schools for the next 20 years, instead of relying on standardized-test scores that are usually incompatible with these schools' educational philosophies and methods?
In the mid-1990s, I was a member of the National Research Council committee that produced SERP, the Strategic Education Research Partnership program. It was an attempt to deal with education's systemic challenge. Could we identify the highest-priority questions, those whose answers would lead to better schools and improved learning, and get the education and policy community to agree? Could a carefully constructed program of strategic research priorities lead to an integrated assault on education's systemic problems? Could government and foundations be persuaded to provide long-term funding for such an effort?
If those questions were ever to be answered affirmatively, maybe education research could improve education. Maybe, if there were more of a consensus in the research community, there would be more positive outcomes, both in legislatures and in schools.