School & District Management Opinion

‘Scientifically Based Practice’

By Deborah Stipek — March 22, 2005

The pressure is on. As a nation, we are asking teachers and administrators to bring all students to high standards of achievement, and we are holding them accountable. By raising the stakes for demonstrating better student outcomes, we have created a desperate need for information on how to achieve these challenging new goals. Everyone seems to agree that it is time for education researchers to deliver the kind of systematic knowledge that policymakers and practitioners need to do the job the nation is asking of them.

Nowhere has faith in the value of research for informing policy and practice been more forcefully expressed than in the nation’s capital. The U.S. Department of Education’s recent strategic plan claims that “we will change education to make it an evidence-based field.” Indeed, “scientifically based practice” has become the constant refrain of the Bush administration.

Improving the quality of education research does not solve the problem of how the findings will be implemented.

But the administration is also recommending significant changes in the way education researchers do business. According to the Institute of Education Sciences’ director, Grover J. “Russ” Whitehurst, the focus of research should be on identifying effective teaching practices. Borrowing from the field of medicine, the federal government has also put its faith, and its money, in a particular methodology—randomized field trials. This methodology is considered more rigorous than any other used in education research, and it supports causal conclusions that no other method can.

Also concerned with the quality and reputation of education research, the National Research Council Committee on Scientific Principles in Education Research offers a somewhat different set of recommendations. The committee suggests that the fit between the method and the questions being asked is more important than the particular method. Its recommendations focus primarily on the culture of education research—the need to foster a greater commitment to objectivity, high standards of scientific inquiry, replication, and the free flow of constructive critique.

Yet a third set of recommendations is well articulated in two documents—one issued by the National Academy of Education in 1999 (Recommendations Regarding Research Priorities: An Advisory Report to the National Education Research Policy and Priorities Board), and another by the National Research Council (Strategic Education Research Partnership, SERP). These reports promote, as the administration does, research that focuses on the problems of practice. Their recommendations differ from the administration’s strategy in several important ways, however. First, they encourage research in what Donald Stokes, in his 1997 book, calls Pasteur’s Quadrant—research on practical problems that develops, at the same time, general principles that can guide future research and practice. The reports suggest particular qualities of research that they claim will be more useful for improving education practice.

They recommend, for example, research that is embedded in practice and that involves collaborations between researchers and practitioners. Unlike the traditional linear model of “research-into-practice,” their view of productive research and development involves moving back and forth between research and practice. Innovations are developed by researchers collaborating with practitioners. They are tried out in classrooms, refined or developed by practitioners in their schools and classrooms, and then systematically studied by researchers. The link between research and practice is assumed to be complex, reciprocal, and dynamic.

Thus we have three well-developed proposals for how educational researchers can get their act together and then deliver. All three have merit, and they are not mutually exclusive, except inasmuch as time, resources, and talent are limited.

The culture of research organizations, especially universities, has not been particularly supportive of collaborative research that focuses on practical issues. But let us suppose, optimistically, that we are able to effect the needed changes in research contexts and make progress on all of the recommendations: We increase the number of randomized field trials that produce evidence for the value of particular instructional approaches; we increase the commitment and culture of rigorous scientific methods among education researchers; and we develop sustained collaborations between researchers and practitioners in which effective teaching strategies are developed, tested, refined, and disseminated.

We are still only halfway to scientifically based practice. There is more to do.

First, research findings must be made more accessible. Most research evidence is published in outlets that only other researchers read, and in forms that only they can comprehend. The Bush administration’s effort to give policymakers and practitioners easy access to research findings through its What Works Clearinghouse is a laudable beginning.

We also need to create an appetite for research findings. Practitioners’ decisions are based primarily on their own intuitions and experience and occasionally on advice from colleagues, principals, or workshop leaders. The idea of basing decisions on research findings or even data collected at the local level is not part of the culture of teaching. New technology and the push for data-based decisionmaking and evidence-based practice are beginning to change the situation, but basing decisions on research and data is a new concept. Both the desire to consult research and the skills to interpret it will need to be developed within the teaching community.

We might expect the demand for and use of education research to rise if the quality and clarity of findings improve significantly. This occurred to some degree in medicine. But even in medicine, the path from findings to local use is indirect, often slow, and sometimes nonexistent. Education presents more serious obstacles to the implementation of research findings because the implications for practice are rarely straightforward.

We will also need to change the organization of teachers’ work to make it possible for them to learn new, effective practices. Evidence-based teaching involves more than prescribing the right pill. Research findings can never be specific enough to guide all of the myriad decisions that teachers need to make, moment by moment, in their own classrooms with their own students.

As a consequence, teachers need to have a deep understanding of the innovative methods and programs they are asked to implement. This requires far more time out of the classroom than they have available during the workday, and more training and support than most schools are organized to provide. Without these, however, the instruction that is actually implemented may bear little resemblance to the instruction that research demonstrated as effective.

Productive use of research findings at the policy level also requires many judgment calls. A policy found to be effective in one context is not necessarily effective in another, and there are often many details related to the original conditions of the research that need to be attended to when applying findings in new contexts.

Consider the example of class-size reduction in California. A large, random-assignment study in Tennessee demonstrating the benefits of reducing class sizes to about 15 students was used to support a policy of reducing class size to 20 in California. But unlike in Tennessee, where trained teachers were in good supply, in California there was a serious teacher shortage. Because crucial variables related to the context of the study were ignored, the implementation of this very costly policy in California may have done more harm than good, at least for children in the low-income communities that could not compete for the limited supply of trained and experienced teachers.

Another example is a random-assignment study of the High/Scope preschool intervention in Ypsilanti, Mich., cited repeatedly as support for preschool education. True, the study has demonstrated impressive and long-term effects of a preschool experience, but the devil is in the details. Many of the preschool programs that were spawned by this compelling research evidence look nothing like the Ypsilanti program. It is very likely that many of the preschool programs based on this research confer nothing close to the advantages seen in the original High/Scope program.

These examples illustrate the complexity of making evidence-based policy decisions. Researchers will need to make sure that they communicate clearly what contextual variables and details of the intervention or program are necessary to achieve positive results. And policymakers will need either training or assistance to make judgments about the implications of research findings for their local context.

It is also important to consider that evidence-based education practices will not be implemented broadly without cooperation from the private sector. In the field of medicine, pharmaceutical companies use a substantial portion of their profits to develop and study more effective strategies to prevent or cure illness. The motive is profit, to be sure, but the rigor of the research is monitored, and an elaborate federal bureaucracy exists to constrain dissemination of products that have not met high standards of evidence for effectiveness and safety.

The situation is quite different in education. Although educational practices are hugely influenced by products developed in the private sector, objective evidence on the effects of these products on student learning is rare. Until recently, there have been no incentives for carefully designed studies because buyers haven’t asked for evidence, and no outside agency has monitored the quality or even the existence of evidence.

There are signs that this situation may change as a consequence of the Bush administration’s policy of limiting funding (for example, in the Reading First initiative) to instructional programs that are research-based. The potential value of such a policy is clearly evident. Companies that produce educational products are beginning to figure out how to do credible research that will demonstrate the positive effects of their products on student learning. But we have a long way to go to develop mechanisms and organizational structures that will ensure critical and fair reviews of the evidence offered.

Finally, when evidence, however rigorous, is pitted against politics, politics always wins. Student retention is a good example of evidence that is consistently ignored. The lack of evidence for positive effects of retaining children in their current grade when they fail to meet minimum standards appears not to have stemmed the trend of “no social promotion” policies. More rigorous, clearer, and more consistent findings may help, but policymakers will need to be willing to give more weight to research findings than they now do if evidence is to have an impact on practice.

The bottom line is that education researchers, like educational practitioners, are being asked to approach their work differently from how they did in the past. We are being challenged to impose high standards of scientific rigor on ourselves, to focus on problems of practice, and to develop sustained collaborations with practitioners. If the resources needed to do this kind of research become available (they currently are not), we should be able to live up to the challenge.

But until many other institutional changes occur, and the organizational structures to support evidence-based practice are developed, research findings, however clear and useful, will have only a feather’s weight of influence on teaching and student learning in the nation’s schools.

We do need to improve the quality and relevance of education research, but that’s not all we need to do.
