The Painful Necessity of Replicating Research

By Jonathan A. Plucker & Matthew Makel, November 03, 2015

A study published this summer in the journal Science estimated the "reproducibility" of psychological research. It rightly received massive media attention, much of it centered on questions of whether research in psychology should be trusted. But the research in that field is in good company.

Academic research, especially in the social sciences, is undergoing a profound change today that is born of a moment of crisis about the trustworthiness of research findings. There has been increased scrutiny over when we "know" what we think we know. Such scrutiny includes questions about whether a single study can or should serve as a definitive answer to a question, as well as questions about how statistics should be used and interpreted. As a colleague once asked us, "Would you set national policy based on the results of a single study?" We would not, nor should anyone else.

Education isn't ignoring these issues, but such questions do not yet dominate our education research discussions. Our 2014 paper in Educational Researcher spurred significant discussion, but more than discussion is needed.


Education can make a singular contribution to the evolution of how social-science research is conducted and interpreted. This is because the field has vast experience in an area highly related to replication: program evaluation. For example, despite generally being conducted with the best of intentions, some replication attempts are being met by a considerable and growing backlash, accusations of bullying, and even concerns about possible bias in replication. These are exactly the issues that often come up in discussions of education program evaluation. So we can pull lessons from evaluation to make sense of the growing paradox surrounding replication: How can something almost universally acknowledged to be valuable be so often reviled and controversial?

As researchers who have been involved in both replication research and program evaluation, we believe that if replication is viewed as a special case of evaluation, members of the education community can (and should) lead the charge on using replication to improve scientific research. What follows are some lessons we've pulled from years of evaluating education programs, or, more to the point, lessons about the psychology of program evaluation. They are not meant to be exhaustive, but rather a springboard for more discussion about replication within the education sciences.

  • No One Likes to Be Evaluated.

    Everyone tends to be a fan of evaluation ... until their work is the focus of those verification efforts. Replication is no different. When one of us was involved with an evaluation of a federal agency, most of the staff members were helpful and congenial; ironically, the unit within the agency tasked with promoting rigorous program evaluation was the most resistant to being evaluated. It's just not human nature to welcome an external evaluation with open arms.

  • A Weak Defense Is Often Worse Than No Defense.

    Because of the aversion to evaluation, a common response by someone whose work is being evaluated is, "But we've already been evaluated!" These previous evaluations, upon closer inspection, often turn out to be self-evaluations, evaluations conducted with or by close colleagues, or those based on satisfaction surveys (that is, a low level of evidence). This defensiveness weakens one's arguments from the start and should be avoided. As replications slowly become more common in the social sciences, we have observed similar knee-jerk responses (such as complaints that "my study has already been replicated," when, on closer examination, that proves not to be the case). The best compliment for anyone's research can be found in multiple, independent replications of the original study. That may not be fun for the researcher, but that's science.


  • Don't Be a Jerk.

    The motivation behind the vast majority of replications we've seen is to conduct sound scientific inquiry. However, any expectation on the part of replicators that the replicatee will be thrilled to have his or her work evaluated is probably naïve, especially if that researcher is approached in a manner that could be interpreted as hostile. An evaluator whose goal is to prove someone wrong is not one who will be well received, but an evaluator seeking to understand what is (or isn't) happening, in an open and fair manner, will be much more welcome. Yet, in other fields, there have been instances of poor judgment, in which replicators have discussed their largely negative results on blogs in unfortunate tones. To paraphrase The Dude from the movie "The Big Lebowski," they're not wrong, they're just jerks. Honest, rigorous evaluation is an essential component of academic research. Being a jerk, gloating, and bragging do not need to be part of the research process, and can work against the effectiveness of a replication attempt.

  • Replication Isn't Easy.

    Just as there are best practices when conducting a program evaluation, standards for conducting replications should be established. Replication procedures, however, are still in their relative infancy. Many scholars, including the economist and Nobel laureate Daniel Kahneman, have recently proposed a new etiquette for replications, suggesting that replicators must make a "good-faith effort to consult with the original author" and then report this correspondence along with the final manuscript, so that reviewers can integrate it into their process of assessing the replication. This way, original authors who are not responsive or helpful cannot tank the replication, and replicators who don't accurately replicate the original methods are identified before publication.

    These suggestions, in the main, make sense to us. But we aren't convinced that a formal partnership is necessary. We are both in the process of conducting replications of major studies within our fields of interest. In both cases, we approached the original authors to let them know we loved their studies and wanted to replicate them, not out of any sense that they are wrong or fraudulent, but because their results, if replicated successfully, are potentially very important. Both sets of authors responded enthusiastically. If one treats an evaluation as an aggressive exercise, things will not go well. Replication is no different.

  • Don't Judge a Book by Its Cover.

    Others have suggested that the relative inexperience of a researcher could be associated with failure to replicate original findings, with the implication that graduate students and junior faculty members should not conduct replications. Experience can be helpful, to be sure, but casting aspersions on entire groups of researchers is the type of argument social scientists typically spend their careers fighting, not propagating. There are plenty of weak evaluators out there, and it stands to reason that there are also plenty of weak replicators. But those newer to a field can often be the best at identifying potentially fatal flaws in research findings. And what better way to learn methods than to replicate seminal studies?

  • Results Are Rarely Appreciated at the Time.

    The results of a program evaluation are often underappreciated when the study is concluded. This is especially true when the report contains constructive criticism and recommendations for significantly improving the program in question. But after a period of time (weeks, months, or even years), people gain emotional distance from the recommendations, take them much less personally, and view the suggestions for improvement in a new light. The same will likely be true with replications.

Replication is a critical, if underused, part of the scientific process. It has become both more popular and more controversial recently, but we should not allow the controversies to outweigh the many benefits for education. Because inaccurate findings pollute the scientific environment, the goal of a good replicator should be to identify these pollutants so that they can be removed, but with the tacit admission that one person's pollutant may be another person's life's work and passion. We hope other education researchers join us in our fight to change the research climate to one that encourages clean and kind replications.

A version of this article appeared in the November 04, 2015 edition of Education Week as The Painful Necessity of Replicating Research
