The federal Investing in Innovation program helped build evidence of the effectiveness of new interventions, but also highlighted how much local education groups need support from regional and national experts to build successful ones.
That is the takeaway from an evaluation of the program, known as i3, that was released last week by the Social Innovation Research Center, a nonprofit think tank supported by the national venture-philanthropy fund New Profit.
The findings raise concerns about states' and districts' ability to develop and study their own school improvement and other interventions under the Every Student Succeeds Act. The new federal law gives districts much more flexibility to implement school improvement and other interventions but requires them to meet evidence standards modeled after those of i3.
"With all these things happening under ESSA with new evidence definitions, if you are going to just throw that out there and hope the locals will do it with no assistance, you are dreaming," said Patrick Lester, the director of the Social Innovation Research Center. "These [i3 grantees] are in the top 3 percent of [i3] applicants, they are supposed to be the cream of the crop, the elite of school districts, ... and we see what the results look like: For the most part, school districts were out of their depth."
The Obama administration launched the $1.4 billion i3 program in 2009 as part of the economic-stimulus education spending that also brought the much bigger Race to the Top and School Improvement Grant programs. Of those massive federal competitive grants, however, i3 is the only one to be codified in the Every Student Succeeds Act, as the revamped Education Innovation and Research program. Both iterations of the grants are intended to support the developing, testing, and scaling up of effective interventions for school improvement, early-childhood education, dropout prevention, and other education areas.
Readiness Lacking
Thomas Brock, the acting director of the U.S. Department of Education's research agency, the Institute of Education Sciences, said he agreed with the study's findings, adding that district and state research capacity "has been front on my mind since ESSA was passed, because it's clear it is trying to push this [evidence-based] work. My worry is ... there may just not be the readiness yet for states and localities to undertake this work, and even when there is readiness, it's still just very hard work."
About a third of all 44 interventions designed and evaluated under the i3 grants to date showed significant benefits, and others showed some positive results, according to the study.
That's more than double the average success rate for research and development in education; a 2013 study by the Coalition for Evidence-Based Policy showed that only 12 percent of well-conducted experimental evaluations of education interventions found positive effects. Final evaluations have been released only for projects from the first three years of the program, 2010 to 2012. But if the current success rate holds, Lester estimates 52 of the grant's 172 total projects will show evidence of success.
The i3 interventions helped fill gaps in the field on school turnaround strategies, science assessments, and the use of data to improve student achievement, the study found. Of the 12 projects that focused on teacher and principal professional development, however, only three found benefits to their interventions.
"It does support the argument for research-practice partnerships pretty strongly, to help districts on the evidence side, the analysis side, maybe even the data-collection side," said John Q. Easton, the vice president for programs at the Spencer Foundation and a former director of IES.
Elements of Success
The success of interventions skewed heavily toward experienced, well-funded organizations built around their own interventions, rather than grassroots efforts launched by school districts, Lester found. Of the 16 districts that received i3 grants directly, only three showed positive effects. By contrast, 3 in 4 university-led grants showed evidence of success for their interventions.
Part of that is because all but one of the district-led interventions received development grants, which required the lowest initial evidence base and which were also the least likely to show benefits. Interventions evaluated using a randomized controlled trial were as likely to show benefits as interventions studied under other statistical evaluation methods, but groups with more resources and expertise were more likely to undertake the so-called "gold standard" experimental studies in the first place.
Caitlin Scott, a manager in research and evaluation at the research group Education Northwest, agreed with the evaluation's recommendation that districts engaging in education research and development should receive better access to national experts and research partners who can take on the technical side of the projects. "[Districts] are in the business of educating students; this is not their daily mission," she said.