The federal Investing in Innovation (i3) program produced only a handful of true breakout educational interventions, according to its final evaluation, and the track record of the projects funded under the program highlights the rocky path forward for school districts nationwide working to use evidence under the Every Student Succeeds Act.
The Institute of Education Sciences last week released its final evaluation report on the $1.4 billion i3 program, the only Obama-era competitive grant to be codified into the Every Student Succeeds Act. Of the 67 grants with evaluations completed by last May, nine, or 13 percent, had both tight implementation and strong positive effects.
I3's tiered evidence model, which awarded more money to programs with a larger evidence base and has since formed the backbone of ESSA's tiered model for evidence-based school improvement interventions, seemed to be effective. Half of the highest-tier "scale-up" grants, which offered up to $50 million but required the most initial evidence, produced evidence that the programs were well implemented and benefited students. By contrast, only 8 percent of the lowest-tier "development" grants, which awarded $2 million to $5 million to interventions that seemed promising but had the least initial evidence, showed significant positive effects.
Building Evidence
"This report shows how difficult it is to do good work," said Patrick Lester, the director of the Social Innovation Research Center, who has conducted separate studies of the i3 program.
"There's a certain issue of competence here," Lester said, noting that the evaluations found successful i3 programs, including the Knowledge Is Power Program charter schools and the Reading Recovery tutoring program, had strong plans for developing evidence, monitoring implementation across schools, and gathering the data that would be needed to evaluate the program later. "Those who are successful are often well-funded, have top-notch people, generally have their stuff together across the board," he said.
ESSA does not prescribe specific interventions for schools in need of improvement, but it does call for districts to use interventions with evidence of being effective for the kinds of students they want to help. If no research-based interventions are available, districts are allowed to develop, test, and evaluate their own. But in contrast to the grantees developing and testing interventions under i3, Lester said, districts working to develop their own school improvement interventions often have little technical support, either in designing programs or in collecting the data needed to evaluate them.
"I'm very worried about ESSA; ... I do not expect most schools, regardless of how much they say they're focused on evidence, to build interventions that have any effect at all," he said.
The i3 program's 13 percent success rate seems small but is not surprising for programs of this kind, according to Barbara Goodson, a co-author of the study and a principal scientist at the research firm Abt Associates. One report by the Coalition for Evidence-Based Policy found that about 12 percent of rigorously evaluated education interventions show positive results. And during i3's creation, the White House cited a success rate of about 10 percent for business research and development projects.
"You look at the number and don't know if you should be really excited or really disappointed," Goodson said. "A lot of the advances were not particularly sexy, not earth-shattering headlines, but the real innovation in these experiments may be less about the findings of the programs [than about] improving the infrastructure of the field."
For example, Goodson said program developers and districts in i3 learned how to continuously evaluate and improve their interventions. Researchers also submitted their evaluation plans in advance, leading to less cherry-picking of good results after the fact.
One of the successful development grants was Building Assets, Reducing Risks, a student support program that has since built up enough evidence to earn scale-up funding under the Education Innovation and Research grant program, i3's successor in ESSA.
Angela Jerabek, BARR's executive director, said the program owed its evolution and growth to the technical assistance and ongoing evaluation in i3.
The "big lesson" for districts trying to develop their own school improvement programs under ESSA, Jerabek said, is that "individuals and districts need to have the patience to try an intervention and test it before trying to scale it up. That's a big Achilles' heel, because ... if your very first idea becomes the final program, it's rare that it will have a consistent benefit for students."
I3 also highlighted problems in current, popular education research approaches. For example, both i3 and ESSA encourage districts to use administrative data (the day-to-day attendance, participation, unit tests, and other information that districts collect on their own) to study interventions and make changes quickly. But Goodson said administrative data didn't help much for many programs, because some states did not test annually in subjects such as science, and it was difficult to gauge the effects of interventions when states or districts changed their testing systems.
The evaluation called for better technical assistance for those trying to develop interventions.
"I do think if you really care about generating evidence, it takes a consistent message, a clear framework ... and some kind of resources for districts to go to when they hit problems," Goodson said. "Nobody's going to pay for Abt to give individual assistance to every district" trying to develop an intervention.
For example, BARR's initial model, built around a year of teacher training on using data to support student transitions, evolved over time into a three-year professional development program that today includes ongoing professional learning networks in more than 56 school districts nationwide.
"One of the challenges with the i3 program is that the message goes out that you are just trying to identify what works: find more of what works and less of what doesn't," said Vivian Tseng, the senior vice president of programs for the William T. Grant Foundation, which supports efforts to improve research use in education. "But we need to be able to glean lessons learned, so that people who are trying to do these Tier 4 [interventions under ESSA] can build them to yield useful lessons not just about did it work or not work, but how to improve the next time around."