U.S. Secretary of Education Betsy DeVos often points to a gloomy federal analysis of the Obama administration's multibillion-dollar School Improvement Grant program to make the case that big federal spending and direction don't make a difference.
The analysis, which was commissioned by the Institute of Education Sciences, the Education Department's research arm, found that the SIG program, which poured more than $7 billion into low-performing schools, had no significant impact on math and reading scores or on high school graduation rates.
But did that analysis give an accurate picture of the program? Two former Education Department officials, Alan Ginsburg and Marshall S. Smith, argue in a report released by FutureEd, a nonpartisan think tank at Georgetown University, that it missed the boat.
The analysis ignored more-localized studies that Ginsburg and Smith say painted a brighter picture of SIG's success. It also had some clear design flaws, they argue, and it missed the chance to help communicate best practices from states and districts where SIG worked, lessons that could be useful as educators and policymakers begin rolling out new school improvement plans under ESSA.
The IES study's authors dispute those findings. (More on that below.)
Ginsburg was a career staffer who served under both Republican and Democratic administrations; he retired as the department's director of policy and program studies. Smith was a Democratic political appointee who served in different roles during the Obama, Clinton, and Carter administrations, including as undersecretary under Clinton. Smith is now a senior fellow at FutureEd.
The IES study was conducted by two nonpartisan research organizations, Mathematica and the American Institutes for Research. It was released at the end of the Obama administration, just before the Trump team took office.
So what were the flaws in the analysis, according to Smith and Ginsburg? For one thing, it had a relatively small sample size and required students to make "unrealistically large gains" in order to show "statistically significant improvement." Mathematica and AIR set a higher benchmark for demonstrating that level of improvement than IES has used in other studies of schools, including a look at the KIPP charter network, the report says.
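To see why sample size matters in this dispute, consider a rough, back-of-the-envelope power calculation. The sketch below is a generic illustration of the statistical point, not the IES study's actual methodology or numbers: with fewer schools in the sample, only a much larger gain can register as statistically significant.

```python
from scipy.stats import norm

def minimum_detectable_effect(n_per_group, alpha=0.05, power=0.80):
    """Smallest true effect (in standard-deviation units) that a simple
    two-group comparison of this size can reliably detect."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_power = norm.ppf(power)           # desired chance of detecting a real effect
    return (z_alpha + z_power) * (2 / n_per_group) ** 0.5

# Illustrative only: the smaller the sample, the bigger the gain a study
# must observe before it counts as "statistically significant."
for n in (25, 100, 400):
    print(f"n = {n:>3} per group -> minimum detectable effect ~ "
          f"{minimum_detectable_effect(n):.2f} SD")
```

Under these illustrative assumptions, a study with 25 units per group can only detect effects of roughly 0.8 standard deviations, while one with 400 per group can detect effects around 0.2, which is the crux of the argument over whether the federal evaluation could have picked up real but modest gains.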
What's more, the schools in the federal study weren't representative of other SIG schools nationally, Ginsburg and Smith note. The study included a higher percentage of urban schools and students from disadvantaged backgrounds than SIG did overall. That skewed the sample, Ginsburg and Smith say.
And the findings contradict other, more-localized looks at SIG. Ginsburg and Smith examine studies that considered the performance of SIG schools in states such as California and Ohio. Drawing on twelve of those studies, the report concluded that "many SIG programs did indeed produce significant improvements in student achievement." Four of the studies used research methods deemed most rigorous by IES, and three of those showed gains.
"The fact they didn't recognize the differences in context is pretty serious," Ginsburg said in an interview. "It creates a lot of noise and can bury the positive findings."
Mathematica, though, says the study presented an accurate picture of the program.
"Our study had sufficient statistical power to detect meaningful effects. It's unlikely that there were substantively important impacts that were undetected by the study," said Joanne Pfleiderer, a spokeswoman for Mathematica.
Dana Tofig, a spokesman for AIR, concurred and said his organization stands by the study. He added that AIR had conducted one of the smaller-scale studies referenced in the report, and that it should be viewed in the context of the federal findings.
Here's a snippet from the FutureEd report showing net scores at schools included in the more-localized studies, as compared with the IES findings.
The feds also missed an opportunity to take a deeper look at why SIG failed in some places and succeeded in others, Ginsburg and Smith say in the FutureEd report. That information, they argue, could help states and districts figure out how to implement evidence-based interventions under ESSA.
For example, SIG schools in Houston, San Francisco, and Massachusetts put in place "comprehensive, research-based approaches" and had positive results, Ginsburg said.
And Ginsburg wishes that the department had released state-by-state test score data showing how SIG schools performed. He would love to see that kind of data made available for schools flagged for extra help under ESSA, too.