The U.S. Department of Education's $650 million experiment to find and scale up innovative education ideas was a mixed success: for the first time, money was awarded to programs that showed evidence of past success, but those rigorous standards also produced a list of winners full of the "usual suspects," a new report finds.
The report released today by Bellwether Education Partners, a Washington consulting firm, hammered away at a crucial question: Was the Obama administration's program successful in finding truly innovative ideas that will improve K-12 education?
"Is it immediately obvious that they found breakthrough innovation? No, but that wasn't necessarily their purpose," said Kim Smith, a co-founder and CEO of Bellwether, which is working with support from the Rockefeller Foundation on research about innovation. The report is the culmination of interviews with dozens of i3 applicants, winners, and philanthropists, plus a review of public documents about the program.
"I think the department accomplished some really important things. It motivated a lot of action in the field. [The Department] is really juicing up the innovation ecosystem, and it's going to take a little while to start to make progress."
As the Aug. 2 deadline nears for a second, smaller round of Investing in Innovation, or i3, grants, the report acknowledges that in many ways, the competition itself was innovative, especially for a federal education department that is more accustomed to handing out grants via formula than through a competitive process.
Last year, nearly 1,700 applicants vied for $650 million in prize money, which was funded by the 2009 American Recovery and Reinvestment Act, the economic-stimulus package passed by Congress. Forty-nine winners were chosen, with awards split into three tiers ranging from nearly $5 million to $50 million. The biggest awards went to the proposals with the strongest research base.
Although this year鈥檚 i3 round will award only $150 million, interest does not appear to have waned. Nearly 1,400 would-be applicants told the Education Department they plan to apply.
In today's i3 report, the researchers give the department credit for encouraging partnerships between the philanthropic sector and K-12 public education by requiring winners to secure matching dollars and establishing an online registry where foundations and education entrepreneurs could find each other.
And, researchers said, the department took a bold and significant step in requiring varying levels of evidence for each type of innovation grant, acknowledging that some ideas and innovations might be worthy of government investment but have far less research to back them up. This evidence framework was "a giant leap forward" and "by far the most significant innovation that i3 brought to the table," the researchers said.
But this rigorous evidence framework came at a cost, since it favored ideas that had been around long enough, and had enough financial backing, to make evaluations possible. The result, the researchers said, was a "pool of applicants and grantees made up of existing organizations that had already addressed K-12 schooling in some way."
The winners included such well-known entities as Teach For America, the Knowledge is Power Program, and the Reading Recovery program through Ohio State University.
The report quotes one unnamed i3 applicant who said: "Neither the iPhone or iPad teams at Apple would have been able to meet this standard to get the funds to initiate these projects."
Frederick M. Hess, the director of education policy studies at the American Enterprise Institute, agrees, but he doesn't necessarily fault the department.
"It did not find innovative programs because it was not set up to find them," Mr. Hess said. "They chose to write rules which required established evidence of effectiveness. That's perfectly reasonable. You're giving away $650 million in tax dollars."
Branding Issue
For the department, part of the problem became the competition's name.
In the beginning, the department called it the Invest in What Works and Innovation Fund, but that name was later simplified to Investing in Innovation, and given the "i3" nickname.
James H. Shelton, the department's assistant deputy secretary for innovation and improvement, acknowledged that the department may have done itself a "bit of a disservice" by taking "what works" out of the name, thus setting up unrealistic expectations about the kind of innovation the department would fund.
However, he pointed out that although the list of winners included well-known organizations, it also contained applicants with no national profile. (For example, the 27,000-student St. Vrain Valley School District, in Colorado, was the highest-scoring winner, netting $3.6 million for its plan to use targeted reading and math interventions with English-language learners.)
"We had two important criteria: that a proposal be significantly better than the status quo, and two, that it goes to scale," Mr. Shelton said.
The report notes that more involvement from the for-profit sector could have led to more innovative proposals, especially in the area of technology. In this case, however, the department鈥檚 hands were largely tied. The legislation Congress passed creating i3 made districts, groups of schools, and nonprofit partners the only eligible grant recipients. That left no opening for applicants from the for-profit sector, which is far more likely to embrace risk than the government.
In the end, the list of first-round winners disappointed many foundation officials, the report said. This is an important point, because foundations and other private-sector organizations were called upon to provide 20 percent matching funds to the winners. (The matching requirements have been lowered to between 5 percent and 15 percent, depending on the tier, for the second round.)
Some foundation leaders referenced in the report indicated there were few winning proposals that they wanted to fund. And they were further disappointed by the winners in the smallest "development" category, where the chance of finding creative, unique ideas seemed more likely given that a less rigorous research base was acceptable.
"Out of the development grants, I would be amazed if these grantees really develop into game-changers," one funder is quoted in the report as saying.
In their narrative, the researchers question whether such criticism is based on a thorough review of the winning proposals, or merely a quick glance at the list of winners.
Other education policy observers, however, argue that some critics have an unrealistic definition of what innovation means.
Simply scaling up an education program, or implementing an idea in a part of the country that has not seen such a thing before, can be innovative, said James W. Kohlmoos, the president and chief executive officer of the Washington-based Knowledge Alliance, which represents research groups.
"If innovation is doing something different to create improvement, ... that kind of a meaning can have broad applications," he said. "Innovation can be something very small."
Selection Process Reviewed
The researchers also took a hard look at the selection process for winners, which relied almost exclusively on a cadre of outside peer reviewers who scored each application. The report questions whether strict rules the department used to weed out peer reviewers with potential conflicts of interest may have eliminated the most-qualified reviewers from the pool, leaving "district data officers and retired professors" as judges who favored "more incremental innovations." It also asks whether the department could have used the peer reviewers differently, rather than relying on them entirely to pick the winners.
While acknowledging the quick timeline on the project, with awards made just three months after the application deadline, the researchers questioned whether the reviewers had enough training. And while the department tried to reconcile differences in scores among reviewers who judged the same applications, that "norming" process may actually have watered down the reviews, the report said. As part of the process, the department gave the judges a chance to come together, review, and even revise their scores. One reviewer quoted in the report said reviewers often regressed to the mean and deferred to the most conservative scorer.
As the Aug. 2 deadline for second-round applications nears, the department has made some changes to the process. Peer reviewers will no longer judge the evidence applicants present; that responsibility will instead fall to experts at the Institute of Education Sciences, the department's research arm. The peer reviewers will focus on other scoring categories, such as how much need there is for a project and how much experience the applicant has.
In addition, Mr. Shelton said, the department is going to give peer reviewers better training based on lessons learned, and more of it.