A slew of quiet changes in the proposed Senate bill to reauthorize federal education law would substantially increase the role of research in federal education programs.
The latest version of the bill reauthorizing the Elementary and Secondary Education Act, put forth by U.S. Sens. Tom Harkin, D-Iowa, the chairman of the Senate education committee, and Michael B. Enzi, R-Wyo., the committee’s lead Republican, was taken up by the panel on Wednesday after being introduced last week.
The bill has been controversial from the get-go, with civil rights groups criticizing its overhaul of the accountability system set up under the current law, the No Child Left Behind Act, and states and administrator groups voicing concern that its provisions are still too restrictive.
By comparison, the bill’s research-related provisions seemed to be flying under the radar. They would:
• Greatly increase the percentage of federal program funds devoted to evaluation and technical assistance: from 0.5 percent to 1 percent in the case of federal Title I anti-poverty programs, and to 3 percent in most other programs;
• Set the Institute of Education Sciences, the U.S. Department of Education’s primary research arm, as the lead agency to evaluate federal education programs, and require IES to help federal programs establish criteria for program effectiveness;
• Establish a permanent grant for the Investing in Innovation, or i3, program, originally created under the fiscal stimulus law, which provides three- to five-year competitive grants to conduct research to develop and scale up promising education programs and interventions; and
• Require more research evidence backing up the use of school improvement programs in some instances.
“I think they’ve got the fundamentals for a great [research] infrastructure in there,” said James W. Kohlmoos, the president of a Washington-based group representing regional educational laboratories and other research organizations that receive federal research funding. “There’s a recognition in total in this draft that we might be moving further down the road from ESEA being primarily an accountability bill to a solutions-oriented one that provides support for improvement.”
While the bill would not allow IES to pool all evaluation money into a single pot (a change long requested by IES officials and the National Board for Education Sciences, which advises it), the increase in evaluation set-asides could mean millions of additional dollars for IES. According to Sue Betka, the deputy director of administration and policy at IES, the institute spent $66.8 million for evaluations in fiscal 2011, including: $11.1 million in general ESEA set-asides; $7.8 million for Title I evaluation specifically; $20.2 million for national program evaluations; and $11.5 million for special education evaluations. Only $16.2 million came from general research, development, and dissemination funds for IES.
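A quick arithmetic check, using only the figures reported above, confirms that the four earmarked amounts plus the $16.2 million from general funds account for the full $66.8 million total:

```python
# Fiscal 2011 IES evaluation spending, in millions of dollars,
# as broken out in the article.
earmarked = {
    "general ESEA set-asides": 11.1,
    "Title I evaluation": 7.8,
    "national program evaluations": 20.2,
    "special education evaluations": 11.5,
}
general_rd = 16.2  # general research, development, and dissemination funds

# Sum the components and round to one decimal place to avoid
# floating-point noise.
total = round(sum(earmarked.values()) + general_rd, 1)
print(total)  # 66.8, matching the reported total
```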
Moreover, the bill would solidify IES’s authority to evaluate Education Department programs, rather than splitting responsibility between IES and the department’s various program offices. The Education Department had already announced that starting this fiscal year, IES’s National Center for Education Evaluation and Regional Assistance would conduct all program implementation and effectiveness studies of 18 months or longer, but the Harkin-Enzi bill would expand that authority. It calls for IES to coordinate all federal program evaluations for the department and help establish the evaluation criteria for those programs on the front end.
“Generally I’m pleased the bipartisan bill shows confidence in IES and our ability to do this work,” said IES Director John Q. Easton. “We are very eager to work with program people from the very beginning to design programs that can result in strong evaluations with little burden on participants. When planning works that way, we can even make implementation less burdensome on participants.”
In the past, evaluations have been planned after large grants went out, making it more difficult to determine criteria and collect data to measure a program’s effectiveness.
“It’s awkward to have it divided that way,” said Grover J. “Russ” Whitehurst, now at the Washington-based Brookings Institution and formerly the director of IES. “It’s much better to have one captain of the ship rather than confused coordination of the bridge. You need that independence to carry out an evaluation of a program in the department.”
Moreover, Mr. Whitehurst said, “having a real secure pot of money, knowing it’s going to be there over a number of years, would change the nature of the enterprise. Predictability of the fund would allow much more forward thinking about an evaluation plan. It would allow the evaluation of lots of programs that don’t get evaluated now, simply because there’s not enough in smaller programs to fund a real evaluation.”
Research in School Improvement
One example of the bill’s tougher requirements for research evidence comes in the area of school improvement. Under the bill, the bottom 5 percent of schools identified as “persistently low-achieving,” either for overall performance or for achievement gaps, would have to adopt one of a number of school improvement strategies. Among these, the “whole school reform” strategy allows districts to create their own turnaround plans, but only using programs and interventions that have demonstrated statistically significant improvements in student outcomes on “more than one well-designed or well-implemented experimental or quasi-experimental study.”
That provision raised red flags for Mr. Whitehurst. “I think it repeats the error in NCLB of using ‘scientifically based research’ as a requirement for using federal funds when there wasn’t a sufficient base of scientifically rigorous research to do what was required,” Mr. Whitehurst said.
While the Harkin-Enzi bill includes fewer references to “scientifically based research” than NCLB does, it sets a more stringent definition for programs backed by such research: they must be supported by randomized controlled experiments or quasi-experimental studies (considered the “gold standard” of education research) whose results can be repeated and have been accepted by a peer-reviewed journal or a panel of independent experts.
Since NCLB set randomized controlled experiments as the bar for “scientifically based research,” the number of such trials has exploded. In the past five years, for example, regional educational laboratories conducted 25 studies using experimental designs. Yet critics have argued the time, expense, and ethical dilemmas posed by full experiments make them difficult to use in education.
One Baltimore-based group offers one of the few whole-school interventions backed by experimental studies. Yet its chairman, Robert E. Slavin, admitted: “If we [as a field] had to [implement the school improvement model] this afternoon it would be trouble; there wouldn’t be more than a half-dozen programs for whole-school reform that have that kind of evidence.” Mr. Slavin also writes an opinion blog for edweek.org.
However, Fred Doolittle, a vice president and the K-12 policy director of a New York City-based research firm, argued most schools could weave individual evidence-based interventions into their larger school improvement plan, and that the requirement would drive demand for large-scale research on effective programs. “We’re not starting from scratch; over the last decade there has been a push for this kind of evidence in education, and as a result the capacity to do this kind of work has expanded.”
Jon Baron, the president of a Washington-based group, agreed. “You don’t need a million programs; you need a few that are backed by strong evidence, and you’ll build more over time,” Mr. Baron said. “One of the reasons building additional valid evidence of things that work is so important is a lot of the interventions currently in use that people think are effective might not be achieving their intended goals.”
Expanding Research
Among the experts interviewed, there also seemed to be a consensus that a permanent Investing in Innovation grant program could more quickly expand the pool of research-backed programs available. The program provides tiers of grants, at various funding levels, based in part on the level of evidence a program has already established.
“You encourage not just scaling up interventions backed by strong evidence, but these lower tiers are an opportunity for innovation, to see whether these programs are things that can work or not,” Mr. Baron said. “It opens it up to a much wider field. It’s attractive to much larger groups and constituencies.”
During the markup of the Harkin-Enzi bill, U.S. Sen. Michael F. Bennet, D-Colo., introduced an amendment to create a set-aside within the i3 program to develop the Advanced Research Projects Agency-Education, or ARPA-Ed, modeled on a cutting-edge research group within the U.S. Department of Defense known as DARPA. ARPA-Ed, which was originally proposed as a $90 million project in President Barack Obama’s fiscal 2012 budget plan, would focus on special projects “to aggressively pursue technological breakthroughs that transform educational technology and empower teaching and learning,” according to a statement from Sen. Bennet’s office.
Mr. Slavin said he thought an ARPA-Ed addition could fill gaps in i3. “When you want Disney or Pixar or National Geographic to get involved and solve some long-standing problems in education, I think [ARPA-Ed] could get them involved in a way that frankly a $5 million [i3] grant over five years isn’t going to do,” Mr. Slavin said.