Is one-to-one tutoring effective in raising the achievement of struggling students? Research evidence from numerous studies says yes, but what if such tutoring is offered as a component of a large-scale federal program?
The latest renewal of the federal Title I program, part of the No Child Left Behind Act passed by Congress in 2001, instituted the supplemental educational services, or SES, program. Schools in their third year of failing to make adequate yearly progress, or AYP, are required to offer out-of-school-time tutoring in reading and math to low-income students. This program, an integral part of No Child Left Behind, allows either public or private agencies to provide the tutoring services, with money to pay for them allocated from districts’ Title I funds. In many cases, the services can account for up to 20 percent of Title I spending. The Los Angeles Unified School District alone, for example, directs more than $100 million per year in Title I funds to this program.
More than half a million children participate in SES each year, and several hundred tutoring agencies nationwide have been approved to provide supplemental services.
The stated goals of the SES program are to (1) ensure that students increase their academic achievement; (2) provide parents with options for helping their children receive a quality education; and (3) provide incentives to districts to bolster schools in need of improvement.
Much of the responsibility for program oversight lies with the individual states. Federal regulations require states to approve and evaluate all providers. According to the NCLB legislation, states must withdraw providers from the approved list if they fail to show evidence of improved academic achievement among the students they serve for two consecutive years. This process has proved difficult, however, because federal regulations do not specify how such evaluations should be conducted, and no federal funds are provided to carry them out. In many instances, providers are permitted to document student progress on their own test instruments, which naturally opens the door to bias. It seems only fair that providers be held accountable for improving student performance on the same measures that were employed to determine which schools would be placed in school improvement, corrective action, or restructuring status.
Since 2003, a number of states and local school districts have conducted evaluations of the effectiveness of SES providers in improving student scores on state assessments in reading and mathematics. As researchers working at Old Dominion University, we recently helped complete a synthesis of these studies using rigorous meta-analytic methodology. Our purpose was to determine the program’s overall effectiveness and to identify the provider characteristics that produced the most positive results.
Our hope is that these findings can be used to inform the design and approval of effective SES programs, and to assist in the development of scientifically based criteria for approving, continuing, and removing providers. The findings below are based on comparisons of test scores for SES students who received a minimum of 15 hours of tutoring with those of students who attended the same schools but did not participate in the SES program. In all, for both reading and math achievement, the study synthesized over 400 provider effects in 17 states or large school systems, with a sample size of 140,846 students in the math-achievement analyses and 139,844 in the reading-achievement analyses.
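For readers interested in the mechanics, the sketch below illustrates, in simplified form, one common way such provider-level effects can be computed and pooled: a standardized mean difference between SES participants and same-school non-participants for each provider, then an inverse-variance weighted average across providers. The function names and the fixed-effect weighting are illustrative assumptions for exposition, not the study’s exact procedures.

```python
import math

def provider_effect(treated_scores, comparison_scores):
    """Standardized mean difference (Cohen's d) for one provider:
    SES participants vs. same-school non-participants on a state test.
    Illustrative only; an actual synthesis may adjust for covariates."""
    n_t, n_c = len(treated_scores), len(comparison_scores)
    mean_t = sum(treated_scores) / n_t
    mean_c = sum(comparison_scores) / n_c
    var_t = sum((x - mean_t) ** 2 for x in treated_scores) / (n_t - 1)
    var_c = sum((x - mean_c) ** 2 for x in comparison_scores) / (n_c - 1)
    pooled_sd = math.sqrt(((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd
    # Approximate sampling variance of d (standard large-sample formula)
    var_d = (n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c))
    return d, var_d

def pooled_effect(provider_effects):
    """Inverse-variance weighted mean across (d, var_d) pairs (fixed-effect model)."""
    weights = [1.0 / var for _, var in provider_effects]
    return sum(w * d for (d, _), w in zip(provider_effects, weights)) / sum(weights)
```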
The overall results suggest that the SES policy as currently designed and implemented is not meeting its primary goal of improving student achievement. The mean effect size for reading was smaller than +0.02, and the mean effect size for math was about +0.04. Both were positive and statistically significant, but much smaller than the effects obtained in previous syntheses of Title I programs. For example, Geoffrey Borman of the University of Wisconsin reported average effects of +0.11 in rigorous studies of the Comprehensive School Reform program, support for which has all but been eliminated in recent reauthorizations. Further, the comprehensive-reform program never received the massive infusion of federal funding associated with the SES program, which costs more than $2 billion per year. Indeed, much of the SES funding initially goes unspent because of low student participation, and these millions are then expended in an unplanned and likely wasteful manner near the end of the fiscal year.
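To put effect sizes of this magnitude in perspective: under a normality assumption, an effect of +0.04 moves a student from the 50th percentile of the comparison-group distribution to only about the 52nd. The arithmetic, sketched in the same illustrative spirit as above:

```python
from statistics import NormalDist

# Shift an average (50th-percentile) comparison-group student by each effect size
# and report the resulting percentile, assuming normally distributed scores.
for effect in (0.02, 0.04, 0.08, 0.11):
    percentile = NormalDist().cdf(effect) * 100
    print(f"effect size +{effect:.2f} -> percentile of the average participant: {percentile:.0f}")
```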
First, any way you slice it, SES effects on reading are extremely small: No sub-analyses revealed reading effects above +0.03, which was the average effect for providers who offered one-to-one tutoring. Syntheses of other studies of one-to-one tutoring typically show much larger effects; but in fairness, the individual studies on which they are based tend to use more-sensitive measures than the state accountability tests.
Second, and more encouragingly, our research synthesis points to promising ways to improve the program by identifying several provider characteristics associated with larger gains in math. By far the most important was hiring tutors with four-year college degrees. When providers used only degreed tutors, the effect size for math jumped to +0.08; when nondegreed tutors were used, it shrank to less than +0.02.
We also found that district providers had much larger effects than commercial providers, with an average math effect size of +0.09 vs. +0.02. The use of a prescribed curriculum and the provision of initial and ongoing training for tutors also enhanced SES effects on math achievement.
The legislative intent of the SES program is to narrow or close achievement gaps by improving the academic achievement of historically underperforming populations. Based on the evaluation findings, however, the reality is that the program has not had the desired impact on test scores.
It might be unfair, though, to expect the effects to be much larger. After all, the average SES student receives only 20 to 40 hours of tutoring, the equivalent of roughly one extra week of school, over an entire school year. To maximize the benefits of these services, policies for structuring and approving SES provider programs should be re-examined. For example, as our own research indicates, when school districts were granted an exception and allowed to offer their own supplemental-services programs, the effects on math achievement were three times greater than those achieved by external providers. The district programs also were offered at a fraction of the cost.
Regardless of who does the tutoring, however, evaluation findings suggest strong needs to increase the participation of eligible students, boost the attendance rates of those enrolled in tutoring sessions, and strengthen the connection of the tutoring instruction to what students are learning in the regular classroom.
As Congress considers the reauthorization of the Elementary and Secondary Education Act, a careful review of the current law’s supplemental-educational-services provision seems warranted in light of these findings.