The use of “value added” information appears poised to expand into the nation’s teacher colleges, with more than a dozen states planning to use the technique to analyze how graduates of training programs fare in classrooms.
Supporters say the data could help determine which teacher education pathways produce teachers who are at least as good as, or even better than, other novice teachers, spurring weaker providers to emulate those colleges’ practices.
The two states with the most experience using such data, Louisiana and Tennessee, have shown that it can be a powerful catalyst for change. Both can point to programs that have seen improvements in value-added scores after altering aspects of their programming. Nevertheless, teacher-educators and state officials alike continue to wrestle with how best to translate what are, in essence, fairly blunt measures of program effectiveness into a regular cycle for improving teacher education curricula.
“All value-added can do is signal to you that something’s the matter,” said George H. Noell, a professor at Louisiana State University, in Baton Rouge, who helps oversee the production of the value-added reports for teacher preparation in the state. “There aren’t many institutions that have practice getting student-level outcome data from their program graduates and figuring out what to do with it. It’s a whole new skill.”
New Experiments
For a concept that is only about a decade old, value-added is poised to expand rapidly in teacher education. Some 14 states are in the process of using value-added to examine how the graduates of preparation programs are faring.
At least 14 states are currently reporting, or have plans to report, value-added information on their teacher education programs, but not all of them will use the information formally for program accountability.
SOURCES: Center for American Progress
The basic idea behind value-added is to examine growth in student test scores, holding constant factors like poverty or family characteristics that could skew scores, and then to determine what impact teachers had on that improvement. For teacher education, the process goes a step further, by analyzing how graduates of particular programs have done in the aggregate to raise their students’ scores.
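The mechanics of that two-step process can be sketched in a few lines of code. This is a deliberately simplified illustration with made-up names and toy numbers, not the statistical model any state actually uses (real systems rely on far richer covariates and multilevel modeling): regress end-of-year scores on prior scores and a poverty indicator, treat each teacher’s average unexplained growth as that teacher’s effect, then average the effects of each program’s graduates.

```python
import numpy as np
from collections import defaultdict

def program_value_added(prior, poverty, post, teacher, program):
    """Toy value-added sketch: regress end-of-year scores on prior
    scores and a poverty flag, average each teacher's residual growth,
    then roll teachers up by the program that trained them."""
    # Controls: intercept, prior achievement, poverty indicator
    X = np.column_stack([np.ones_like(prior), prior, poverty])
    coef, *_ = np.linalg.lstsq(X, post, rcond=None)
    resid = post - X @ coef  # growth not explained by the controls

    # Step 1: each teacher's effect = mean residual of his or her students
    by_teacher = defaultdict(list)
    for t, r in zip(teacher, resid):
        by_teacher[t].append(r)
    teacher_effect = {t: float(np.mean(rs)) for t, rs in by_teacher.items()}

    # Step 2: each program's effect = mean effect of its graduates
    by_program = defaultdict(list)
    for t, p in set(zip(teacher, program)):
        by_program[p].append(teacher_effect[t])
    return {p: float(np.mean(es)) for p, es in by_program.items()}

# Toy data: graduates of hypothetical program "A" add 4 points of
# growth beyond what prior scores predict; program "B" graduates add none.
prior = np.array([50.0, 60, 70, 55, 65, 75, 52, 62, 72, 58, 68, 78])
poverty = np.array([1.0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0])
teacher = ["t1"] * 3 + ["t2"] * 3 + ["t3"] * 3 + ["t4"] * 3
program = ["A"] * 6 + ["B"] * 6
post = 0.9 * prior + 4.0 * np.array([1.0] * 6 + [0.0] * 6)

effects = program_value_added(prior, poverty, post, teacher, program)
```

On the toy data, the adjusted-growth estimate for program “A” comes out well above program “B,” which is the kind of gap the state reports surface; the hard part, as the programs quoted here attest, is working backward from that gap to a specific curricular cause.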
The concept has been controversial among teacher-educators, however, in part because of how it has historically been implemented. A preliminary Florida effort to issue value-added data to teacher colleges, in 2009, brought complaints from schools of education that the data were simplistic, flawed, and subject to misinterpretation.
Tennessee’s attempts to provide such information, beginning in 2008, were initially mired in faulty data and confusing reports, officials there say. “In the first few years, we realized that there were a lot of errors in the data that we needed to clean, and there was quite a bit of pushback,” said Katrina M. Miller, an official at the Tennessee Higher Education Commission who oversees the teacher-preparation portions of the state’s Race to the Top plan.
“Part of the reason we included value-added in the Race to the Top application was to start anew to develop a database, in collaboration with institutions, that would house all these data,” she said, “and make sure the linkages between teachers and higher education institutions were sound, and the reports were useful.”
Louisiana’s system, the most mature to date, has gradually gained acceptance among teacher-preparation programs, though some teacher-educators say they still have qualms about using test scores as a primary gauge of program effectiveness. (More measures will be added in a new set of performance standards, scheduled to debut in spring 2013.)
“It was frustrating at first. Based on earlier assessments, we always had exemplary status, and then to get these data showing some weaknesses, well, it was a shock,” said Gerald M. Carlson, the dean of the University of Louisiana at Lafayette. “But we ultimately said, ‘Let’s roll up our sleeves and see what we can find out.’ ”
The state’s system had identified the school, in 2008, as producing language arts teachers whose performance put them below that of other novice teachers. Similar problems appeared in a handful of other content areas in subsequent years’ reports. In response, the school set up teams of faculty to look at the curriculum, switched the sequencing of elementary math courses, and is now requiring faculty members to spend more time observing student-teachers, Mr. Carlson said.
Lackluster results in reading sent the Louisiana Resource Center for Educators scrambling to improve its training, said Nancy S. Roberts, its executive director. The Baton Rouge, La.-based alternative program, one of two private providers in the state, couples an intensive summer institute with on-the-job mentoring to train teachers. Officials realized after some deliberation that their training had not included enough explicit reading instruction.
“We were teaching reading in the content areas, but not enough on reading fundamentals. The districts had told us that they wanted to teach reading their own way,” Ms. Roberts said. “We will never make that mistake again.”
After hiring a reading specialist to help revamp the curricula, the program added some 35 hours of reading content to its summer coursework. Results have now begun to tick upwards, and in fact, the center鈥檚 reading results were the highest among alternative programs on the most recent report.
Drilling Down
Improvements in curricula generally take a while to show up in the value-added data, analysts say, since it takes at least two years for new candidates to be trained in the revamped classes, and several more for those teachers to instruct enough students to run the calculations.
The impetus for improvement aside, programs continue to grapple with how to home in on the specific changes that need to be made to their programming. Mr. Noell and other Louisiana officials now supply deeper-level analyses of the data to programs at their request.
They include descriptive information such as student scores on particular areas of the test (word problems, math reasoning, or computation, for instance) or scores broken out by teacher-certification type.
Though they don’t always show clear patterns, such “drill down” data have proved helpful to the University of Louisiana at Lafayette, where the data showed that students taught by the university’s elementary teachers struggled with essay questions. The university’s teacher-educators have worked to require more writing instruction in introductory English composition courses.
So far, Tennessee officials have been relying on public disclosure of the value-added information to encourage weaker programs to collaborate with the higher-performing ones. But the appetite for doing that varies.
“Some programs have reached out for training on value-added,” Ms. Miller of the higher education commission said, “but I can tell you in all honesty that some are ignoring it.”
Teacher-educators say part of the reason is that they’re still trying to locate other sources of data to help them interpret the results. While it generally has performed well on the value-added system, Lipscomb University struggled to determine why some math teachers weren’t as effective as others. It eventually drew on surveys of program graduates, as well as data from classroom observation, to target a graduate credentialing program for additional math coursework, said Candice McQueen, the dean of the Nashville, Tenn.-based university’s college of education.
Ms. McQueen, who sits on the executive committee for the state’s association of teacher colleges, believes it’s time for an intensive discussion about how programs are making use of the value-added data.
“Ultimately, we need to stop being defensive about this and to start looking at what we can use, and being proactive and specific about what other data points we need from the state,” she said.
Accountability Question
At least one major policy question about the systems is outstanding: Should the value-added information be used only for diagnostic purposes, or should it be formally integrated into states’ program-review and -approval processes?
So far, Louisiana alone appears to attach consequences for programs to the data. Some half-dozen programs have entered “programmatic intervention” (essentially, additional state oversight of the particular pathway or content area in which there appear to be weaknesses), though no programs have been decertified.
Tennessee officials want to integrate the student-achievement information into the state’s program-accreditation cycle, and the state is in the beginning stages of drafting such rules, Ms. Miller said. But according to a review by the Washington-based Center for American Progress, only five of the 12 states that won Race to the Top grants now plan to go beyond the reporting of scores to using them for program accountability. Its author, Edward Crowe, thinks that’s not enough.
“It strikes me as pretend accountability,” said Mr. Crowe, a consultant with the Woodrow Wilson National Fellowship Foundation, which runs a grant program to improve schools of education. “The history in teacher education is that nothing will happen, unless there is sustained external pressure to force programs to take a look at their own results.”
Another question that remains unanswered is whether public reporting will affect the larger marketplace in which such programs operate. None of the university administrators interviewed said they had experienced a notable increase or decrease in applications that they could trace back to the publicly reported value-added results. Nor is it clear whether districts and schools are using the information when hiring.
Still, Mr. Noell of LSU holds out hope that value-added will continue to catalyze program improvement.
“There could come a day where there are either no or few negative outliers,” Mr. Noell said. “And then I think you reset the conversation altogether. How can you raise the bar for teacher education overall?”