Much has been written of late in the debate over how best to assess teacher-preparation programs. As the dean of the school of education at Indiana University Bloomington, I understand that meaningful assessment of teacher preparation requires a multifaceted approach based on a robust research methodology and focused on program outcomes. A sound study, as researchers know, begins with a viable research question. The design and method of data collection then flow from that question. Moreover, the scientific validity of conclusions reached on the basis of the data depends on the ethical application of research principles and, where appropriate, validation of results through peer review and replication.
For more than 10 years, I have steered the public reporting of survey results from our graduates and from the principals who hire them. Our accreditation self-studies and accreditation decisions are also posted routinely for all to see. In addition, we systematically and proudly report on the performance of our graduates where such information is available. For instance, we publicly celebrate the fact that the last four Indiana teachers of the year, and seven of the last 10, are Indiana University Bloomington graduates. These educators are selected annually through a peer-review process based on nominations from superintendents and vetted by the Indiana Department of Education.
While it is true that I have openly criticized the research methodology of the National Council on Teacher Quality on the issue of teacher-preparation rankings, I want to emphasize, at the outset, that the decision of our faculty at the school of education at Indiana University Bloomington not to participate in the NCTQ studies has nothing to do with fear of accountability or concern about how our programs might be ranked.
In fact, the first national NCTQ teacher-preparation study in 2013, in which our school participated as a result of a public-records request, ranked our university's secondary education program among the top 10 percent of such programs nationally, earning it a place on NCTQ's "honor roll." Upon learning of the results, however, I issued a statement categorically rejecting the rankings because I believed the methodology behind them was flawed. I remain of the opinion that the NCTQ methodology is fatally flawed, but that is a discussion for another day.
The National Academy of Education recently proposed a set of principles for making decisions about teacher-preparation program evaluation. At the center of the proposal was the principle of validity, meaning "the requirement that an evaluation system's success in conveying defensible conclusions about a teacher-preparation program should be the primary criterion for assessing its quality." As the academy points out, finding imperfections in evaluation methods is easier than fixing them. But that's no excuse for making indefensible claims based on faulty methods.
As the report makes clear, evaluation is a complex process requiring careful alignment between the methods used and the purpose of the evaluation, be it ensuring accountability, providing information to consumers, or enabling program improvement. If the purpose of an evaluation is accountability, then the National Academy of Education insists that data "need to be collected scrupulously and interpreted rigorously, according to defined and accepted professional standards." In other words, a high-quality assessment system is necessary to draw valid conclusions about program effectiveness.
The Jan. 8, 2014, issue of Education Week included a letter to the editor from Gerardo M. Gonzalez, the dean of the school of education at Indiana University Bloomington, in which he explained why his school would not voluntarily participate in the National Council on Teacher Quality's teacher-preparation study. In a subsequent letter, which appeared in the Jan. 29 issue, the executive director of Teach Plus Indianapolis, Caitlin Hannon, and three Teach Plus teaching fellows challenged Mr. Gonzalez to "take the lead in creating a framework that holds preparation programs accountable for their graduates' performance."
The Commentary editors invited Mr. Gonzalez and Teach Plus Indianapolis to discuss how they might improve teacher-preparation accountability. Mr. Gonzalez's essay appears this week; Ms. Hannon and the teaching fellows' essay will be published in the March 26 issue.
If I were to design a study to hold preparation programs accountable for their graduates' performance, as the group Teach Plus Indianapolis has challenged me to do, I would start with the question of whether a given teacher-preparation program produces graduates who can work effectively in school classrooms to increase student learning and achieve other valued educational outcomes. Then, I would select or create appropriate measures of student learning and related educational outcomes, as well as ways to assess teachers' effectiveness in terms of their impact on those measures.
Although the research community is making progress on the assessment of both student learning and teacher performance, significant scientific challenges remain in creating valid and reliable measures that can control for extraneous influences and minimize random error. Nevertheless, it is now possible to use the results of student test scores, expert observation of teaching, teacher self-reflection, students' evaluations of teachers, work products of students and teachers, and other indicators to address specific questions on teacher effectiveness. As the reliability and validity of such assessment systems improve, it should be possible to link aggregated results to the programs that prepare teachers.
Some states, such as Louisiana and Tennessee, are experimenting with the application of these types of assessment systems to teacher-preparation accountability. Early results of these studies have been published in peer-reviewed journals and are being replicated or otherwise examined by the research community.
Currently in Indiana, legislation is moving through the General Assembly to create a teacher-preparation accountability system that, among other things, would link required annual teacher evaluations to the programs that prepare teachers. Various deans of education, members of our faculties, and I have testified in favor of the legislation. In addition to linking district-level teacher evaluations, which include evidence of measurable gains in student test scores, to teacher preparation, one attractive feature of the legislation is that it recognizes the role of the standards of the Council for the Accreditation of Educator Preparation, or CAEP, in teacher-preparation accountability. The CAEP standards focus on selection criteria for applicants to teacher-preparation programs; clinical experiences in schools; and measurable improvement against desired student outcomes. The council's methods include review of course syllabi, observations of student-teaching experiences, measures of growth in student achievement, results of surveys of program graduates and their employers, and other information collected through both examination of documents and program visits.
In a Sept. 18, 2013, Education Week Commentary, "Why the New Teacher Ed. Standards Matter," Mary Brabeck and Christopher Koch pointed out that the CAEP standards were not created "by policymakers in Washington, but by a broad cross section of stakeholders, including university and P-12 officials, representatives of nontraditional programs, chief state school officers, critics, union officials, and others who put aside their differences to create new expectations for the field." There's really no need for me to take the lead in developing a framework that holds preparation programs accountable for their graduates' performance. The field has already done that through the development of the CAEP standards and accreditation procedures.
Teach Plus Indianapolis issued a challenge to me that I've tried to address here. Now, I'd like to present Teach Plus Indianapolis with my own challenge: Insist that any studies you endorse or accountability measures you propose are based on valid data. At a minimum, that means using an appropriate methodology for the research question under investigation and, where warranted, obtaining Institutional Review Board approval, publishing results in peer-reviewed outlets, and replicating findings. It does not suffice to call for a tough conversation about teacher preparation if that conversation is driven by ideology and devoid of meaningful data to inform decisions.
The education and scientific community has a duty to provide a critical review of the evaluation methods used and to insist that conclusions reached as a result of the "conversation" on teacher-education accountability are based on valid data. Otherwise, it's just empty rhetoric.