If you know how much difference good teachers can make, and how hard it can be to spot one from a résumé, you can appreciate the value of new, analytical research being done by those who toil in the realm of “administrative” data kept on teachers.
That label refers to the information kept by states and school districts to track teachers for such purposes as pay and licensure, and that information is proving to be a treasure trove for officials seeking a better understanding of teacher quality.
The day is not far off, teacher-quality advocates say, when a host of professional and policy decisions could be informed by analysis of data from thousands of teachers and students observed over time. Such longitudinal data allow researchers to measure changes in student achievement and to link them with teacher characteristics.
“They hold the potential of answering the questions that are important to quality teaching and, ultimately, the best education for our kids,” says Jacqueline J. Paone, the executive director of Colorado’s Alliance for Quality Teaching, a coalition of business leaders, state policymakers, and educators.
For instance, teacher-preparation programs could be slated for overhauls, or not, depending on how well their graduates perform. Or state policies could reflect new knowledge about which qualifications indicate teacher effectiveness.
Emerging Conclusions
Already, researchers using teacher data from Florida, New York, North Carolina, and Texas have begun drawing some important conclusions:
- Teacher effectiveness varies enormously within schools and districts, although teachers are consistently weakest in their first year or two.
- After the novice years, the path a teacher took into the classroom seems to make little difference, and the value of experience does not build in equal increments with years on the job.
- A few good teachers in a row can raise students’ achievement significantly.
Ultimately, says Jane Hannaway, the principal investigator for the National Center for Analysis of Longitudinal Data in Education Research, or CALDER, at the Urban Institute, such data systems could give “real-time feedback at the classroom level.”
Eventually, that feedback could allow specific types of students to be matched with specific types of teachers, with observation of whether the pairings helped the students learn more.
The teacher data sets go back decades in some cases, and remain useful in their own right. For example, researchers with the Illinois Education Research Council mined that state’s records on who is teaching in Illinois, discovering that new-teacher attrition from the profession has remained fairly constant since the late 1980s and, in the vast majority of Illinois schools, does not constitute the crisis that has been widely claimed nationally. The finding highlighted the special problems of a subset of schools that overwhelmingly serve children from low-income families.
Peering Inside the Box
But even the riches of data like those the research council used pale in comparison with data linking teacher characteristics and circumstances over time with student-assessment data. Such a link allows researchers to peer into the “big black box in education,” says Hannaway.
Researchers have figured out that teaching dwarfs other in-school contributors to student academic growth, but still don’t know much about how that happens.
CALDER was founded last year as a partnership between the Washington-based Urban Institute and scholars from six universities to take advantage of the burgeoning data on student achievement engendered by state and federal accountability systems, especially in those states that have good information on teachers. The researchers intend to focus initially on Florida, Missouri, New York, North Carolina, Texas, and Washington state, which have such comprehensive databases.
The center鈥檚 work is funded by the Institute of Education Sciences, the research arm of the U.S. Department of Education, which last summer awarded more than $62 million in grants to 13 state education departments for the design and upgrade of state longitudinal-data systems. That was the second round of grants under the program, which seeks to improve the quality and comprehensiveness of such systems for the purposes of federal reporting, research, and decision-making.
It’s not an easy job, just from a technical point of view, according to Hannaway. Typically, data on student achievement and on teacher characteristics are housed in different “silos,” sometimes more than one computer system, even within the same government agency.
“Getting all these data to talk to each other is not a trivial task,” Hannaway says. “They need to be linked together and linked over time.”
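The kind of linkage Hannaway describes can be sketched in miniature. The snippet below joins two hypothetical “silos,” a teacher file and a student test file, on a teacher identifier and a year; all field names and values here are illustrative, not drawn from any real state system.

```python
# Minimal sketch (hypothetical data): linking teacher records to
# student test scores by a unique teacher identifier, across years.
teachers = [
    {"teacher_id": "T01", "year": 2006, "experience": 1, "cert": "standard"},
    {"teacher_id": "T01", "year": 2007, "experience": 2, "cert": "standard"},
    {"teacher_id": "T02", "year": 2007, "experience": 9, "cert": "national"},
]

# One record per student per year, tagged with the teacher's ID --
# the teacher-to-student match the text describes.
scores = [
    {"student_id": "S1", "teacher_id": "T01", "year": 2006, "score": 48},
    {"student_id": "S1", "teacher_id": "T01", "year": 2007, "score": 55},
    {"student_id": "S2", "teacher_id": "T02", "year": 2007, "score": 61},
]

def link_records(teachers, scores):
    """Join the two 'silos' on (teacher_id, year)."""
    index = {(t["teacher_id"], t["year"]): t for t in teachers}
    linked = []
    for s in scores:
        t = index.get((s["teacher_id"], s["year"]))
        if t is not None:  # keep only rows where the IDs actually match
            linked.append({**s, "experience": t["experience"], "cert": t["cert"]})
    return linked

for row in link_records(teachers, scores):
    print(row)
```

In a real system the join key would be the state's unique teacher identifier, and the hard part is the data hygiene, making sure the same person carries the same ID in every file and every year, rather than the join itself.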
States Uneven
The Data Quality Campaign, a 2-year-old effort to promote state longitudinal-data systems in education, calls a unique teacher identifier one of the 10 essentials of doing the job right. The identifier鈥攁 number or code that pinpoints one individual鈥攕hould also have the ability to match teachers to students, according to the campaign, which is based in Austin, Texas, and financed by the Bill & Melinda Gates Foundation.
But, according to the group, just 13 states have data systems far enough along to answer the question: Which teacher-preparation programs produce the graduates whose students have the strongest academic growth?
Louisiana is one of the states that, at least partially, can answer that question. Officials there unveiled last fall their first official results from a data system built specifically to gauge the effectiveness of Louisiana’s teacher-training programs.
A 2007 survey by the Data Quality Campaign finds that all but four states and the District of Columbia assign unique identification numbers to all teachers. Of the states that track teachers, only 12 can link teacher IDs to data on their students’ performance.
SOURCE: Data Quality Campaign, 2007
Novice teachers grouped by undergraduate preparation program are being measured against experienced teachers, using student test-score gains. The idea is not to assess the teachers themselves, but to uncover the strengths and weaknesses of their preparation.
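As a rough illustration of the comparison Louisiana describes, the sketch below averages hypothetical test-score gains by preparation program and sets each program against the mean gain of experienced teachers; the program names, numbers, and layout are invented for the example.

```python
# Hedged sketch: novice teachers' average score gains, grouped by
# (hypothetical) prep program, vs. an experienced-teacher baseline.
gains = [
    # (prep_program, or None for experienced teachers; student score gain)
    ("Program A", 4.1), ("Program A", 5.3),
    ("Program B", 2.0), ("Program B", 2.8),
    (None, 4.5), (None, 3.9), (None, 4.8),
]

# Baseline: mean gain under experienced teachers.
experienced = [g for p, g in gains if p is None]
baseline = sum(experienced) / len(experienced)

# Group novices' gains by the program that trained them.
by_program = {}
for p, g in gains:
    if p is not None:
        by_program.setdefault(p, []).append(g)

for program, vals in sorted(by_program.items()):
    mean = sum(vals) / len(vals)
    print(f"{program}: mean gain {mean:.1f} "
          f"({mean - baseline:+.1f} vs. experienced baseline {baseline:.1f})")
```

A production analysis would adjust for student background and prior achievement rather than comparing raw means, but the shape of the question, program-by-program gains against an experienced benchmark, is the same.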
“We’re just going to another whole level of measuring the effectiveness” of training, says Jeanne M. Burns, the Louisiana education department’s associate commissioner for teacher education initiatives.
In the past, she says, the state’s accountability system for teacher-preparation programs centered on such measures as aspiring teachers’ passing rates on the certification exam, results from surveys of new teachers, and numbers of teachers produced for shortage areas.
The new system will also allow researchers to gauge the effectiveness of various local versions of a state-required program for supporting new teachers.
Burns says Louisiana’s system ran into little public opposition because it grew out of the work of a blue-ribbon commission on teacher quality that linked effectiveness to student learning gains.
“If they had not identified growth of learning as part of our teacher-preparation accountability system, I don’t think we’d be where we are today,” she says. Officials and advocates have been clear, she adds, that the endeavor is not “about getting rid of teacher-preparation programs.”
In Colorado, proponents of more advanced data systems that include teacher characteristics have further to go. The state founded an “educational data warehouse” in 2001, and it has made strides in being able to track students over their school careers. The state education department also collects teachers’ Social Security numbers as part of licensing.
But in practice, says Paone, of the Alliance for Quality Teaching, advocates are left without a ready source of information that they can use to track teachers from job to job.
“We hear, anecdotally, that teachers move from urban districts to more affluent suburban ones,” she says, “but we have no data to support that.” Knowing whether those suburban districts are, indeed, drawing teachers would help school leaders design policies that would keep more experienced teachers in place, Paone says.
Teacher Concerns
Yet tensions remain around building such data systems. Teachers and their unions, in particular, worry about systems that link teacher data with student-achievement records.
While such systems have the potential to yield rich information on differences that affect student learning, they also raise a thorny question: Might teachers be ranked, assigned, or fired on the basis of such data?
To steer clear of the question, some experts advise a clear focus on student improvement, measured by assessment data, as the goal of any teacher database.
Whatever the exact output of the data system, those who design it must get local districts to see the value of the work beyond the administrative tracking that serves them directly, the experts say.
“If people resist this, what gets in the system is very bad data,” says Jay Pfeiffer, who heads the Florida education department’s accountability, research, and measurement division. “You have to get them enthusiastic about it.”
Pfeiffer warns, too, that the system’s partners, those who actually collect the information, must be confident that the data on individuals will not leak out. “Protecting the identifiable information is paramount in this,” he says. “One mistake unravels everything.”
In the 1980s, with Pfeiffer playing key roles along the way, Florida began building what is today one of only four state education data systems that meet all 10 criteria of the Data Quality Campaign. The other states in that category are Arkansas, Delaware, and Utah.
Those four states have information, for instance, from student transcripts and can match student records among the elementary, secondary, and postsecondary levels.
Recent papers by Douglas N. Harris and Tim R. Sass, economists at Michigan State University and Florida State University, respectively, dig into the Florida data for student learning gains that might be linked to teacher characteristics: college entrance- or placement-test scores, training, and certification by the National Board for Professional Teaching Standards.
The researchers found that undergraduate teacher preparation has little influence on student achievement, though content-focused professional development appears to help middle and high school math teachers raise scores. Harris and Sass discovered no evidence that teachers鈥 college entrance- or placement-test scores affected their students鈥 achievement gains.
On national certification, they concluded that the voluntary credential’s ability to indicate teacher effectiveness as measured by student test scores was “highly variable.” Also, the process of becoming nationally certified does not appear to boost teacher effectiveness, nor do nationally certified teachers appear to influence their colleagues in that regard.
The economists’ paper is just one example of what can be learned through such data.
“There’s lots of research potential in these databases,” Pfeiffer says.
So much potential, he continues, that the education and research communities need to find ways to work together to realize it. Ultimately, students will be the winners.