New York City is collecting data to measure the performance of 2,500 teachers based on how well their students perform on tests, but the experiment is under fire from the local teachers’ union.
Under the pilot project at 120 of the city’s 1,400 schools, teachers are being measured on how many of their students make the progress predicted for them, on how their performance compares with that of peers who have similar students and similar experience, and on how it compares with that of teachers citywide. The value-added model controls for such characteristics as class size, the number of special education students and English-language learners, and classroom-discipline issues.
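The article does not spell out the model’s mechanics, but the underlying idea can be sketched briefly. The sketch below is only illustrative and assumes a simplified value-added approach: predict each student’s score from prior performance and classroom characteristics, then credit the teacher with the average gap between actual and predicted scores and count how many students beat their prediction. All data, teacher counts, and coefficients are invented for the example.

```python
# Minimal sketch of a value-added calculation; not the district's actual model.
import numpy as np

rng = np.random.default_rng(0)
n_students = 500

# Illustrative covariates: prior-year score, class size, and shares of
# special-education and English-language-learner students (all invented).
prior_score = rng.normal(650, 40, n_students)
class_size = rng.integers(18, 32, n_students)
sped_share = rng.uniform(0.0, 0.3, n_students)
ell_share = rng.uniform(0.0, 0.4, n_students)
teacher_id = rng.integers(0, 20, n_students)          # 20 hypothetical teachers

# Simulated current-year score, including a hidden per-teacher effect.
teacher_effect = rng.normal(0, 5, 20)
score = (0.9 * prior_score + 50 - 0.5 * class_size
         - 20 * sped_share - 15 * ell_share
         + teacher_effect[teacher_id] + rng.normal(0, 10, n_students))

# Predict each student's score from the covariates alone (no teacher information).
X = np.column_stack([np.ones(n_students), prior_score, class_size, sped_share, ell_share])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
predicted = X @ beta

# "Value added": mean of (actual - predicted) per teacher, plus the count of
# students who exceeded their predicted score.
for t in range(20):
    mask = teacher_id == t
    residuals = score[mask] - predicted[mask]
    print(f"teacher {t:2d}: value added {residuals.mean():+5.1f}, "
          f"{int((residuals > 0).sum())} of {mask.sum()} students above prediction")
```

The district’s actual model is described as more elaborate than this single citywide regression, since it also compares each teacher with peers who have similar students and similar experience.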
Officials in the 1.1 million-student district say they are not sure exactly how they will use the data they collect, or whether the information will be used to evaluate teachers or to make tenure decisions.
Christopher Cerf, the deputy schools chancellor spearheading the project, said officials are attempting to find ways to close achievement gaps between students of different backgrounds. Research has shown that struggling students paired with high-performing teachers tend to do better than those taught by low-performing teachers, he said.
Measuring Quality
While teachers’ SAT scores, pathways into teaching, and competency-test scores reveal little about how good they are, Mr. Cerf added, student performance can be a powerful indicator of teacher effectiveness.
“I am unapologetic in considering that student outcomes are important in measuring the quality of a teacher,” he said.
But the United Federation of Teachers has raised a cry against the pilot project, saying it is unfair to teachers and harmful to children.
Randi Weingarten, the president of the American Federation of Teachers affiliate, called the project “a terrible thing to do to kids,” because it will lead to teachers’ focusing more intensely on preparing their students to perform better on tests, to the exclusion of other activities that help make up a well-rounded education.
The city’s department of education is keeping the names of the schools where the data is being collected confidential, because of an agreement with the principals. Ms. Weingarten, who criticized the secretive nature of the pilot, said requests from the union for that information had been denied.
The union knew about the experiment before it began, she said, but refused to participate other than to send two representatives to a technical expert panel for the initiative. The UFT did not object to the experiment publicly, however, until a story revealing the initiative appeared on the Jan. 21 front page of The New York Times.
“This is a fundamentally flawed way of thinking about teaching and learning, it is terrible educationally, incredibly unfair to schoolteachers, and it is legally and contractually invalid,” Ms. Weingarten said.
Thomas Toch of Education Sector, a Washington-based think tank, sounded a note of caution about using test scores to judge teachers. More than half of all public school teachers do not teach subjects that are tested, he said, and most tests focus on low-level skills.
“As a result, using test scores to evaluate teachers gives an advantage to those whose teaching skills are focused on teaching only low-level skills,” Mr. Toch said. “Such a system puts at a disadvantage teachers who are able to help students master a wider range of skills.”
As part of the pilot project, principals at a separate set of 120 schools are conducting subjective evaluations of roughly the same number of teachers as are being measured with the student-performance data.
“I am trying to find out if there is a correlation between how effective a principal thinks a teacher is and what the data shows,” Mr. Cerf said.
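A hedged sketch of the comparison Mr. Cerf describes, assuming the principals’ subjective ratings sit on a simple numeric scale and the value-added estimates come from a model like the one sketched above; every figure here is invented:

```python
# Minimal sketch: correlate principals' subjective ratings with value-added
# estimates for 120 hypothetical teachers. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
value_added = rng.normal(0, 5, 120)                               # one estimate per teacher
principal_rating = 3 + 0.1 * value_added + rng.normal(0, 1, 120)  # rough 1-5 style rating

r = np.corrcoef(principal_rating, value_added)[0, 1]
print(f"correlation between principal ratings and value-added scores: {r:.2f}")
```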