Teacher-training exams have been subject to many criticisms: that there are too many of them, that their content isn’t relevant, and that their costs are objectionable. New data open a further avenue for criticism: They’re too easy.
Every state sets the passing score on its teacher-licensing tests below the mean score of the pool of test-takers, according to a federal analysis released recently, suggesting that the exams pose little challenge to many of the individuals taking them.
The data confirm a 2012 Education Week analysis showing similar gaps in a sample of states.
Released in an annual report issued this month by the U.S. Department of Education, the data compare the average passing scores on each state’s teacher exams against the average performance of candidates taking those tests. A clear pattern emerges of tests that, on the whole, most teachers pass partly because of where states set the bar, even as multiple groups call on states to institute policies to recruit academically stronger candidates.
Excluding the U.S. Virgin Islands, gaps between cutoff scores and the average score of test-takers range from a low of 10.1 points, in Arizona, to 22.5 points, in Nebraska. For the nation as a whole, the average certification-test passing score is set nearly 15 points below the mean score of candidates.
The gaps aren’t strictly comparable from state to state, because of differences in the subjects and certification fields tested and in the tests’ scales. Many states use the Princeton, N.J.-based Educational Testing Service’s Praxis series for their licensing tests, and others use state-specific exams designed by Evaluation Systems Group, a Pearson entity based in Hadley, Mass.
States also administer the tests at different points, typically requiring candidates to pass some exams before entering a preparation program and others before receiving a teaching certificate. Finally, test content varies, though the exams often measure knowledge beneath the college level.
The federal data represent test-taking from the 2009-10 year.
New Rules
The analysis was made possible by new provisions in federal law. In its 2008 rewrite of the Higher Education Act, Congress directed states to begin reporting both the passing rate and the average scaled score of all test-takers on each teacher examination. (A scaled score converts a candidate’s raw performance on the exam to a common scale, which facilitates year-to-year comparisons.)
States and individual higher education institutions are required to publish that information, along with many other details on teacher preparation, in annual “report cards” to the public.
Though licensing-test cutoff scores have, in general, risen in recent years, states have instituted many more tests to meet requirements in the Elementary and Secondary Education Act to staff all core-content classes with a “highly qualified” teacher, one who demonstrates subject-matter knowledge, among other things.
Some officials say the gaps are expected because the tests aren’t meant to do more than prevent the weakest candidates from teaching.
“These tests are a measure of minimum content knowledge. They’re not designed or validated to say that if you score significantly higher, you’re going to be a better teacher,” said Phillip S. Rogers, the executive director of the National Association of State Directors of Teacher Education and Certification, in Washington, which represents individuals who direct licensing or sit on states’ teacher-standards boards. “What it does mean is that the person who passes the test has the minimum content knowledge that the jurisdiction thinks is necessary.”
The Education Department report notes that the gaps could be the result of other factors, too.
“It is also possible that a small gap ... signals relatively low-performing test-takers and a large gap signals relatively high-performing test-takers,” it says.
It is not possible to know the relative difficulty of the exams without knowing the spread of scores on the tests’ scale, such as what percentage of test-takers scored in the bottom quartile. States do not have to report that information on their report cards.
Seeking Comparability
But teacher-educators said the report does raise questions about how to make the data more transparent and comparable.
“Cut scores on state teacher-licensing tests do vary widely across states, and we need more consistency,” said Sharon P. Robinson, the president of the American Association of Colleges for Teacher Education, a Washington group that represents about 800 institutions.
The group supports efforts to establish common cutoff scores across states, which would “provide consistency across the profession in terms of expectations for candidates’ performance on these exams,” Ms. Robinson said. “However, multiple-choice and selected-response tests will not answer the most essential question: ‘Is a new teacher ready for the job?’”
AACTE has been working with a Stanford University center and about half the states to pilot an exam that purports to measure teacher-candidates’ classroom readiness, based in part on their student-teaching performance.