In a move that could reshape academic assessment in nearly every corner of the country, the U.S. Department of Education has awarded $330 million in grants to collaboratives of states to design better ways of measuring student learning.
The grants, awarded Sept. 2, went to two groups of states that sought the money under the federal Race to the Top program, spawned last year by the federal economic-stimulus law. Through the two consortia, 44 states and the District of Columbia are taking part in shaping the new assessments, which are scheduled to debut in the 2014-15 school year.
Dividing the money almost equally, the consortia must build testing systems that gauge how well students master a newly crafted set of common academic standards in mathematics and English/language arts that have been adopted so far, at least provisionally, by 36 states and the District of Columbia. Those testing systems also must be capable of producing rapid feedback for teachers on students' learning and of measuring the effectiveness of teachers, schools, and school leaders.
A third group of 12 states applied for a separate $30 million pot for development of high school exams but did not get a grant.
Two consortia, or groups, of states have won Race to the Top grants to design new K-12 systems of assessment to be rolled out in the 2014-15 school year.
Partnership for the Assessment of Readiness for College and Careers Consortium
• MEMBERSHIP: 26 states
• AWARD: $170 million
• PROCURING STATE: Florida
• GOVERNING STATES: Arizona, the District of Columbia, Florida, Illinois, Indiana, Louisiana, Maryland, Massachusetts, New York, Rhode Island, Tennessee
• ADVISORY STATES: Alabama, Arkansas, California, Colorado, Delaware, Georgia, Kentucky, Mississippi, New Hampshire, New Jersey, North Dakota, Ohio, Oklahoma, Pennsylvania, South Carolina
SMARTER Balanced Assessment Consortium
• MEMBERSHIP: 31 states
• AWARD: $160 million
• PROCURING STATE: Washington
• GOVERNING STATES: Connecticut, Hawaii, Idaho, Kansas, Maine, Michigan, Missouri, Montana, Nevada, New Mexico, North Carolina, Oregon, Utah, Vermont, Washington, West Virginia, Wisconsin
• ADVISORY STATES: Alabama, Colorado, Delaware, Georgia, Iowa, Kentucky, New Hampshire, New Jersey, North Dakota, Ohio, Oklahoma, Pennsylvania, South Carolina, South Dakota
Note: PROCURING STATES are the fiscal agents. GOVERNING STATES shape test-design policy. ADVISORY STATES consult on test design, but have no decisionmaking authority.
SOURCE: State Consortia
In a speech announcing the winners, Secretary of Education Arne Duncan lauded the competition for its potential to be a transformative force in education by giving teachers immediate, meaningful feedback on students' understanding and leaving "bubble tests" behind to measure a more complex range of student skills. He also said the Obama administration hopes to see common tests of other subjects, such as science and history, created in the future.
"I am convinced that this new generation of state assessments will be an absolute game-changer in public education," Mr. Duncan said at a meeting of education policymakers in Alexandria, Va.
Only two consortia competed for the bigger pot of funding. The Partnership for the Assessment of Readiness for College and Careers, or PARCC, which consists of 26 states, won $170 million. The SMARTER Balanced Assessment Consortium, or SBAC, which includes 31 states, won $160 million. A dozen states belong to both groups, but most were expected to choose one eventually.
Common Features
Analysts have noted that the two winning groups' plans have much in common.
Both say they would combine the results from performance-based tasks administered throughout the academic year with a more traditional end-of-the-year measure for school accountability purposes. Both also plan to administer their year-end assessments via computer, but only the SMARTER Balanced group would use "computer adaptive" technology, which adjusts the difficulty of questions to students' responses, as the basis of that year-end test.
Both seek to reflect not simply students' skill at factual recall, but also their strength at analyzing material and applying their knowledge. But the SBAC group intends to focus more effort on devising interim and formative assessments to capture students' progress as they learn. ("Three Groups Submit Applications for Race to Top Assessment Grants," July 14, 2010.)
A panel of nine peer reviewers, convened by the Education Department, evaluated the applications. On scoring guides totaling 220 points, the panel awarded 164 points to the Partnership consortium and 151 points to the SMARTER Balanced group. PARCC won more money only because it requested more, not because it scored higher than its rival, according to department spokeswoman Sandra Abrevaya.
In evaluating the proposal for high-school-level tests, submitted by the State Consortium on Board Examination Systems, or SCOBES, the reviewers gave it 126 points out of 220. The proposal drew widely varying reactions from among the nine reviewers.
The SCOBES group had hoped to help states and districts offer "highly integrated" systems of instruction that included a core curriculum with course syllabuses, exams derived from those course outlines, and professional development. That idea was spearheaded by Marc S. Tucker, the president of the Washington-based National Center on Education and the Economy. Noting that the project predated the Race to the Top, Mr. Tucker said the 12 states involved plan to seek other sources of funding and move ahead.
Ms. Abrevaya said the department would not repeat the high school exam competition. Mr. Duncan, in a conference call with reporters, noted that the winning proposals by the two larger consortia both include a high school component.
Building Consensus
Observers caution that many challenges remain as the winning consortia begin the complex work of turning ideas into new assessment systems.
In the last few months, both groups worked to refine their governance structures and their test-design ideas. But now they move into a more complicated phase of inviting and evaluating proposals, hiring additional staff members, and juggling the dozens of people involved in the effort, from state education department officials to project managers and vendors. Even scheduling meetings, whether actual or virtual, will take on new degrees of difficulty.
"It won't be easy getting everyone into the big tent," said Tony Alpert, who is a co-chairman of the SMARTER Balanced consortium's executive committee. "To put together even one meeting, we have to work across five time zones."
The consortia will need to build the right types of expertise into their work, said Michael Cohen, the president of Achieve, a Washington-based group that is the project-management partner for the PARCC consortium. His group, for instance, will need to build capacity in project management, quality assurance, assessment development, and engagement of precollegiate and college-level education systems, among other areas, he said.
Nurturing and maintaining consensus among consortium members will be a key challenge as well, assessment experts say.
Scott Marion, the associate director of the Dover, N.H.-based Center for Assessment, said there will be plenty of nitty-gritty issues to work through. Can states agree, for instance, on what the testing window will be for all the assessment pieces? Will they all agree to the same policies in making accommodations for students with disabilities or for English-language learners?
"There are all sorts of things like that, all these nagging details when you go from ideas to operation," said Mr. Marion, whose group helped all three consortia as they shaped their proposals.
Mr. Cohen recalled an effort nine years ago to craft a common Algebra 1 test for a group of Southern states. It didn't take long, he said, for state assessment directors to "agree that they couldn't agree" on a host of issues, including when to administer the test.
Another unresolved question facing the consortia concerns the extent to which states can be granted freedom to vary their approaches to the common assessments.
"It's hard to get 30 states to agree on anything," said Gary W. Phillips, a vice president and chief scientist at the American Institutes for Research, based in Washington. "By the nature of the states and their natural independence, they will want to have a lot of flexibility, but you can't have things that are common and also have a lot of flexibility."
Multipurpose Measures
The consortia will grapple, too, with the tension inherent in trying to use one assessment for multiple purposes, observers say.
The Education Department鈥檚 invitation to applicants sought systems of assessments that would be used to take a snapshot of students鈥 achievement, measure their growth over time, gauge their college and career readiness, and evaluate the effectiveness of teachers, principals, and schools. The tests are supposed to produce summative information for accountability purposes, as well as feedback to help teachers adjust instruction in real time.
"The more additional purposes you add to an assessment system, the less likely you are to be able to fulfill those additional functions well," said W. James Popham, a professor emeritus of education at the University of California, Los Angeles, who focuses on assessment.
He added that it is possible to design a system both to evaluate schools and to provide instructional feedback to teachers, but he ventured that it will be difficult in an assessment industry dominated by "traditionalists" accustomed to making tests that compare students with one another.
As the consortia design their assessments, Congress is expected to rewrite the Elementary and Secondary Education Act, which is already long overdue for reauthorization. That could put the assessment consortia in the position of having to adjust their work to make their products usable for federal accountability purposes, a number of experts noted.
That is one reason why the Race to the Top assessment grants are being handled as "cooperative agreements," Education Department officials said. That arrangement allows the state consortia to make, or the federal government to demand, changes in the work agreement in response to changing circumstances, according to those in the department and the consortia.
Rather than simply supplying "deliverables" to fulfill a grant, consortium members will have a more interactive, involved relationship with Education Department officials as they complete the cooperative agreements, consortium insiders said.