Three state consortia will vie for $350 million in federal financing to design assessments aligned to the recently unveiled common-core standards, according to applications submitted Wednesday to the U.S. Department of Education.
Part of the Race to the Top program, the competition aims to spur states to band together to create measures of academic achievement that are comparable across states.
Two consortia, the SMARTER Balanced Assessment Consortium, which consists of 31 states, and the Partnership for the Assessment of Readiness for College and Careers, or PARCC, which consists of 26 states, will compete for the bulk of the funding, $320 million, to produce comprehensive assessment systems.
Potentially signaling a shift away from the multiple-choice questions that dominated tests in the wake of the No Child Left Behind Act, both consortia would combine results from performance-based tasks administered throughout the course of the school year with a more traditional end-of-the-year measure for school accountability purposes.
Read the applications from: the SMARTER Balanced Assessment Consortium; the Partnership for the Assessment of Readiness for College and Careers; and the State Consortium on Board Examination Systems.
State officials "wanted to make sure that the assessments were actually signaling appropriately the kind of instruction that teachers were expected to engage in and performance students were expected to be able to do," said Michael Cohen, the president of Achieve, a Washington-based nonprofit group that is a project-management partner for the PARCC consortium. "They didn't want a bunch of bubble tests to drive instruction."
Both consortia also plan to administer their year-end assessments via computer. But only the SMARTER Balanced group would use "computer adaptive" technology, which adjusts the difficulty of test questions in relation to a student's responses, as the basis of that year-end test.
The consortia also propose to provide participating states with formative-assessment tools and data-management systems to help administrators and parents access student-performance information over the course of the year and to help teachers intervene and adjust instruction as it occurs.
'SMARTER' BALANCED ASSESSMENT CONSORTIUM (31 STATES)
Procurement state: Washington
Governing states: Connecticut, Hawaii, Idaho, Kansas, Maine, Michigan, Missouri, Montana, Nevada, New Mexico, North Carolina, Oregon, Utah, Vermont, Washington, West Virginia, Wisconsin
Participating states: Alabama, Colorado, Delaware, Georgia, Iowa, Kentucky, New Hampshire, New Jersey, North Dakota, Ohio, Oklahoma, Pennsylvania, South Carolina, South Dakota
Key Design Elements: Summative assessment will be based on computer-adaptive technology. Results will be coupled with those from performance-based tasks administered over the course of the year.
Performance tasks will include two in English/language arts and two in mathematics in grades 3-8 and up to six by grade 11 for both subjects. Tasks will be delivered by computer and will take one to two class periods to complete.
Consortium will support development of optional formative and interim/benchmark assessments that align to performance tasks, and an interface for parents, teachers, and administrators to access information on student progress.
PARTNERSHIP FOR THE ASSESSMENT OF READINESS FOR COLLEGE AND CAREERS (26 STATES)
Procurement state: Florida
Governing states: Arizona, District of Columbia, Florida, Illinois, Indiana, Louisiana, Maryland, Massachusetts, New York, Rhode Island, Tennessee
Participating states: Alabama, Arkansas, California, Colorado, Delaware, Georgia, Kentucky, Mississippi, New Hampshire, New Jersey, North Dakota, Ohio, Oklahoma, Pennsylvania, South Carolina
Key Design Elements: Summative test will be delivered by computer and results coupled with those from performance-based tasks administered over the course of the year.
Performance tasks will include three in English/language arts and three in mathematics.
Benchmarks will be designed so that stakeholders can determine whether students at each grade are on track to be prepared for college or for careers.
Consortium will support development of interface for parents, teachers, and administrators to access information on student progress.
STATE CONSORTIUM ON BOARD EXAMINATION SYSTEMS (12 STATES)
Procurement state: Kentucky
Governing states: Arizona, Connecticut, Kentucky, Maine, Massachusetts, Mississippi, New Hampshire, New Mexico, New York, Pennsylvania, Rhode Island, Vermont
Key Design Elements: Consortium will adapt board examination systems from other countries to align to the common-core standards.
Plans include at least three board examination systems in lower-division high school grades and five in the upper division, in English, math, science, and history, as well as in three career and technical occupational groupings.
SOURCE: Education Week; Consortia Applications
A band of 12 states is the only contender for a separate, smaller $30 million competition earmarked by the Education Department to support specific exams aligned to high school grades or courses.
Similarities and Differences
The federal competition initially gave rise to six assessment consortia, but those consortia merged into three before the final applications were due. ("States Rush to Join Testing Consortia," Feb. 3, 2010.)
Education Week obtained the three proposals from the consortia in advance of the application deadline, after officials at the Education Department said they could not make the applications immediately available online. The Education Department also received a fourth application, from a Texas-based organization called Free to Be, but that application listed no states as consortium members, a required eligibility criterion for the competition.
Experts familiar with the applications noted the similarities between the two larger consortia鈥檚 submissions.
"They look a whole lot alike," said Scott Marion, the associate director of the Dover, N.H.-based Center for Assessment and a consultant to officials in both the SMARTER Balanced and PARCC groups. "They started with very different visions and ended up converging."
For instance, both the PARCC and SMARTER Balanced consortia envision a system that couples a year-end assessment with several performance-based tasks, or "through-course assessments," that take place over the course of the school year.
Those tasks, the applicants wrote, reflect an emphasis in the federal competition guidelines and in the work of the Common Core State Standards Initiative on measuring students' ability to synthesize, analyze, and apply knowledge, not merely recall it.
And although both consortia would use some form of selected-response questions on their year-end accountability measures, they underscored that their states would explore the use of "technology enhanced" items that gauge higher-order critical-thinking abilities, rather than rely solely on multiple-choice questions that don't lend themselves to measuring those skills.
Such abilities might be measured, for instance, by using items that require students to interact with on-screen features, such as a graph.
In one key difference between the two proposals, the SMARTER Balanced group plans to employ computer-adaptive technology in some of its measures rather than a traditional fixed-form test. A number of states have experimented with adaptive-test technology, but only Oregon now uses it to meet the NCLB law's annual testing requirements.
Joe Willhoft, the assistant superintendent of assessment and student information for Washington, the state applying on behalf of that consortium, said the technology helps meet the federal competition's stipulation that the common tests cover the breadth and depth of the common standards and provide equally accurate information on both low- and high-achieving students.
"Relatively quickly, the adaptive engine can find questions that are appropriate to a student's level of performance and get a measurement of precision," Mr. Willhoft said. "Without adaptive testing, it's pretty hard to imagine how you could develop a test that's long enough for that."
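The item-selection loop Mr. Willhoft describes can be sketched in a few lines. This is a simplified illustration of how a computer-adaptive test homes in on a student's level, not either consortium's actual engine: the item bank, step size, and ability-update rule below are invented for the example.

```python
# A minimal computer-adaptive testing sketch: after each response, pick the
# next question whose difficulty best matches the current estimate of the
# student's ability, then nudge that estimate up or down.
# All numbers here (difficulties, step size) are illustrative assumptions.

def run_adaptive_test(item_bank, answer, n_items=5, start_ability=0.0, step=0.8):
    """item_bank: list of item difficulties; answer(difficulty) -> True/False."""
    ability = start_ability
    remaining = sorted(item_bank)
    asked = []
    for i in range(n_items):
        # Choose the unused item closest in difficulty to the ability estimate.
        item = min(remaining, key=lambda d: abs(d - ability))
        remaining.remove(item)
        correct = answer(item)
        # Move the estimate up after a correct answer, down after a miss;
        # shrinking the step lets the estimate settle as precision improves.
        ability += step / (i + 1) if correct else -step / (i + 1)
        asked.append((item, correct, round(ability, 2)))
    return ability, asked

# Example: a simulated student who answers correctly whenever difficulty <= 1.0,
# so the estimate should converge toward that threshold.
bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
final, trace = run_adaptive_test(bank, lambda d: d <= 1.0)
```

Because each item is chosen near the current estimate, a short adaptive test can place both low- and high-achieving students accurately, which is the rationale Mr. Willhoft cites for meeting the competition's breadth-and-depth requirement.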
In addition, the SMARTER Balanced group plans to invest significantly in developing "interim" and formative assessments that would align to the performance-based tasks. Educators would use those tools to gauge student progress and to pinpoint areas of instructional weakness, not for school accountability.
"Our theory is that [a] summative [assessment] alone cannot deliver all the information to have actionable data in the hands of teachers," said Susan Gendron, the education commissioner in Maine, one of the governing states in the consortium, at the Council of Chief State School Officers' recent assessment conference in Detroit. "So we will develop interim and formative tools teachers can use to look at learning progressions and where a student is at a given moment on that continuum."
The PARCC group's main goal would be to devise an instrument for making judgments, one that helps determine whether students are able to succeed in college without remediation or do well in entry-level jobs. It plans to expend less effort overall on devising the formative-assessment pieces and supports, though it would help educators make use of its instruments and released test items for instructional purposes.
Both consortia also plan to build systems for sharing data with educators, parents, and teachers throughout the school year to help them know whether students are on track to reach benchmarks.
Despite the overall similarities in the proposals, officials involved in both consortia said there was no concerted attempt to merge them into just one.
They did say, however, that they planned to work together in some areas, such as devising common benchmarks indicating when a student has met standards and devising methods for comparing student performance across all the states.
And a handful of states, including Alabama, Ohio, and South Carolina, are participating in both of the large consortia, but aren't yet part of the governing body of either one.
Scoring Glitches
If both of the major consortia were to win grants, they could expand teacher-scored assessments to a scale not seen before in public education. Though a handful of states have experimented with such scoring, most states discarded the practice in the wake of the NCLB law.
Of the two consortia vying for grants to create comprehensive assessment systems, the SMARTER Balanced one takes a stronger approach toward the teacher scoring of assessments. That group views teacher scoring as a critical part of professional development for teachers that would help them recognize when students' work products show evidence that they've mastered standards.
Under its proposal, teachers would score the performance events and some open-ended questions, supplemented by "artificial intelligence" computer-scoring software. Teachers also would audit a sample of the graded exams, and they would help score interim or benchmark assessments.
The PARCC group envisions a similar system combining both computer and human scoring, but it would allow states to determine whether teachers would participate in human scoring or whether test vendors would do it.
Some states, like New York with its regents exams, have much more experience with teacher-scoring practice than others, Achieve's Mr. Cohen noted.
"Given the histories, traditions, and costs in different states, it seemed sensible to leave that up to states to decide or districts to decide rather than to make a uniform decision across the states," he said.
Neither group, however, would permit teachers to score their own students' summative-exam results.
The PARCC application notes that some states may decide to eschew teacher scoring if they choose to use the results from the standardized tests to gauge teacher and principal effectiveness.
High School Assessments
Both the SMARTER Balanced and the PARCC consortia seem well poised to receive 20 additional competitive points in the Race to the Top competition for attaining "buy-in" from higher education. Each secured many commitments from public colleges and universities to use the results of high-school-level assessments developed by the consortia to place students into credit-bearing courses.
According to the PARCC consortium鈥檚 application, 184 institutions of higher education across the states that represent 90 percent of direct-matriculation students have agreed to do so. In the SMARTER Balanced states, 162 institutions signed on, representing 74 percent of direct-matriculation students.
The consortium that envisions the most far-reaching changes to the structure of the high-school-to-college pipeline is the sole applicant for the smaller high school assessment competition.
The State Consortium on Board Examination Systems, a group of 12 states, seeks to move high schools from the Carnegie unit system, based largely on seat time and credit hours, to one in which students are given options after mastering a performance standard based on a board examination.
States would adopt such exams from among a choice of examples from around the world. The consortium would help align the exams to the common standards, but would not create brand-new tests.
Participating states have signed memorandums of understanding committing to pilot the system in select schools. They must also agree to offer a new diploma as early as the sophomore year of high school to those students who pass lower-division exams offered that year.
After passing such exams, students could go directly to open-admission colleges without needing to take remedial classes; follow a career and technical education pathway; or continue on in high school to pass higher-division exams in preparation for entry into selective colleges.
Marc S. Tucker, the president of the National Center on Education and the Economy, the project management partner for the high school consortium, said that the board examination systems come complete with other elements, such as curricula, to raise the level of instruction.
"What we're offering is very high-quality assessment that is directly linked to curriculum, directly linked to teacher training," said Mr. Tucker, whose nonprofit organization works to improve the linkages between education and the workplace. "It's not just the assessment we're adapting, it's the entire instructional system."
But it would not pave the way to a common curriculum, since states could choose which board examination system to adopt, he added.
Before the RTT application deadline, a panel of state career and technical education officials had filed an intent to compete in the high school competition. But that group ultimately decided not to advance a proposal, according to Kimberly A. Green, the executive director of the National Association of State Directors of Career Technical Education Consortium, a Silver Spring, Md., membership group.
Ms. Green said the officials withdrew in part because of the short time frame for submitting a proposal and a lack of capacity among interested states.
But it was also because the smaller competition was not dedicated to assessment of CTE-related skills and contexts, she said.
The competition "still required you to do two academic tests, and nobody could quite figure out how it differed from [the larger competition]," she said. "CTE was a competitive priority, but it was just a small piece."
Her group and several of the states instead will participate in the CTE component of the SCOBES consortium鈥檚 work.
Next Steps
With the applications in, the competition now lies in the hands of the Department of Education. It had planned to award up to two comprehensive assessment-system grants and one high school assessment grant under the competition.
There is much enthusiasm for the new efforts, but already some experts have concerns. Mr. Marion of the Center for Assessment, for one, says there isn't much of a knowledge base to determine how the new performance tasks would work in practice.
For instance, it's unclear how heavily scores on the performance-based tasks should be weighted in the overall assessment score, a key point that neither of the large consortia addressed in its application.
"We don't have at this point enough understanding about how to do these through-course assessments and incorporate them validly and fairly into summative judgments," Mr. Marion said. "It's a good idea, but there is a way to go to put them into practice."
Wayne Camara, the vice president for research and development at the College Board, said at the Detroit CCSSO meeting that the lack of funding for assessment research in general hampers any effort to develop sound, large-scale new tests. That situation is exacerbated by the fast timeline that the RTT program requires, with tests fully operational by the 2014-15 school year.
He questioned whether the timeline allows sufficient time to field-test and pilot the new tests and cautioned that developing and using them too quickly could pose significant risks.
"Research in isolation from scale-up" is what's needed, Mr. Camara said.
Even before then, states must figure out how to reach agreement on outstanding details. Mr. Cohen of Achieve conceded that the PARCC proposal is still at "a high level of generality," and that if the group wins a grant, much work remains to iron out details on the essential features of the assessment system.
"Reaching and sustaining consensus among a large number of states, when you get down to details of test design and administration, is not an easy thing to do," he said. "We learned that with [the American Diploma Project's algebra assessments], ... and this is much more challenging."