A group of states that is designing tests for the common academic standards has taken a key step to ensure that the assessments reflect students' readiness for college-level work: It gave top higher education officials from member states voting power on the test-design decisions that cut closest to the heart of college readiness.
At its quarterly meeting on April 3, the governing board of the Partnership for Assessment of Readiness for College and Careers, or PARCC, voted unanimously to give members of its advisory committee on college readiness voting power on four issues: how to describe the expected performance levels on the tests, who will set the cutoff scores for the tests, what evidence will be used to decide the cutoff scores, and, crucially, what the cutoff scores will be.
The move puts the highest-ranking officials from one college or university system in most of PARCC's 24 member states at the voting table, alongside its governing board (the K-12 schools chiefs from each member state), when it comes to the most pivotal questions about crafting tests that reflect college readiness.
Richard M. Freeland, the commissioner of higher education in Massachusetts and co-chairman of PARCC's college-readiness advisory committee, told the governing board that getting an active voice in the test-shaping process was something "we enthusiastically endorse and are happy to put our energy behind."
The consortium is "taking a huge step in operationalizing" a definition of college readiness that reflects higher education's expectations, Mitchell D. Chester, the commissioner of K-12 education in Massachusetts and the chairman of PARCC's governing board, told the meeting participants.
Support Pivotal
PARCC's decision illustrates the importance that states are placing on higher education's embrace of the common-standards tests as proxies for college readiness. Colleges and universities have pledged support for the idea. But their willingness to actually use the final tests as proxies for readiness, letting students skip remedial work and go straight into entry-level, credit-bearing courses, is considered pivotal to the success of the common-standards initiative, which rests on the idea that mastery of those expectations will prepare students for college study.
"This verges on being historic," said David T. Conley, an Oregon researcher widely known for his work to define college readiness. "In the U.S., on this scope and scale, it's unprecedented to have this level of partnership between postsecondary systems and high school on a measurement of readiness."
PARCC and another group of states, the SMARTER Balanced Assessment Consortium, have $360 million in federal Race to the Top money to design assessment systems for the Common Core State Standards. The standards, which cover English/language arts and mathematics, have been adopted by 46 states and the District of Columbia.
When the U.S. Department of Education offered test-design funding to groups of states in April 2010, it asked for assessment systems that could serve many purposes. Those include measuring student achievement as well as student growth, judging teacher and school performance, offering formative feedback to help teachers guide instruction, and gauging whether students are ready, or on track to be ready, to make smooth transitions into college and good jobs.
Leaders of both consortia recognize that much is riding on the support of higher education, since the common-standards initiative rests on the claim that mastery of the standards, and passage of tests that embody them, indicate readiness for credit-bearing, entry-level coursework. If colleges decline to use the tests to let students skip remedial work, that could undermine the claim that the tests reflect readiness for credit-bearing study.
That thinking was woven through the Education Department's initial invitation to the states to band together to design the tests. To win grants in that competition, the consortia had to show that they had enlisted substantial support from their public college and university systems. Both did so.
The Challenge of Consensus
Whether those higher education systems maintain their support for the final tests remains to be seen, however. Skeptics have noted that getting states' K-12 systems and their diverse array of college and university systems to agree on cutoff scores that connote proficiency in college-level skills, for instance, will be challenging.
"This cut-score thing is going to be a nightmare," Chester E. Finn Jr., the president of the Thomas B. Fordham Institute, a Washington think tank, said at an August 2010 meeting of the National Assessment Governing Board, which sets policy for the National Assessment of Educational Progress, or NAEP. "I'm trying to envision Georgia and Connecticut trying to agree on a cut score for proficiency, and I'm envisioning an argument."
PARCC's college-readiness committee will not only vote on test-design issues; it already plays an active role in the consortium's strategy of engaging higher education colleagues in dialogue about the assessment and enlisting their support, PARCC officials said. The consortium's higher education leadership team, which includes additional college and university leaders, is also playing a leading role in that dialogue and engagement.
The SMARTER Balanced Assessment Consortium's nine-member executive committee includes two higher education representatives with full voting power: Charles Lenth, the vice president for policy analysis and academic affairs for the State Higher Education Executive Officers, a Boulder, Colo.-based group, and Beverly L. Young, the assistant vice chancellor of academic affairs for the California State University system.
In addition, the consortium has appointed higher education representatives from each member state to provide input into test development and coordinate outreach to colleges and universities in their states. Higher education representatives also take part in 10 "work groups" that focus on key issues, such as psychometrics, technology, and accessibility and accommodations.
The consortium's governance structure "is designed to ensure input from higher education through representation on the executive committee, collaboration with higher education state leads, and participation in state-led work groups," said consortium spokesman Eddie Arnold.
Mr. Conley, who advises the SMARTER Balanced group, said it is important to have higher education representatives at the table during test design to create a shared concept of the skills necessary for college success and of how to measure them on a test. But he cautioned that those ideas must also have the support of college faculty members, not just their leadership, if the idea of shared standards is to succeed.
Discussion at the PARCC governing board meeting offered hints about the difficulty of getting consensus on critical issues of test design.
Soliciting feedback from board members, Mary Ann Snider, Rhode Island's chief of educator quality, asked how many performance levels they thought the tests should have: three, four, five, or some other number. Most members voted for four levels, largely mirroring current practice in most PARCC states. Ms. Snider also asked when indicators of being "on track" for college readiness should first appear on test results: in elementary, middle, or high school. Most members voted for elementary school.
She also asked whether the tests should show only how well students have mastered material from their current grade levels, or how well they've mastered content from the previous grade level, too. Responses came back deeply divided.
Bumpy Road Ahead
That question explored an important part of the dialogue about the new assessments: how to design them so they show parents, teachers, and others how students are progressing over time, rather than provide only a snapshot of a given moment. But the prospect of having a given grade's tests reflect students' mastery of earlier grades' content raised some doubts on the board.
"If I'm a 5th grade teacher, am I now responsible for 4th grade content in my evaluation?" asked James Palmer, an interim division administrator in student assessment at the Illinois state board of education.
Gayle Potter, the director of student assessment in Arkansas, said it's important to give parents and teachers information about where students are in their learning. But she also said she worried about "giving teachers mixed signals" about their responsibility for lower grades' content.
Some board members noted that indicators of mastery of the previous year's content would be helpful in adjusting instruction. But others expressed doubt about whether a summative test was the best way to do that. Perhaps, they said, that function is better handled by other portions of the planned assessment system, such as its optional midyear assessments.