RICHARD J. STIGGINS is widely known as an advocate of classroom assessments in the service of student learning. His long career in testing brought him to that vantage point: He holds a doctorate in educational measurement, worked as the director of test development at ACT, and served on the faculty at several education schools. As the president of the Assessment Training Institute in Portland, Ore., from 1992 to 2010, he helped teachers design classroom-assessment tools and strategies.
He joins Education Week Associate Editor Catherine Gewertz to define what formative assessment is, and isn't, and to explain its purposes, its benefits, and how it's distinguished from other types of assessment.
The interview has been edited for space and clarity.
So what do you think the biggest areas of misunderstanding are about formative assessment among educators?
Stiggins: I think one big misunderstanding is among policymakers at all levels. [It's] the mistaken belief that somehow annual standardized accountability testing improves schools. It's not that I'm opposed to assessment at that level, but the obsessive belief that somehow this is the application of assessment that will improve schools flies in the face of everything we know.
A second misunderstanding is that people tend to think about formative assessment as an event rather than a process. The way we have to think about it is that we engage in the ongoing, day-to-day classroom-assessment process to give teachers and their students the information they need to understand what comes next in the learning. It isn't a one-time event.
There's another misunderstanding (again, a lack of understanding) that may surprise most people when I mention it. That is our failure to understand the role of the emotional dynamics of being evaluated from the student's point of view. For formative purposes, those dynamics have to center on keeping students believing in themselves. It isn't merely about getting teachers more information so they can make better instructional decisions. Good formative assessment keeps students believing that success is within reach if they keep trying.
Tell us a little bit more about this idea of students being engaged in the process.
This idea arises from a researcher and assessment expert in Australia; his name is Royce Sadler. What he said to us is, we use formative assessment productively when we use it in the instructional context to do three things. One is, keep students understanding the achievement target they're aspiring to. The second is, use the assessment process to help them understand where they are now in relation to that expectation. And the third is, use the assessment process to help students understand how to close the gap between the two. Do you see where the locus of control resides? It's with the student.
Should formative assessments ever be graded? Because we certainly hear about them being graded.
Here's how I think about it. Anything and everything that students do by way of their work, or their performance, needs to be evaluated, to be sure, in terms of very specific, preset performance criteria that are known to the teacher and the student. So, for example, in diagnosis, the judgments about student performance in relation to those criteria help to identify students' strengths and weaknesses. And, of course, diagnosis is, how do you rely on the strengths to overcome the weaknesses? To provide feedback, we need to help students know how to do better. Judgments about how they're doing in relation to those performance criteria will reveal that to them and to their teacher.
We need to keep good records so that we can track student changes over time. But [in the formative] context, there's really never a need to assign a letter grade. My admonition to teachers is, while the learning is going on, and we're diagnosing and providing that good feedback, the grade book remains closed.
There is a variation on this theme that is important. That is, can formative evidence ever serve summative purposes? And the answer is clearly, "yes." If I have information from the formative application of assessment in my classroom that reveals a higher level of achievement than was revealed by, for example, a unit final exam, then it's my responsibility to use the best evidence I have to determine, for example, a student's report-card grade. So yes, the barrier between the two can come down, but only under those very specific circumstances.
I'd love to hear an example or two of a formative assessment that you think was done really well.
Well, the classic example is a process that I was privileged to watch unfold over time in a high school English class. The assignment was to write a term paper: read three pieces of literature by the same author and [defend a] thesis statement in a term paper. What this teacher did, to begin with, was distribute a copy of a term paper that was of outstanding quality. She asked students to read it as a homework assignment and try to make judgments about what it was about this paper that really made it outstanding. The next day in class, they brainstormed a list of all the attributes that made it an outstanding piece of work in the students' opinion.
Next, she distributed a copy of a term paper, one she had actually fabricated, that was of dismal quality. Once again, the assignment was to read the term paper and see if you can articulate what makes it an ineffective piece of work. And they brainstormed again. And then she said, "OK, let's talk about the differences between these two papers. What was it about the good paper that differentiates it from the bad paper?" They began to brainstorm that. They had a long list of differences between the two. Now, what I want people to understand is, as this was unfolding, ain't nobody writing any term papers. They're still working on how to think about this. What she got them to do is to boil down that long list of differences into the four or five most essential differences, coalesce them, group them together. Then they wrote definitions of them, working in teams, definitions of those key attributes.
She had them begin to create student versions of rating scales of quality. Like, she said, for this particular attribute, take a few minutes to think about what that attribute would look like when it's outstanding. What would that attribute look like when it's of dismal quality? And what would the midrange look like? What they were creating, in effect, under her leadership (and understand that this wasn't being left completely to the students; she was leading them through this process to center on the key attributes) were essentially student-friendly versions of the learning targets they were expecting to hit. When they were done with all of this, it came time to draft their papers. So ... what happens is, they begin to zero in on the really key attributes of good work before they begin the work.