Special Report
Assessment Opinion

My Assessment Problem: A Desk-Eye View

By Ilana Garon — March 5, 2014

In September of last year, my students at the New York City public high school where I teach sat for a test called the Measures of Student Learning, or MOSL. The test was given for both math and English, each in a one-and-a-half-hour session which, for the sanity of both teachers and students, took place a week apart.

For English, my discipline, the students were given two reading passages, which they were told to use as the basis for an argumentative essay—in New York City Department of Education parlance, that means an essay in which students state a thesis and use the articles to provide both claims and a counter-claim. And here's the rub: Per the Department's "Advance" teacher-evaluation initiative, 20 percent of a teacher's yearly "rating" (Ineffective, Developing, Effective, or Highly Effective) will be based on the students' collective growth on the MOSL assessment, as judged through seven different matrices. Thus, the purpose of the MOSL is not to learn about the students' strengths or weaknesses so much as to learn about their teachers'.

Unsurprisingly, my students did not find their teachers' evaluation to be a sufficiently compelling motivator to sit and take this test. The first question they asked was whether, if they took the MOSL, they'd still have to take the Regents—the New York State standardized exams—at the end of the year. When they were told that they would, they asked, "So why do we have to take this, too?" and "Does this test actually count for our grade?" When told that it would not, they either wrote one-paragraph responses to the essay question (a guaranteed failure, as far as the test score goes) or refused to take it altogether. In the room where I was proctoring, one of the kids called me over and politely inquired, "Why are we doing this? This test is a waste of class time."

She couldn't have articulated my own feelings more precisely. But it wasn't only that I was irked by the fact that the four class periods required for administration of the math and English portions of the test could have been spent reading poetry, talking about novels, practicing writing conventions, or having discussions about world issues that would actually matter in the kids' lives. Nor was it simply the fact that this test was now going to be used to evaluate my colleagues and me as teachers. I also felt quite strongly that, separate from its dubious applications, the MOSL was, quite simply, a badly designed test.

Missing the Point

What do I mean by this? For starters, the articles chosen for the argumentative essay didn't pair well. The prompt asked the students to argue whether genius is innate or developed. One article, an excerpt from a Malcolm Gladwell book, examined a case study of violin students, finding that greater prestige in the profession corresponded to the number of hours per week the students spent practicing. The students who logged the most hours practicing, the study found, went on to play in world-renowned orchestras; the least consistently practicing students went on to become—what else?—violin teachers. (It was hard for us teachers not to feel a bit affronted by the choice of this particular passage.) But the companion passage—which was, presumably, supposed to show that genius is innate (and not developed through hours of practice)—was only tangentially connected to the prompt. It recounted noted animal scientist Temple Grandin's experience of living and working with autism, and contained only a seemingly throwaway line that one had to think a little bit "outside of the box" in order to accomplish great things. This, to me, didn't especially convey the supposed counter-point to the Gladwell article; and I don't think my students got the connection either, since a number of the kids who bothered to respond at all wrote some very earnest essays on the subject of "why autism is a hard disease to have."

The problems stemming from the MOSL and other assessments of its ilk really break down into two categories of questions: Are these tests good for evaluating teachers? And what uses—if any—do standardized tests have as far as students are concerned? (The tertiary issue of whether all these tests are simply a means of lining the pockets of various testing corporations' big-wigs is also a valid consideration, but not one I'll go into here.)

Evaluation Dysfunction

From my standpoint, there are a great many reasons why assessments like the MOSL are not good for evaluating teachers. One that I don't hear articulated enough is how little they can control for outside factors, irrespective of what proponents of the infamous "value-added" models might assert. Whether students do brilliantly or poorly, it's nearly impossible to attribute that performance solely to the one teacher they've had in a particular subject that year. Perhaps one English teacher's students did well on a test because they had a history teacher who consistently drilled them on essay writing. Or perhaps it was a writing teacher in an earlier grade who trained them particularly well. Whose pedagogical effect is the test measuring, exactly? Any group of scores is the result of a cumulative effect, not one single teacher's.

By the same token, what if a whole cohort of students does poorly on a given assessment because they are late-enrollers routed into "failing" public schools, as they all too often are, according to a report by the Annenberg Institute? Such students often come from families in extremely stressful situations, including recent immigration from non-English-speaking countries and bouncing between relatives' homes and homeless shelters, among other family crises. As a result, they tend to be the poorest performers on assessments.

I recently spoke with a group of researchers from the Human Resources Research Organization, a nonprofit that specializes in personnel management. They acknowledged the faulty link between a teacher's performance and students' scores on tests. They likened attempts to evaluate teachers of students in high-needs schools through their test scores to measuring the performance of an umbrella salesman in a desert: Even great "performance"—a deep product knowledge, great salesmanship, and a great personality—would not be "effective" in yielding umbrella sales in a dry zone, just as even great pedagogical performance (interesting lessons, rigorous assignments, a way with kids) in no way guarantees effectiveness in terms of a group of students' test scores hitting a certain mark.

In fact, what proponents of test-based teacher evaluation claim tests can show us about a teacher's effectiveness can probably be determined better in other ways. These include principal and peer-to-peer observations, student questionnaires, and examining a teacher's curriculum of lessons with an eye for rigor, creativity, and variety. Using students' test scores as a means of evaluating teachers attempts to put a number on something that is inherently too unquantifiable, too nuanced, and too broadly influenced by outside factors to be identified through an exam alone.

Failing the Students

The student benefit of these state-mandated assessments is also questionable. The premise of the particular exam my students took—that the argumentative essay is somehow the basis of the critical thinking the students will need to reach that ever-elusive "college-readiness" benchmark—is in itself faulty. In my own preparation for college, a premium was placed on skills in expository writing, research, clear explanation of sources, and, perhaps most importantly, my ability to come up with my own unique interpretation of any given source or set of sources. The argumentative essay as given by the MOSL requires students simply to choose the one reading that seems more "correct" as far as answering the prompt, summarize it, and then mention the points of the other one. It's laughable to assume that from this single exercise, one could distill their critical-thinking skills better than from any of the other tasks they perform over the course of their school year, or to believe that this test presents some "pure" example of critical thinking.

Apologists for "teaching to the test" might argue that tests like the MOSL promote critical thinking in that they require students to analyze passages of text for meaning and tone—and that teachers are simply too lazy to teach the difficult reading passages or critical-thinking skills that these tests require. This argument oversimplifies the truth, which is that assessments such as the MOSL represent only one particular type of critical thinking, one that is neither universally relevant nor indicative of all the ways one might deduce, synthesize, and re-convey information. It is, however, painfully boring for the kids. And in schools where truancy is an issue, where teachers are making every effort to make learning exciting, engaging, and relevant simply so that kids will show up consistently, forcing students to take assessment on top of assessment in the hopes that something new will be revealed is downright detrimental to educational outcomes.

The best and only useful application of assessments is a limited one, in both scope and frequency: They may be used diagnostically, in class, at the beginning of the school year or of new curricular units, to help teachers (and parents) determine what strengths students already have and what weaknesses teachers need to address. They should be skill-specific, and not time-consuming.

The idea that an assessment can measure something as broad and nebulous as "critical thinking" or "college readiness" or "teacher efficacy" must be given up entirely, in favor of small-scale mini-assessments, oriented toward discrete skills or topics, that can help teachers target instruction effectively throughout the school year. Only when viewed this way can assessment through tests—state-mandated or otherwise—actually serve a useful purpose, both in guiding teachers' instruction in the classroom and in enabling students to set and attain achievable educational goals.

Coverage of policy efforts to improve the teaching profession is supported by a grant from the Joyce Foundation. Education Week Teacher retains sole editorial control over the content of this coverage.
