
Standards & Accountability

Science Curriculum Reviews Are Out, and Results Aren't Great

By Stephen Sawchuk | February 28, 2019 | 10 min read

The first independent review to weigh whether new science curriculum series are truly aligned to a set of national standards is out, and the results are mostly discouraging.

Four of the series (Discovery's Science Techbook, Carolina Biological Supply Company's Science and Technology Concepts, and two versions of Teachers' Curriculum Institute's Bring Science Alive!) were deemed insufficiently aligned to the Next Generation Science Standards. One series, Houghton Mifflin Harcourt's Science Dimensions, was considered partially aligned. Only one series, Amplify's Amplify Science, got top marks for alignment, coherence, and usability, according to the nonprofit EdReports, which conducted the reviews.

Those texts represent a sample of middle school science curricula; others will be reviewed in the future. All six of the series were designed for grades 6-8, and they span a range of print and digital materials.

Already, some of the publishers of the series are disputing the way EdReports carried out these reviews. Let's dig into the results a bit and see what they reveal about curriculum development for the NGSS and the shape of the current science-materials market.

Who are these folks, and how did they conduct these reviews?

EdReports is a nonprofit that tries to gauge whether published learning materials align to states' expectations for students, including the NGSS and the Common Core State Standards. It's mostly been supported by philanthropies, including the Bill & Melinda Gates Foundation, which has given the group more than $15 million over the past decade.

As it has in the past, EdReports relied on teachers to use an in-house framework to judge each curriculum. This time around, about 40 reviewers, mostly practicing teachers or science specialists, participated.

Each series goes through a succession of gateways. Only those learning materials that score high enough on the first gateway, which focuses on how well they embody key features of the NGSS, are assessed on the next two, which look in more detail at the coherence of lessons and their usability for teachers.

In the past, not everyone has liked the gateway concept. EdReports' very first math reviews, several years ago, were criticized by publishers and math groups partly for this reason. (The organization changed its methodology slightly after that.)

Remind me. What's in these standards?

The NGSS were rolled out in 2013 and have since been adopted in 19 states and the District of Columbia. Even proponents acknowledge that the standards are incredibly complicated: They expect students to master science and engineering practices, such as analyzing and interpreting data. Students are also supposed to recognize themes that cut across biology, earth and physical science, and chemistry, such as how energy and matter flow into and out of systems. And both the practices and the crosscutting themes are layered on top of core science content, including weather patterns, natural selection, and ecosystems.

(There's rather a lot of jargon and acronyms in the standards to explain all this.)

Publishers have struggled to figure out how to embody all those demands in their materials. For example, should they attempt to put every science practice or crosscutting theme in every unit? Or can they be parceled out over the course of two or three units?

Still, most of the major publishers, including the three biggies (Pearson, HMH, and McGraw-Hill), have put out series supposedly aligned to the NGSS in the past three years.


See also: Teachers Scramble for Texts to Match Science Standards


Where did EdReports think these materials fell short?

The series that reviewers thought fell short didn't make it through Gateway 1 on overall design criteria for the NGSS; they received fewer than half of the 26 points available for that section. (HMH's series, which reviewers deemed partially aligned, got exactly half.)

That doesn't mean they're terrible, but it should be something for teachers to think about, said Morgan Martin, a teacher on special assignment from the Los Alamitos district in California and one of the reviewers.

"I don't think not passing a certain gateway is the red flag [signaling] 'This is the worst thing ever,'" she said. "It's just really good for teachers to have information about where the strengths are in the program, and then to know your teaching team and how qualified are you in understanding certain components and whether you're capable of filling in some gaps."

And what are those gaps? They fall into three main buckets.

One of the big problems was that most of the series apparently didn't consistently measure all three dimensions (I warned you that there was gonna be jargon!) of the standards: the science and engineering practices, crosscutting concepts, and disciplinary content. Discovery's Science Techbook series, for example, had some lessons with objectives that didn't include any of the crosscutting concepts or science and engineering practices, reviewers said.

Another problem has to do with phenomena. Basically, the standards indicate that scientific phenomena should undergird lessons and units. (A phenomenon could be something like why, in dry weather, you get a shock when you shuffle rubber-soled shoes on a woolly carpet and touch a metal doorknob, or why certain organisms in a particular ecosystem appear to be dying out.)

Then, each sub-unit is supposed to help students learn more about what's causing that phenomenon, to use scientific practices to record data about and make sense of it, and to connect it to the crosscutting themes. By the end of each unit, students should be able to generate a hypothesis for what caused the initial phenomenon and back it up with evidence.

EdReports' reviewers felt that some of the series appear to have gotten this a little backwards: The phenomena they included were more for illustrative purposes. For instance, here's what the reviewers wrote about one of the TCI series: "The 'Anchoring Phenomena' are most often used as examples of the content topic or concept as opposed to a driving mechanism for student questions and sensemaking."

Finally, some of the series ran into problems with assessment. HMH's Science Dimensions, for example, had some classroom-based assessments that didn't give teachers enough helpful information to change their teaching. The Science Techbook's end-of-unit tests didn't always match the objectives given at the beginning of each unit. And the tests in Carolina's Science and Technology Concepts didn't adequately measure all three of the dimensions.

"There was a huge range of quality and types of assessment, and some of the concerns were the ones that were multiple-choice content only, and didn't really look different or new," said Martin.

What do the publishers say about these findings?

In an interview, Carolina Biological Supply Company officials agreed that their series' presentation of the crosscutting themes wasn't as explicit as it could be, and promised they'd change that in future versions of the curriculum.

David Heller, director of product development for Carolina's curriculum division, also felt that the findings reflect how interpretations of the standards have evolved in the K-12 science field. In the early days of the NGSS, curriculum writers knew that phenomena were supposed to get kids questioning and thinking, but "being so explicit about how [lessons] relate back wasn't as much an understood point," he said.

Discovery Education officials claim the review process suffered from some serious flaws. Their main dispute concerns how EdReports gauged a section of the Techbook's curriculum that gives teachers several choices of how to proceed. EdReports, they say, interpreted the whole section as optional, even though Discovery says it isn't.

Overall, said Marty Creel, the chief academic officer at Discovery Education, the review doesn't reflect the current version of the Techbook.

"The reality in classrooms today is that teachers are pulling from materials all over the place, so to take a snapshot that's 10 months old we think is fundamentally unfair," he said. "We've been making a lot of improvements since, and the version they are now reporting on is not one we have actively going into classrooms. So we're kind of scratching our heads about why you'd put out a review on a version that basically no longer exists."

Asked about product updates, EdReports officials said they chose to review these series only after publishers assured them the materials wouldn't change radically, and that reviewers did keep up to date with additions.

"It's kind of a merry-go-round with some curricula that are digital, and it's hard to know when to jump on and off," said Eric Hirsch, the executive director of EdReports. "But we would have called off a review if we noticed the merry-go-round was starting to go around too fast. We also know that these materials will change, and that's why we stand ready to re-review."

Two other publishers with low marks sent statements.

"... There are numerous instances in their report where we believe that EdReports overlooked or disregarded evidence that we shared or where EdReports reviewers fundamentally misunderstood our program," TCI President Bert Bowers said.

HMH's assessment was even harsher. The rating, the company said, "does not reflect errors or problems with alignment on HMH's part, but rather reveals the EdReports rubric's lack of depth of engagement with NGSS and a philosophical difference in approach to the standards integration.

"We believe the rubric is limited by a disconnection from the research base of NGSS, its writers, and the community of teacher practitioners implementing the standards," it concluded.

Publishers also pointed out that the grading framework changed midway through the process, though EdReports officials said they rescored everything when those revisions were made.

All of the publishers get to submit formal responses to the findings, and those should now be posted alongside the reviews on the EdReports website.

Educational publishing is a tough business, and publishers are definitely concerned about these early findings, though none would say so on the record. But some did acknowledge the results would make the products harder to market and potentially confusing for consumers.

In past reviews, publishers have made revisions to their products after a low-scoring review and earned higher scores on later ones.

Where do these findings fit in with other science reviews?

That's a good question. This is the first independent review of science materials. Only a handful of states, notably California and Oregon, have issued comprehensive reviews of at least a half dozen series.

Louisiana has rolling curriculum reviews, which, like EdReports', are considered pretty tough; so far, they've given just one set of materials a green light, for 4th grade science.

In a way, the lack of thumbs-ups for science learning materials from both EdReports and Louisiana represents the opposite problem from what happened in California, whose review last fall approved a long list of programs. Even California officials say that districts will need additional help in narrowing down that list and making good choices, and there are some projects underway to help them do that.

It's also a good reminder that alignment is a bit in the eye of the beholder: Different people can set up different criteria about what constitutes alignment to standards, and reach different conclusions about the results.

And what does the science education field think of these results?

That's for you to tell me! I look forward to hearing from you about your thoughts, comments, and critiques on EdReports' findings. Leave a comment, or email me directly at ssawchuk@epe.org.

Clarification: This story has been updated to underscore that EdReports will review additional science series in the future.



A version of this news article first appeared in the Curriculum Matters blog.