Education Week

Opinion Blog


Rick Hess Straight Up

Education policy maven Rick Hess of the American Enterprise Institute think tank offers straight talk on matters of policy, politics, research, and reform. Read more from this blog.

Education Opinion

The 2017 RHSU Edu-Scholar Public Influence Scoring Rubric

By Rick Hess | January 10, 2017 | 11 min read

Tomorrow, I'll be unveiling the 2017 RHSU Edu-Scholar Public Influence Rankings, honoring the 200 university-based education scholars who had the biggest influence on the nation's education discourse last year. Today, I want to run through the scoring rubric for those rankings. The Edu-Scholar rankings employ metrics that are publicly available, readily comparable, and replicable by third parties. This obviously limits the nuance and sophistication of the measures, but such is life.

Given that there are well over 20,000 university-based faculty tackling educational questions in the U.S., even making the Edu-Scholar list is an honor, and cracking the top 100 is quite an accomplishment in its own right. So, who made the list? Eligible are university-based scholars who focus primarily on educational questions ("university-based" meaning a formal university affiliation, including a webpage on a university site). The rankings include the top 146 finishers from last year, augmented by 54 "at-large" additions named by a selection committee of 27 accomplished and disciplinarily and intellectually diverse scholars. The selection committee (composed of members already assured a bid by finishing in last year's top 150) nominated individuals for inclusion and then selected 50 from that slate of nominees. In the handful of cases where one of last year's automatic qualifiers is no longer affiliated with a university (typically due to retirement), we took the opportunity to add another "at-large" name.

I'm indebted to the committee members for their assistance and would like to take a moment to acknowledge the members of the 2017 RHSU Selection Committee. They are: Deborah Ball (U. Michigan), Camilla Benbow (Vanderbilt), Linda Darling-Hammond (Stanford), David Deming (Harvard), Susan Dynarski (U. Michigan), Susan Fuhrman (Columbia), Dan Goldhaber (U. Washington), Sara Goldrick-Rab (Temple), Jay Greene (U. Arkansas), Eric Hanushek (Stanford), Shaun Harper (U. Penn), Doug Harris (Tulane), Jeff Henig (Columbia), Gloria Ladson-Billings (U. Wisconsin), Marc Lamont Hill (Morehouse), Pedro Noguera (UCLA), Robert Pianta (U. Virginia), Morgan Polikoff (USC), Jim Ryan (Harvard), Marcelo Suarez-Orozco (UCLA), Bridget Terry Long (Harvard), Jacob Vigdor (U. Washington), Kevin Welner (CU Boulder), Marty West (Harvard), Daniel Willingham (U. Virginia), Yong Zhao (Kansas), and Jonathan Zimmerman (U. Penn).

Okay, so that's how the list of scholars was compiled. How were the scholars ranked? Each scholar was scored in nine categories, yielding a maximum possible score of 200, although only a handful of scholars actually cracked 100.
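
For readers who like to check the arithmetic, the nine per-category caps described below do sum to that 200-point ceiling. A minimal sketch in Python (the category names are mine):

```python
# Per-category point caps, as described in the sections that follow.
caps = {
    "google_scholar": 50,
    "book_points": 20,
    "amazon_ranking": 20,
    "syllabus_points": 10,
    "ed_press_mentions": 30,
    "web_mentions": 25,
    "newspaper_mentions": 30,
    "congressional_record": 5,
    "klout_score": 10,
}
assert sum(caps.values()) == 200  # maximum possible Edu-Scholar score
```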

Scores are calculated as follows:

Google Scholar Score: This figure gauges the number of articles, books, or papers a scholar has authored that are widely cited. A neat, common way to measure the breadth and impact of a scholar's work is to rank their works from most- to least-cited, and then find the largest number h such that their h most-cited works have each been cited at least h times. (This is known to aficionados as a scholar's "h-index.") For instance, a scholar who had 20 works that were each cited at least 20 times, but whose 21st most-frequently cited work was cited just 10 times, would score a 20. The measure recognizes that bodies of scholarship matter greatly for influencing how important questions are understood and discussed. The search was conducted using the advanced search "author" filter in Google Scholar. A hand search culled out works by other, similarly named, individuals. For those scholars who had been proactive enough to create a Google Scholar account, their h-index was available at a glance. While Google Scholar is less precise than more specialized citation databases, it has the virtue of being multidisciplinary and publicly accessible. Points were capped at 50. This measure offers a quick way to gauge the expanse and influence of a scholar's work. (This search was conducted on December 13.)
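
For the computationally inclined, here is a minimal sketch of that calculation (the citation counts are hypothetical):

```python
def h_index(citations):
    """Largest h such that h works have each been cited at least h times."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# The example from the text: 20 works cited 20+ times; the 21st cited 10 times.
assert h_index([20] * 20 + [10]) == 20

def google_scholar_points(citations):
    return min(h_index(citations), 50)  # capped at 50 points
```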

Book Points: An author search on Amazon tallied the number of books a scholar has authored, co-authored, or edited. Scholars received 2 points for a single-authored book, 1 point for a coauthored book in which they were the lead author, a half-point for coauthored books in which they were not the lead author, and a half-point for any edited volume. The search was conducted using an "Advanced Books Search" for the scholar's first and last name. (On a few occasions, a middle initial or name was used to avoid duplication with authors who had the same name, e.g., "David Cohen" became "David K. Cohen.") We only searched for "Printed Books" (one of several searchable formats) so as to avoid double-counting books that are also available as e-books. This obviously means that books released only as e-books are omitted. However, circa 2016, few relevant books are, as yet, released solely as e-books (this will likely change before long, but we'll cross that bridge when we come to it). "Out of print" volumes were excluded, as were reports, commissioned studies, and special editions of magazines or journals. This measure reflects the conviction that books can influence public discussion in an outsized fashion. Book points were capped at 20. (This search was conducted on December 13.)
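
A sketch of that weighting, with hypothetical counts:

```python
def book_points(solo, lead_coauthor, other_coauthor, edited):
    """2 pts per solo book, 1 pt as lead coauthor, 0.5 pts as non-lead
    coauthor or per edited volume; capped at 20."""
    raw = 2.0 * solo + 1.0 * lead_coauthor + 0.5 * other_coauthor + 0.5 * edited
    return min(raw, 20.0)

# Hypothetical: 5 solo books, 3 as lead coauthor, 2 as non-lead, 4 edited volumes.
print(book_points(5, 3, 2, 4))  # 10 + 3 + 1 + 2 = 16.0
```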

Highest Amazon Ranking: This reflects the author's highest-ranked (i.e., best-selling) book on Amazon. That book's sales rank was subtracted from 400,000 and the result was divided by 20,000 to yield a maximum score of 20. The nature of Amazon's ranking algorithm means that this score can be volatile and favors more recent sales. For instance, a book may have been very influential a decade ago and continue to influence citation counts and a scholar's visibility but no longer sell many copies. Such a book will typically rank poorly on Amazon and thus earn few points. The result is an imperfect measure, but one that conveys real information about whether a scholar has penned a book that is influencing contemporary discussion. (This search was conducted on December 13.)
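
In code (flooring at zero for sales ranks worse than 400,000, which would otherwise go negative, is my assumption; the post doesn't say):

```python
def amazon_points(best_sales_rank):
    """(400,000 - rank) / 20,000, capped at 20; floored at 0 (my assumption)."""
    return max(0.0, min((400_000 - best_sales_rank) / 20_000, 20.0))

print(amazon_points(10_000))   # (400,000 - 10,000) / 20,000 = 19.5
print(amazon_points(390_000))  # a weak seller earns only 0.5
```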

Syllabus Points: This year, we added a new category to the scoring formula, intended to measure long-term academic impact on what gets taught to new generations of students. A search of "OpenSyllabusProject.org," a website that collects over one million syllabi from across American, British, Canadian, and Australian universities, gauged how widely the works of various authors are used. (The syllabi are assembled from publicly accessible university websites across thousands of institutions.) A search of the "Open Syllabus Explorer," using the scholar's name, was used to identify their top-ranked text. The score reflects the number of times that text appeared on syllabi, with the tally then divided by 5. The score was capped at 10 points. (This search was conducted on December 14.)
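
The arithmetic, sketched with a hypothetical tally:

```python
def syllabus_points(top_text_appearances):
    """Syllabus appearances of the scholar's top-ranked text, divided by 5; capped at 10."""
    return min(top_text_appearances / 5, 10.0)

print(syllabus_points(38))  # 38 / 5 = 7.6
```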

Education Press Mentions: This measures the total number of times the scholar was quoted or mentioned in Education Week, the Chronicle of Higher Education, or Inside Higher Ed during 2016. Searches were conducted using each scholar's first and last name. If applicable, we also searched names using a common diminutive and both with and without middle initials. In each instance, the highest result was recorded. (Note: In the case of Dan Sarofian-Butin, whose last name changed this year due to marriage, we agreed to a request and used his newly hyphenated last name for Ed Press, Web, and Newspaper Mentions.) The number of appearances in the Chronicle and Inside Higher Ed were averaged and that number was added to the number of times a scholar appeared in Education Week. (This was done to give equal weight to K-12 and to higher ed.) The resulting figure was multiplied by two, with total Ed Press points then capped at 30. (This search was conducted on December 14.)
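
That averaging-and-doubling, sketched with hypothetical counts:

```python
def ed_press_points(ed_week, chronicle, inside_higher_ed):
    """Average the two higher-ed outlets, add Education Week, double; cap at 30."""
    raw = (ed_week + (chronicle + inside_higher_ed) / 2) * 2
    return min(raw, 30.0)

print(ed_press_points(ed_week=6, chronicle=4, inside_higher_ed=2))  # (6 + 3) * 2 = 18.0
```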

Web Mentions: This reflects the number of times a scholar was referenced, quoted, or otherwise mentioned online in 2016. The intent is to use a "wisdom of crowds" metric to gauge a scholar's influence on the public discourse last year. The search was conducted using Google. The search terms were each scholar's name and university affiliation (e.g., "Bill Smith" and "Rutgers University"). Using affiliation served a dual purpose: It avoids confusion due to common names and increases the likelihood that mentions are related to their university-affiliated role, rather than their activity in some other capacity. If a scholar was mentioned sans affiliation, that mention was omitted. As with Ed Press, searches included common diminutives and were run with and without middle initials. For each scholar, we used the single highest score from among these various configurations. (We didn't sum them up, as that produces lots of complications and potential duplication.) Points were calculated by dividing total mentions by 3. Scores were capped at 25. (This search was conducted on December 14.)
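
In code, with a hypothetical count:

```python
def web_mention_points(mentions):
    """Best name-plus-affiliation search variant, divided by 3; capped at 25."""
    return min(mentions / 3, 25.0)

print(web_mention_points(45))  # 45 / 3 = 15.0
```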

Newspaper Mentions: A LexisNexis search was used to determine the number of times a scholar was quoted or mentioned in U.S. newspapers. Again, searches used a scholar's name and affiliation, included common diminutives, and were run with and without middle initials. In each instance, the highest result was recorded. Points were calculated by dividing the total number of mentions by two, and were capped at 30. (The search was conducted on December 13.)
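
And likewise for newspapers:

```python
def newspaper_points(mentions):
    """LexisNexis mentions divided by 2; capped at 30."""
    return min(mentions / 2, 30.0)

print(newspaper_points(22))  # 22 / 2 = 11.0
```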

Congressional Record Mentions: We conducted a simple name search in the Congressional Record for 2016 to determine whether a scholar had testified or their work had been referenced by a member of Congress. Qualifying scholars received five points. (This search was conducted on December 15.)

Klout Score: A Twitter search determined whether a given scholar had a Twitter profile, with a hand search ruling out similarly named individuals. The score was then calculated from a scholar's Klout score, which is a number between 0 and 100 that reflects online presence across several information-sharing platforms. Klout explains that this score is "derived from combinations of attributes, such as the ratio of reactions you generate compared to the amount of content you share." The Klout score was divided by 10, yielding a maximum score of 10. (This search was conducted on December 13.)
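
Finally, the Klout scaling and the overall tally, sketched under the same assumptions as above (the five-point Congressional Record bonus is folded in as a simple yes/no):

```python
def klout_points(klout_score):
    """Klout score (0-100) divided by 10, for a maximum of 10 points."""
    return min(klout_score / 10, 10.0)

def total_score(google_scholar, books, amazon, syllabus, ed_press,
                web, newspaper, in_congressional_record, klout):
    """Sum of the nine category scores; the theoretical maximum is 200."""
    congressional = 5.0 if in_congressional_record else 0.0
    return (google_scholar + books + amazon + syllabus + ed_press
            + web + newspaper + congressional + klout)
```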

Scores are intended to acknowledge scholars whose body of work influences our thinking and who are actively engaged in public discourse. That's why the scoring discounts, for instance, academic publications that are rarely cited or books that are unread or out of print. Generally speaking, the scholars who rank highest are those who are both influential researchers and influential public voices.

There are obviously lots of provisos when perusing the results. Different disciplines approach books and articles differently. Senior scholars have had more opportunity to build a substantial body of work and influence (and the results unapologetically favor sustained accomplishment). And readers may care more for some categories than others. That's all well and good. The whole point is to spur discussion about the nature of constructive public influence: who's doing it, how much it matters, and how to gauge a scholar's contribution. If the results help prompt such conversation, then we're all good.

A couple of notes regarding questions that come up annually. First, there are some academics who dabble (very successfully) in education, but for whom educational questions are only a sideline. Such individuals are not eligible for the Edu-Scholar rankings. (I'm sure they'll survive.) For a scholar to be included, education must constitute a substantial majority of their scholarship. Otherwise, for instance, Nobel laureates who've dabbled in education would play havoc with the rankings. This policy helps ensure that the rankings serve as something of an apples-to-apples comparison among scholars who focus on education. Second, scholars sometimes change institutions in the course of a year. My policy is straightforward: For the couple of categories where affiliation is used, the searches are conducted using a scholar's year-end affiliation. Otherwise, it creates concerns about double-counted material while placing an undue burden on my RAs. The result is that scholars get dinged a bit in the year in which they move. But that's life.

Two questions commonly arise: Can somebody game this rubric? And am I concerned that this exercise will encourage academics to chase publicity? As for gaming, I've few worries. If scholars are motivated to write more relevant articles, pen more books that people read, or get more proactive about communicating in an accessible fashion, that's great. That's just good public scholarship. As for academics chasing publicity: well, there's obviously a point where communication turns into sleazy PR, but most academics are so far from that point that I'm not unduly concerned.

Tomorrow's list obviously represents only a sliver of the faculty across the nation who are tackling education or education policy. For those interested in scoring additional scholars, it's a straightforward task to do so using the scoring rubric. Indeed, the exercise was designed so that anyone can generate a comparable rating for a given scholar in no more than 15-20 minutes.

And a final note of thanks: For the hard work of coordinating the selection committee, finalizing the 2017 list, and then spending dozens of hours crunching and double-checking all of this data for 200 scholars, I owe a big shout-out to my gifted, diligent research assistants Kelsey Hamilton, Grant Addison, and Paige Willey.

The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Education Week, or any of its publications.