This New York Times article from last August about crowd-sourced peer review caused a buzz throughout academia and some alarm among publishers and peer reviewers. The Times focused on a crowd-sourcing trial by the 60-year-old Shakespeare Quarterly. While predictions of a complete overhaul of the peer-review process are premature, there are implications for resource creators, publishers, reviewers, adopters, and users.
A fascinating spontaneous peer review occurred last summer. Julie Rehmeyer reported: “Vinay Deolalikar, a computer scientist at Hewlett Packard labs in India, sent an email on August 7 to a few top researchers … staking a claim on the million-dollar Millennium Prize offered by the Clay Mathematics Institute … The examination (by the crowd) … helped spur on a new model of research.”
Kathleen Fitzpatrick of Pomona College advocates crowd-sourced pre- and post-publication peer reviews in her forthcoming book Planned Obsolescence: Publishing, Technology, and the Future of the Academy. Dr. Fitzpatrick walks the talk by opening the manuscript to crowd review.
I think of crowds as well-meaning anonymous amateurs. In contrast, textbook and journal publishers assure us that their anonymous reviewers truly are peers of the author. I am always uncomfortable with anonymous sources, though their writings can be excellent. This outstanding post by someone named Mike presents arguments for both traditional and crowd-sourced peer reviews. I wish that Mike would share his credentials.
College Open Textbooks selects peers of the textbook adopters (i.e., people who teach the subject at the college level) and we share their names, credentials, and affiliations on our reviewers page.
There are three types of reviews:
1) reviews that result in the resource being improved prior to publication
2) in-depth post-publication reviews that cite the strengths and weaknesses of the resource based on well-defined criteria
3) thumbs-up, thumbs-down reviews that can happen before or after publication
College Open Textbooks reviews fit squarely into the second category. Our peer reviewers rate each chapter of the book on 11 criteria and provide comments. We publish the average rating for each criterion and a summary paragraph. Would-be adopters can receive the entire spreadsheet. We are moving toward contacting the textbook authors, other creators, and publishers to encourage them to improve their textbooks based on the peer reviewers’ comments.
If crowd-sourced reviews are of the thumbs-up, thumbs-down type, they are a threat to the whole concept of peer review. Treating an academic paper or textbook like a movie, novel, or restaurant risks a complete erosion of quality standards. The Internet makes it far too easy to give a resource a cursory glance, click a rating button, and move on. People who dislike a resource are more likely to comment than those who find it useful; Amazon and Yelp reviews show this pattern. So authors, publishers, and merchants rush to offset the negative comments with reviews by friends, and objectivity and standards are lost.