Most instructors today use some form of a rubric to assess student writing or project-based learning. Gone are the days when a professor wrote a letter grade on the paper and that was the only feedback students received. However, even when using a rubric, instructors may grade each rubric category holistically, drawing on their depth of knowledge of the topic and years of experience.
Learners, though, do not have the experience, knowledge, or ability to grade an essay or project in this holistic way. For learners, the rubric needs to be a tool for both assessment and learning. With a well-designed, objective peer-review rubric, learners can grade more like an experienced instructor, improve their critical analysis, and produce better finished products. In fact, research has shown that reviewers improved the quality of their own work as a result of reviewing their peers' work (Wooley et al., 2008).
Peer Rubric Distinctives
Unlike rubrics designed for instructor use, peer rubrics should be structured in a way to (1) assess a document, whether it be a writing assignment, video upload, presentation, podcast, or any type of project, and (2) help the reviewer learn how to think critically about the content of that document. For this reason, peer rubrics should be designed to teach students how to analyze the assignment in terms of specific, objective criteria and identify the quality of work at each level.
Using peer review often requires making changes to the rubric to transform it from an effective teacher rubric into an effective peer rubric. Using learner-friendly language, stating clear qualitative differences between rubric levels, and assessing one aspect of an assignment at a time will make your rubric more effective for peer review.
Whether you are transforming an instructor rubric into a peer rubric or creating a peer rubric from scratch, here are some questions to help guide the process.
Questions to Ask to Create an Effective Peer-Review Rubric
1. Do the commenting prompts encourage students to give constructive criticism that brings out strengths and weaknesses in the document?
Design your commenting prompt to elicit the type of feedback that you want your students to write and receive. Encourage students to identify both strengths and weaknesses and to provide suggestions for improvement. Focus students’ attention on any specific areas on which you want them to provide feedback. Students appreciate the peer review process more when they receive useful, specific feedback that they can apply to future assignments.
For example, this commenting prompt on the content of a video presentation is designed to have the reviewer tell the presenter what they learned and what is not yet clear.
Content Commenting Prompt: What was something new that you learned or a new insight that you had as a result of this presentation? What in the presentation could have been clearer or more fully developed to enhance the content of this presentation?
2. How much reviewing can my students handle and still provide thoughtful, accurate feedback?
Reviewing several long documents with numerous commenting and rating prompts can lead to reviewer fatigue, where the reviewer starts to comment and rate carelessly. We recommend having multiple rating prompts but only one commenting prompt for each dimension (the aspect or criterion being evaluated). For example, if there is a Content dimension for a peer review of a presentation, the rubric may include the following rating prompts: Depth of Content, Argument Structure, and Quality of Sources Used. If you are encouraging students to provide detailed, constructive feedback in the commenting prompt, having a small number of commenting prompts allows for thoughtful responses on all of the documents a student reviews.
3. Do my students understand all of the words and terms used in the rubric?
Make sure that the rubric includes only words your students already know. Remember that your students may not be familiar with technical or academic language. Spend time defining or describing the rubric terminology in class. It is easy to assume that all students will understand the language or know the difference between "competent" and "satisfactory," but unless you have defined these terms in class, students' interpretations may vary.
4. Does the rubric have clear, objective differences between each rating level?
While an instructor can quickly identify the difference between “poor” and “fair” work or “progressing” and “mastering” a skill, learners need more guidance as to what each of these levels means. For that reason, avoid using only one-word descriptors for your ratings and instead include a short description for each rating level.
In your description, use objective language that can be quantified or measured by a learner in the class. Identify what distinguishes one level from another in the assignment and use words of frequency, amount, quality, or proficiency to describe each level. Provide examples if you think there will be any confusion.
Here are some useful words in creating distinct dimension levels:
- Frequency: never, rarely, sometimes, often, always
- Amount: less than 2, 3-4, 5-6, 7+; no more than 50%, over half
- Quality: poorly, well, thoroughly; satisfactory, exemplary
- Proficiency: developing, emerging, mastery, expert
- Actions: lacks X; includes X; includes and develops X; includes, develops, and analyzes X
5. Does this rubric scaffold student learning as well as allow for accurate peer review?
It is important to meet learners where they are in their understanding of concepts in your course. Your rubrics for peer assessment should include all the information a student needs to assess that particular assignment and not assume that students have outside knowledge of a concept or skill. This may mean that you remove or rework concepts such as grammatical accuracy, depth of analysis, or ability to synthesize material from the peer-review rubric until students have had greater exposure to them. Or, you may choose to focus on one quantifiable aspect of a concept (appropriate use of verb tenses, number of sources used, etc.) rather than asking students to assess that dimension holistically.
Wooley, R., et al. (2008). "The effects of feedback elaboration on the giver of feedback." 30th Annual Meeting of the Cognitive Science Society. Vol. 5.