A number of instructors at McGill have been integrating peer assessment (PA) in their courses and have generously shared some of their reflections on the experience.
Lawrence Chen teaches Introduction to the Engineering Profession (FACC 100), a required course for all first-year students in the Faculty of Engineering. During a conversation about his experience with PA, he described how he implemented PA in this course of approximately 400 students (across two sections) and shared feedback from his students about their experience.

What was the PA assignment?
Students had to propose how they would resolve an ethical issue described in a particular scenario. They had to justify their answer in light of the theories we had covered in class. For the PA exercise, I gave students specific criteria to look for and instructions on how to assess the paper. Criteria included things like clarity and proper application of the ethical theories.
What did you hope students would get out of the PA experience?
I wanted them to be exposed to PA, to learn how to critically analyze work, and to learn how to give and receive useful feedback.
I wanted to sensitize students to the notion and process of PA. In academic or corporate settings, you’re assessed by peers when applying for a grant or submitting a paper for publication. When it’s time for merit reviews in companies, you’re assessed through a process involving several people. I shared a personal example of a journal paper that didn’t get the most glowing assessment, and explained that less-positive assessments can happen even when you think you’ve done everything properly.
I also wanted the students to develop their ability to analyze and critically assess written work against specific criteria. This involved providing not only a numerical assessment but also written feedback. I wanted them to learn how to provide useful feedback to each other. Feedback like “great job” with a numerical assessment of 10/10 is nice to receive but not so helpful if you don’t know what the peer is referring to. Similarly, feedback that just says “poorly written” without further explanation won’t help you understand what you could’ve done better. I wanted students to learn to accept criticism on their work and to figure out what to do with the feedback they received. You can be frustrated by the feedback, but maybe there’s an element of truth to it. Students have to look at their papers more objectively and think, “What should I do with the feedback I got?”
How did you help students prepare to assess their peers’ work?
I explained to the students that they have two roles: they’re authors and they’re reviewers. As authors and reviewers, I wanted them to develop their critical thinking, analysis and assessment skills. As authors specifically, I wanted them to learn how to accept feedback and deal with criticism.
I did a 30-40 minute in-class “calibration exercise” to help students prepare for PA. I gave them a couple of sample papers to read. I also gave them a rubric with scoring criteria. Students assessed the papers. They then discussed their assessments in pairs or small groups. After the discussion, I asked them, “How many of you assessed this paper within one mark of each other? How many of you assessed this paper more than two marks apart?” It became clear that the majority of the students had assessed within one mark of each other. This calibration exercise allowed students to see that they were assessing the sample papers in a relatively similar manner. Then I explained, “This is how I would have assessed this paper…” Generally, the assessments students made were not that far off from how I would have assessed the papers.
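The show-of-hands questions in this calibration exercise boil down to checking pairwise agreement among scores. As a minimal illustration (not a tool Chen used in class, and assuming scores out of 10), a Python sketch of that check might look like this:

```python
from itertools import combinations

def agreement_summary(scores, close=1, far=2):
    """Count score pairs within `close` marks of each other and pairs
    more than `far` marks apart -- mirroring the two show-of-hands
    questions asked after the calibration discussion."""
    pairs = list(combinations(scores, 2))
    within = sum(1 for a, b in pairs if abs(a - b) <= close)
    apart = sum(1 for a, b in pairs if abs(a - b) > far)
    return within, apart, len(pairs)

# Hypothetical example: six students' scores (out of 10) on one sample paper
within, apart, total = agreement_summary([7, 8, 7, 6, 8, 9])
print(f"{within}/{total} pairs within 1 mark; {apart}/{total} pairs more than 2 apart")
```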
Could you talk about how technology supported the PA assignment?
The first time I did a PA assignment in this course (a couple of years ago), I had to do it manually. The logistics were challenging with such a large class. Now I use Peerceptiv, a software tool designed to support PA in large classes. It’s a phenomenal tool to help manage the process. I’ll explain.
The PA was a double-blind process, so students didn’t know who they were evaluating or who was evaluating them (although I could see this information). There were five steps (see the sketch after the timeline below):
- Students wrote their papers and submitted them through Peerceptiv.
- Each student was assigned five papers to review. They entered their written feedback and a numerical assessment into Peerceptiv.
- Students received the assessments from their peers and subsequently provided written feedback—and a numerical assessment—to the reviewers on how useful the written feedback was. Providing feedback on feedback in this way is called “back evaluation.”
- The students then revised their papers and resubmitted them.
- The revised papers were assigned to five other students for review.
The assignment rolled out over a nine-week period: two weeks to submit the first paper, two weeks to do the reviews, a week to do the back evaluation, a week to revise the paper, two more weeks to do the second round of reviews, and a week to do the second round of back evaluation.
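Peerceptiv handles the reviewer assignment itself, and its actual algorithm isn’t described here. As a rough illustration of how a double-blind, five-reviews-per-paper assignment like the one above can be generated, here is a minimal Python sketch that assumes a simple circular rotation over shuffled student IDs:

```python
import random

def assign_reviewers(student_ids, reviews_per_paper=5):
    """Assign each student `reviews_per_paper` papers to review so that
    no one reviews their own paper and every paper receives exactly
    `reviews_per_paper` reviews. Shuffling hides any meaning in ID
    order, supporting the double-blind setup."""
    order = list(student_ids)
    random.shuffle(order)
    n = len(order)
    assert n > reviews_per_paper, "need more students than reviews per paper"
    # The reviewer at position i reviews the next k authors around the circle.
    return {
        order[i]: [order[(i + j) % n] for j in range(1, reviews_per_paper + 1)]
        for i in range(n)
    }

# Hypothetical example with eight students
print(assign_reviewers(range(8)))
```

Note that rerunning this sketch with a fresh shuffle for round two would usually, but not always, yield different reviewers; Chen’s setup assigned the revised papers to five other students, so a real implementation would explicitly exclude round-one reviewers.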

How was the assignment grade calculated?
The assignment was worth 20% of the final course grade, broken down as follows:
- 25% was based on timeliness – students had to submit drafts, peer assessments, revisions, and back evaluations on time.
- 45% was based on the quality of the written feedback they provided on peers’ work.
- 30% was based on the average numerical assessment from the 10 reviews of their paper (five per round). There are a whole bunch of algorithms built into Peerceptiv, so it wasn’t just the raw average of all 10 student evaluations: if one evaluation was really far from the other four in its round, the algorithms gave it less weight in the averaging (see the sketch below).
Some students expressed concern about a portion of their assignment grade being decided by peers, but 6% of the overall course grade (that is, 30% of the 20% assignment grade) is not a “make or break” kind of scenario.
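To make the arithmetic concrete, here is a minimal Python sketch of this breakdown. The distance-from-median down-weighting is an illustrative assumption: Chen notes only that Peerceptiv’s built-in algorithms give less weight to divergent evaluations, not how they do so.

```python
from statistics import median

def weighted_peer_score(scores):
    """Illustrative outlier handling: the farther a score sits from the
    median of the set, the less it counts toward the average.
    (An assumption -- not Peerceptiv's actual algorithm.)"""
    m = median(scores)
    weights = [1.0 / (1.0 + abs(s - m)) for s in scores]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

def assignment_grade(timeliness, feedback_quality, peer_scores, out_of=10):
    """Combine the three stated components: 25% timeliness,
    45% feedback quality, 30% weighted peer numerical average.
    `timeliness` and `feedback_quality` are fractions in [0, 1]."""
    numerical = weighted_peer_score(peer_scores) / out_of
    return 0.25 * timeliness + 0.45 * feedback_quality + 0.30 * numerical

# Hypothetical example: everything on time, strong feedback,
# ten peer scores (out of 10) with one low outlier
scores = [8, 9, 8, 9, 8, 9, 8, 8, 9, 3]
print(f"{assignment_grade(1.0, 0.9, scores):.1%} of the 20% assignment weight")
```

In this example the single outlying score of 3 gets roughly one sixth the weight of the scores near the median, so it barely moves the result compared to a raw average.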
What did the students think about the PA experience?
I explicitly asked about the assignment and Peerceptiv in the course evaluation: “The peer assessment assignment and using a peer assessment tool had a positive impact on my learning.” Here are the results (each section had approximately 200 students; n is the number of respondents):
| Response | Section 1 (n=93) | Section 2 (n=84) |
| --- | --- | --- |
| Strongly agree/agree | 63 | 44 |
| Neutral | 23 | 21 |
| Disagree/strongly disagree | 7 | 19 |
Students also wrote comments in the course evaluation. Here are some examples:
- “Peer assessment is really interesting and made me learn a lot.”
- “It was interesting to see how other students view writing assignments and how they think and how they approach them.”
- “People responded poorly to criticism.”
- “I liked the honest feedback that people gave me. Some were easy graders but some were pickier over certain details.”
- “Forced me to have a strong understanding of the theory from the class.”
While there was a wide range of comments, most students said it was a useful exercise that they would do again; the ones who commented more negatively were clearly concerned about their grades. So in my mind, the students’ feedback validates the experience. I’ll continue to have students do PA using Peerceptiv.
What would you say to an instructor interested in trying PA for the first time?
Go for it! Be clear when you explain to students what you want them to get out of the exercise. Explain that they can benefit from carefully considering the assessments from their peers, even if they don’t agree with them at first, as there may be aspects of their written work that they overlooked initially.
Also, ask for your students’ comments on their experience. Some of their comments made me realize that I’d like to educate students a little more about the PA process the next time we do this exercise.
If you do a peer assessment activity in your course, how do you (or how might you) gather feedback from students about their experience?