This post featuring Prof. Lawrence Chen is the latest installment in our ongoing series about assessment tools for large classes. On March 17, 2015, Lawrence was the guest speaker at a brown bag lunch session on Evaluation and Feedback for Large Classes. In his presentation, Peer Review as an Active Learning Strategy in a Large First Year Course, Lawrence shared his thoughts on the pedagogy and logistics related to his experience implementing a peer review writing assignment with nearly 500 undergraduate Engineering students, as well as his students’ thoughts on engaging in this peer review task.
Below are highlights from Lawrence’s presentation, in Q&A format with one of the presentation organizers:
Before we get into the details of your peer review assignment with nearly 500 students, can you tell us how you understand peer assessment?
Sure. I’ll give you a quotation. Peer assessment is an instructional approach where “learners consider and specify the level, value, or quality of a product or performance of other equal-status learners (…) Peer-assessment can be summative or formative” (Topping, 1998).
What is the context for the course where you tried out this peer review assignment?
The course is called Introduction to the Engineering Profession (FACC 100). It’s a gateway course taken by all first-year Engineering students, which means about 850 students a year. I taught the course in Fall 2013 and had two sections with a total of close to 500 students.
What were the writing tasks that would be peer reviewed?
There were two assignments – a 1-page reflection and a 1-page process description. The reflection was designed to introduce students to the interdisciplinary nature of the Engineering profession. The process description called upon students to choose one of two situations they might encounter as Engineers where they would have to make a decision according to an ethical theory and/or decision-making process discussed in class. Using specific criteria, they had to justify their choice of decision-making process. Intentionally, these assignments didn’t have right or wrong answers.
How did students know what to address in the peer feedback?
We created rubrics so that students would have criteria to follow. Reviewer students first had to provide a one-sentence summary of the content. Then, they had to give a numeric score according to specific criteria. Finally, they had to justify that score with brief comments. In order for students to be able to give good quality feedback, I asked them to read each assignment twice: once in order to understand the content and a second time in order to give thoughtful feedback according to the rubric criteria.
I should also mention that students were able to see the rubrics before submitting their writing assignments. That way, they knew what to do in order to meet the criteria.
How much of students’ final grade were these assignments worth?
Each of the two assignments was worth 10%, but that 10% was based on both peer feedback and instructor/TA feedback. Let me explain the breakdown: 5% of the grade was assigned by peers. Each student’s assignment was reviewed by three peers, and the average of those three grades made up that 5% of the mark. The other 5% was based on students’ participation in the peer review process: the TAs and I assessed the quality of each student’s feedback and assigned the balance of the grade accordingly. We did not grade the students’ papers, only the quality of their feedback. In the end, a total of 10% of students’ final course grade was assigned by peers.
What did students have to say about the task of reviewing their peers’ work and assigning a grade?
Actually, I did a survey to find out what students thought. It was a paper-and-pencil survey. We received 314 responses, of which we analyzed 35%. Here are the results:
- Did you enjoy reading other students’ papers? Yes: 82%; No: 6%; Indifferent: 7%; No answer: 5%
- Did knowing your paper would undergo peer review change the way you approached your writing? Yes: 50%; No: 39%; Somewhat: 2%; No answer: 9%
- Is this a useful exercise? Yes: 77%; No: 9%
- What are ways to improve this assignment? Responses to this question were grouped into categories:
- Bias in peer’s grading
- Acquisition of new skills
- Grading scheme
- Roles of TAs and course instructor
- Quality of writing
Out of the nearly 500 students who took the course, I received only two emails with complaints.
I also read the course evaluations for comments on the peer review assignments. With a response rate of about 45%, there were only two or three negative comments across the two sections.
So, overall, it seems students were satisfied with the peer review assignment.
Why did you decide to have students do a peer review assignment?
I want my students to develop their critical analysis skills, their ability to follow standards, and their ability to give and receive feedback. These skills are important for their future, whether they go on to practise as engineers or, especially, continue to graduate studies.
Do you feel the peer review assignment supported your students’ learning?
Yes, but in the future, I’d like to do a better job of communicating to students what to do with the feedback. Students got feedback from their peers, but there was no requirement to act on it or process it in some way. I might consider doing one peer review assignment and giving students the opportunity to revise their writing based on the feedback.
I also plan to do a calibration exercise as an in-class activity. I would give students the rubric and a paper to grade. Then, they would discuss in class how they approach assessment of peers’ writing.
Can you tell us about the logistics of doing a peer review assignment with a large class?
It was a challenge. It was a double-blind review process for academic privacy reasons and because I wanted to mimic the academic peer review process. I tried to run the assignments and reviews through myCourses, but unfortunately, the software is not designed to accommodate blind reviews. In addition, students were not attentive to instructions such as, “Don’t identify yourself by name or ID number anywhere in the assignment or in the file name.” In the end, I went through each file—assignment submissions and reviews—one by one to remove identifying information. And because myCourses isn’t designed for this type of exercise, grades had to be recorded manually and then entered into myCourses. Logistically, it was a frustrating and time-consuming endeavour. Really, McGill needs a suitable peer review platform, and I know TLS is looking into it.
In light of these challenges, do you think you will try this assignment again?
Yes, it’s definitely an assignment worth repeating, because Engineering students have to become more aware of how important communicating effectively in writing is to being successful professionals in the field.
After Lawrence’s presentation, colleagues posed questions. Below are selected questions along with Lawrence’s answers.
Q: This is an ambitious project. You deserve a lot of credit for it. What was the average score given by students?
A: The average grade received overall, with both parts combined, was generally about 8/10, which means students were giving an average grade of 4/5. Some students scored consistently high or consistently low; others made use of the full rubric scale.
Q: Was there a change in how reviewers provided comments from the first to the second assignment? Did the comments improve as reviewers saw how they themselves were being graded?
A: I didn’t analyze this metric although the data is available. We’d have to go back and look at that.
Q: Knowing undergraduates, would it be beneficial to have assignments with an actual “right” answer?
A: Well, if it were a technical question in Engineering, it would require more content expertise. Students in this first-year course wouldn’t necessarily be able to adequately assess technical problems.
Q: What were the TAs’ impressions of this peer review assignment?
A: Generally, they felt frustration, which can be attributed to the logistical problems. But when I explained why we were doing what we were doing, they said, “Oh yeah, that makes sense.” But I don’t really know to what extent they felt the assignment had value.
Q: Would it be logistically simpler to use paper submissions?
A: I originally avoided paper for environmental reasons, but in light of the logistics problems, I intend to use printed submissions in future, which I will scan personally to ensure the process remains double-blind. Of course, I may yet find a way to automate this. Also, McGill is in the process of investigating software for peer review, so maybe a tool will eventually be in place.
Other teaching initiatives in the Faculty of Engineering
Click to read more about teaching and learning in McGill’s Faculty of Engineering.