How Peer Review and Peer Grading Can Inspire Knowledge Building in the Classroom

March 28, 2016 | By Yuerong Sweetland
Assessment/Evaluation
Instructional Practices

Citing von Glasersfeld (1995), Brill and Hodges (2011) suggested that peer reviews are conducive to knowledge building among learners through shared experiences. Drawing on their own practices in teaching instructional design courses, they concluded that peer reviews helped support professional standards and strengthen learners’ real-world skills in solving complex problems.

Examining a Real-World Peer Review Situation

There is no doubt that peer reviews can significantly facilitate and inspire knowledge building in a learning community. At one higher education institution, a capstone course recently adopted a peer review system so that students could learn from each other. The system required students to use a pre-defined rubric to evaluate their classmates’ projects and provide written comments for improvement. The reviews and comments were, in turn, evaluated by peers using a second pre-defined rubric. Both steps generated scores that contributed to students’ final course grades: 15% (the average of the peer review results on student projects) and 7.5% (the average of the peer review results on the reviews students submitted). These percentages were intentionally set very low to alleviate concerns that peer evaluation results would have too much of an impact on students’ final grades. Students then used the improvement suggestions they received from their peers to revise their projects and submitted their final drafts to the professor for grading, which counted toward 20% of their final course grades.
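To make the weighting concrete, the scheme above can be sketched in a few lines of code. This is only an illustration of the arithmetic described in the post; the function name and sample scores are hypothetical, not taken from the actual course.

```python
# Illustrative sketch of the capstone grade weighting described above.
# Weights (15%, 7.5%, 20%) come from the post; everything else is assumed.

def peer_review_contribution(project_review_scores, review_quality_scores,
                             instructor_score):
    """Portion of the final course grade (on a 0-100 scale) produced by
    the peer-review process and the instructor's grade on the final draft."""
    avg_project = sum(project_review_scores) / len(project_review_scores)
    avg_review = sum(review_quality_scores) / len(review_quality_scores)
    return (0.15 * avg_project        # peer reviews of the student's project
            + 0.075 * avg_review      # peer evaluations of the reviews submitted
            + 0.20 * instructor_score)  # instructor grade on the final draft

# Hypothetical example: three peer scores on the project, two on the reviews
contribution = peer_review_contribution([90, 85, 95], [80, 90], 92)
# -> 38.275 points out of the 42.5 these components can contribute
```

Even a student who scores well across the board sees peer evaluations move the final grade by only a few points, which is exactly the “very small” impact the course designers intended.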

The instructor of the course, who had been teaching this course for a long time both before and after the adoption of the peer-review system, pointed out that the overall quality of students’ projects benefited from the peer review process. Students also reported that it was helpful both to receive improvement suggestions from peers and to have the opportunity to view others’ projects. Several students described “significant learning moments” that occurred as a result of the peer review. One student, in his reflection paper, pointed out:

“If our team had not taken the advice of [peer] reviews …, our project would have suffered greatly with a consistent viewpoint from three team members, not a single unified voice of a well-oiled group.”

In spite of the overall positive feedback, some students expressed concerns, suggesting that peer evaluation results could be “unfair” or too harsh (even though the results accounted for only a very small percentage of course grades). Some students also did not think that their peers were “qualified” to assign scores.

Challenges of Peer Grading

Similarly, Hamer, Kell, and Spence (2007) noted student complaints along the same lines about their peer assessment system.

It seems that in both cases above, students were uncomfortable having their work GRADED by peers. This was not surprising, given that grading has traditionally been done by authoritative figures such as faculty. Scardamalia (2002), in identifying principles of advancing knowledge that include “idea diversity” and “democratizing knowledge,” argued that for a learning community, “ideas are at the center,” and “knowledge building is the job” (p. 12). As such, it might be worthwhile to reconsider how grading and peer review can co-exist in a knowledge building community. Or can they?

In the traditional sense, grades are almost always quantitative and contribute to students’ GPAs. However, in a learning community that should be “democratizing knowledge” with the ultimate goal of building knowledge, educators need to take into consideration the social implications of grading and be cautious about using peer review as a traditional grading tool. The major goal of peer reviews should still be having students provide well-thought-out feedback on each other’s work, using well-crafted rubrics. If faculty and administrators still want to use peer review as a “grading” tool, here are a few recommendations:

  1. The peer review results can be presented in the form of “rubric” categories, such as “emerging,” “developing,” “proficient,” and “exemplary.” This data can still be aggregated for informing and improving program performance, which academic programs often need.
  2. Students should be informed that the peer review results will not count towards their final course grades.
  3. Students should be advised to focus on the written feedback for improvement rather than on the different rubric categories.

Research has demonstrated the value of peer reviews in building knowledge in learning communities. In the past few years, research has emerged on using peer grading in MOOCs (e.g., Luo et al., 2014). However, how peer grading can be used in a traditional college class still requires much exploration and research.

References:

Brill, J. M., & Hodges, C. B. (2011). Investigating peer review as an intentional learning strategy to foster collaborative knowledge building in students of instructional design. International Journal of Teaching and Learning in Higher Education. Accessed March 8, 2016 from http://digitalcommons.georgiasouthern.edu/cgi/viewcontent.cgi?article=1025&context=leadership-facpubs

Hamer, J., Kell, C., & Spence, F. (2007). Peer assessment using Aropa. In Proceedings of the Ninth Australasian Computing Education Conference (ACE2007), Ballarat, Australia. CRPIT, 66. Mann, S., & Simon (Eds.). ACS, 43–54.

Luo, H., Robinson, A. C., & Park, J.-Y. (2014). Peer grading in a MOOC: Reliability, validity, and perceived effects. Journal of Asynchronous Learning Networks, 18(2), 1–14.

Scardamalia, M. (2002). Collective cognitive responsibility for the advancement of knowledge. In B. Smith (Ed.), Liberal Education in a Knowledge Society (pp. 67–98). Chicago: Open Court.

von Glasersfeld, E. (1995). Radical Constructivism: A Way of Knowing and Learning. Washington, D.C.: Falmer Press.