Online vs. Face-to-Face Pedagogical Code Reviews: An Empirical Comparison

Reviewed by Greg Wilson / 2011-12-04
Keywords: Code Review, Computing Education

Hundhausen2011: Christopher D. Hundhausen, Pawan Agarwal, and Michael Trevisan: "Online vs. Face-to-Face Pedagogical Code Reviews". Proceedings of the 42nd ACM Technical Symposium on Computer Science Education (SIGCSE '11), doi:10.1145/1953163.1953201.

Given the increased importance of communication, teamwork, and critical thinking skills in the computing profession, we have been exploring studio-based instructional methods, in which students develop solutions and iteratively refine them through critical review by their peers and instructor. We have developed an adaptation of studio-based instruction for computing education called the pedagogical code review (PCR), which is modeled after the code inspection process used in the software industry. Unfortunately, PCRs are time-intensive, making them difficult to implement within a typical computing course. To address this issue, we have developed an online environment that allows PCRs to take place asynchronously outside of class. We conducted an empirical study that compared a CS 1 course with online PCRs against a CS 1 course with face-to-face PCRs. Our study had three key results: (a) in the course with face-to-face PCRs, student attitudes with respect to self-efficacy and peer learning were significantly higher; (b) in the course with face-to-face PCRs, students identified more substantive issues in their reviews; and (c) in the course with face-to-face PCRs, students were generally more positive about the value of PCRs. In light of our findings, we recommend specific ways online PCRs can be better designed.

This paper comes from software engineering education rather than software engineering per se, but it has a lot to say about the latter. Code review is now a regular part of most open source projects, thanks in part to online code review tools like ReviewBoard. Here, the authors compare those kinds of online reviews with face-to-face reviews, and find that the latter are more effective in several ways: people enjoy them more, they find more substantive issues, and they are more likely to come away believing that reviews are worth doing. It would be fascinating to replicate this study with both junior programmers joining established teams and more experienced developers who are undertaking reviews systematically for the first time.