Software peer review has proven to be a successful technique in open source software (OSS) development. In contrast to industry, where reviews are typically assigned to specific individuals, changes are broadcast to hundreds of potentially interested stakeholders. Despite concerns that reviews may be ignored, or that discussions will deadlock because too many uninformed stakeholders are involved, we find that this approach works well in practice. In this paper, we describe an empirical study to investigate the mechanisms and behaviours that developers use to find code changes they are competent to review. We also explore how stakeholders interact with one another during the review process. We manually examine hundreds of reviews across five high-profile OSS projects. Our findings provide insights into the simple, community-wide techniques that developers use to effectively manage large quantities of reviews. The themes that emerge from our study are enriched and validated by interviewing long-serving core developers.
Rigby and Storey's report won't surprise people steeped in the open source development culture, but everyone else may find it instructive. If you are used to assigning code reviews to some of your peers, having other reviews assigned to you, and working only on the items in your queue, the broadcast method of many open source projects (that is, broadcast your patch to the mailing list and hope it will be picked up, improved, and accepted by others) may seem entirely dysfunctional. Still, for the most part it works very well, and the authors explain how and why. If you don't have time for the full paper, the last section provides a good summary of its findings.
(Full disclosure: I'm currently affiliated with Dr. Storey's lab.)