It Will Never Work in Theory

Reducing Human Effort and Improving Quality in Peer Code Reviews using Automatic Static Analysis and Reviewer Recommendation

Posted Jun 19, 2013 by Fayola Peters


Vipin Balachandran: "Reducing Human Effort and Improving Quality in Peer Code Reviews using Automatic Static Analysis and Reviewer Recommendation." ICSE'13, 2013.

Peer code review is a cost-effective software defect detection technique. Tool-assisted code review is a form of peer code review that can improve both the quality and quantity of reviews. However, a significant amount of human effort is involved even in tool-based code reviews. Using static analysis tools, it is possible to reduce this effort by automating the checks for coding standard violations and common defect patterns. Towards this goal, we propose a tool called Review Bot for integrating automatic static analysis with the code review process. Review Bot uses the output of multiple static analysis tools to publish reviews automatically. Through a user study, we show that integrating static analysis tools with the code review process can improve the quality of code review. Developer feedback for a subset of comments from automatic reviews shows that developers agree to fix 93% of all automatically generated comments; only 14.71% of the accepted comments need improvements in terms of priority, comment message, etc. Another problem with tool-assisted code review is the assignment of appropriate reviewers. Review Bot solves this problem by generating reviewer recommendations based on the change history of source code lines. Our experimental results show that the recommendation accuracy is in the range of 60%-92%, which is significantly better than a comparable method based on file change history.

In the debate between human code review and tool-assisted code review, this paper by Balachandran favors tool assistance. The tool in question, Review Bot, an extension of the online code review tool Review Board, automates the checks for coding standard violations and defect patterns that turn up regularly in approved code reviews. In other words, let the tool sweat the small stuff while developers focus on verifying the logic of the code. However, for a tool like Review Bot to be useful, it must be used. A paper we reviewed a few days ago found that developers were unwilling to adopt an analysis tool. This paper finds the opposite: with Review Bot, developers acted on 93% of the auto-review comments.
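To make the idea concrete, here is a minimal sketch (not Balachandran's implementation) of how findings from multiple static analysis tools could be merged into per-line review comments. The two analyzer functions are stand-ins that return pre-parsed findings; a real integration would invoke tools such as Checkstyle or PMD and parse their reports.

```python
# Sketch: merge findings from several static analyzers into one
# automatic review comment per source line. Analyzer wrappers here
# are hypothetical placeholders returning canned (line, message) pairs.
from collections import defaultdict

def checkstyle_findings(path):
    # Hypothetical pre-parsed analyzer output.
    return [(12, "Line longer than 120 characters"),
            (40, "Missing Javadoc comment")]

def pmd_findings(path):
    return [(12, "Avoid long lines"),
            (57, "Empty catch block")]

def merge_findings(path, analyzers):
    """Group findings from all analyzers by line so that each source
    line receives a single automatic review comment."""
    by_line = defaultdict(list)
    for analyzer in analyzers:
        for line, message in analyzer(path):
            by_line[line].append(message)
    # One comment per line, messages joined in analyzer order.
    return {line: "; ".join(msgs) for line, msgs in sorted(by_line.items())}

comments = merge_findings("Foo.java", [checkstyle_findings, pmd_findings])
for line, text in comments.items():
    print(f"Foo.java:{line}: {text}")
```

The grouping step matters: without it, overlapping analyzers would flood a review with near-duplicate comments on the same line, which is exactly the kind of noise that makes developers ignore such tools.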

This paper does not stop there: Review Bot also improves reviewer assignment for large projects. Using line change history to rank reviewers outperforms the commonly used file change history. However, it shares a problem with manual reviewer assignment: it cannot be used on new files, which have no change history. Balachandran leaves this problem for future work and proposes a syntactic similarity method as a possible solution. It will be interesting to see if his idea works.
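The line-history idea can be sketched as follows, under assumptions of my own rather than the paper's exact scoring: given blame-style data mapping each line to its last author, reviewers who previously modified the specific lines touched by a change are ranked above those who merely changed the same files.

```python
# Sketch: rank reviewer candidates by line change history.
# BLAME is hypothetical blame data: file -> {line_number: last_author}.
from collections import Counter

BLAME = {
    "server.py": {10: "alice", 11: "alice", 12: "bob", 30: "carol"},
}

def recommend_reviewers(changed_lines, blame, top_n=2):
    """changed_lines maps file -> list of line numbers touched by the
    new change. Returns candidates ranked by how many of those lines
    they last modified."""
    scores = Counter()
    for path, lines in changed_lines.items():
        history = blame.get(path, {})
        for line in lines:
            author = history.get(line)
            if author:
                scores[author] += 1
    return [name for name, _ in scores.most_common(top_n)]

# alice last touched two of the three changed lines, bob one,
# so alice ranks first.
print(recommend_reviewers({"server.py": [10, 11, 12]}, BLAME))
```

This also makes the new-file limitation obvious: a file absent from the blame data contributes no scores at all, so every candidate ties at zero and the ranking is uninformative.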
