Arnout Boks for Moxio

Review Roulette: Everyone is a winner!

In an earlier blog post I introduced our idea of Review Roulette, a process of randomized code reviews aimed at fostering learning and increasing collective code ownership. I explained that we would try this idea out as an experiment for two months and evaluate it afterwards. I also promised to share the results of that evaluation. In this post I will do so, and describe the steps we took to make Review Roulette work even better for us.

Evaluate…

For evaluation we asked all participants to fill out a short questionnaire with questions from three different viewpoints:

  • Experiences as reviewer
  • Experiences as reviewee
  • General impressions

The questions focused on the (perceived) usefulness of Review Roulette as a means of finding defects, improving internal quality and sharing knowledge. Additionally, we asked about some aspects we had identified as potential obstacles to a well-functioning process, such as the time required and communication aspects.

From the results we can conclude that, overall, Review Roulette received very positive feedback. 100% of the participants agreed that we should continue the experiment as a permanent process, and the statement “I think that doing Review Roulette is useful” received an average score of 4.75 out of 5. For us, this was sufficient to decide to continue doing Review Roulette, which we still do to this day. Based on the feedback, we did make some small adaptations, though.

…and adjust

In the questionnaire results we noticed that the statement “The commits I got assigned were useful to review” received an average score of only 3.38 out of 5, with 63% of participants scoring it a 3 or lower. This was mainly caused by commits with trivial one-line textual changes being randomly selected for review. To improve on this, we added a filter that only marks commits with at least 3 changed lines in relevant source files (php, js, css, json, etc.) as eligible for review. Implementing such a filter was fairly straightforward using diffstat to analyze the commits. This should weed out most trivial changes, leaving only the more interesting commits for review. As a side effect, this also fixes the issue where an imported binary file was put up for review.
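The post does not show the filter itself, but a minimal sketch of such an eligibility check could look like the Python below. It uses git's --numstat output instead of diffstat (the per-file line counts are equivalent); the extension list, threshold and function name are illustrative assumptions rather than our actual implementation.

    import subprocess

    # Illustrative values based on the rules described above.
    RELEVANT_EXTENSIONS = (".php", ".js", ".css", ".json")
    MIN_CHANGED_LINES = 3

    def is_eligible_for_review(commit_hash: str) -> bool:
        """Check whether a commit touches enough lines in relevant source files."""
        # "git show --numstat" prints "<added>\t<deleted>\t<path>" per changed file;
        # "--format=" suppresses the commit message itself.
        output = subprocess.run(
            ["git", "show", "--numstat", "--format=", commit_hash],
            capture_output=True, text=True, check=True,
        ).stdout
        changed_lines = 0
        for line in output.splitlines():
            if not line.strip():
                continue
            added, deleted, path = line.split("\t", 2)
            if not path.endswith(RELEVANT_EXTENSIONS):
                continue
            if added == "-" or deleted == "-":
                # Binary files are reported as "-"; skipping them also keeps
                # binary imports out of the review pool.
                continue
            changed_lines += int(added) + int(deleted)
        return changed_lines >= MIN_CHANGED_LINES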

Another obstacle that we identified was the time needed for doing Review Roulette. Although we had agreed on a time limit of one hour per week when starting the experiment, the statement “I was able to find sufficient time to review the commits assigned to me” only scored 3 out of 5 on average, with 63% of participants scoring it a 3 or lower. This was mainly influenced by our relatively large number of student employees, who work 1 or 2 days a week alongside their studies. While one hour a week may be almost negligible in a full-time work week, it is a significant time investment when your work week is only 8–16 hours. Based on this feedback, we introduced a bi-weekly schedule for part-timers (only assigning them a review once every two weeks), while keeping the full-time employees on the original weekly schedule. This ensures that we can still benefit from bi-directional knowledge sharing with our student employees, while limiting the amount of time they need to spend on it.
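To illustrate how such a split schedule could work, this sketch (with hypothetical names and data structures, not our actual tooling) determines who should get an assignment in a given week:

    from datetime import date

    def reviewer_pool(employees, today=None):
        """Full-timers are assigned a review every week; part-timers only
        every other (even) ISO week. 'employees' is assumed to be a list of
        dicts with a 'full_time' flag."""
        today = today or date.today()
        even_week = today.isocalendar()[1] % 2 == 0
        return [e for e in employees if e["full_time"] or even_week]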

One last observation

An interesting observation from the questionnaire results was that the perceived usefulness of Review Roulette was higher from the reviewee's perspective than from the reviewer's, both in terms of bugs found (3.38 vs. 2.63 out of 5) and improved design (4.50 vs. 3.13 out of 5). The lesson to draw from this is that we must not underestimate the value of the feedback that we give to others. Things that may seem trivial or charted territory to ourselves may be entirely new to colleagues. This is exactly the type of knowledge sharing that we aim to amplify with Review Roulette.

Conclusion

This experiment and evaluation bear a lot of similarities to agile software development: it is often useful to start with a small and simple experiment, then evaluate and adjust. We have been using Review Roulette with the improvements mentioned above for about four months now, and are very happy with the added value it brings.

Originally published at www.moxio.com on March 30, 2017.

