This is one of the most controversial aspects of code reviews. People just don't seem to agree on how long one should spend on a code review, how long the ideal review is, or even whether you should time them at all.
The underlying thought is valid: code reviews depend on our ability to remain focused and detect minor flaws. It's not an activity you can perform well for very long. Tiredness and distraction will destroy your productivity.
So how do we optimise this? Should we measure time? Set targets, such as the number of lines of code reviewed? Trust our body and perceptions?
We have some thoughts ourselves, but we'll leave those for the end of the article. In the meantime, let's look at the best arguments.
Code reviews should take a fixed amount of time
This is one of the more common suggestions. The idea is that, whether or not you measure how much code you actually get through (which depends heavily on the code's quality), you should focus on time, so you know you aren't doing too much and reviewing while tired.
Common answers range from 60 minutes to two hours, and it is generally agreed that anything exceeding two hours is too much and requires taking breaks.
Not everyone emphasizes fixed amounts, however.
Code reviews should be limited by space, not time
This is the second most popular suggestion. The argument is that time isn't the best way to limit reviewing; the number of lines of code checked is.
The idea is that different reviewers with different skill sets are likely to take varying amounts of time to get the same work done.
This makes time-limited sessions an unreliable predictor of when the job will be done.
Proponents of best practices usually recommend reviewing around 300 to 400 lines of code in one sitting.
Cognitive loads vary, so you should do both
This is by far the most common answer out there.
To put it quite simply, different people in different teams have varying degrees of tolerance for cognitive load. Some can work for longer hours, some can deal with more work.
No one is superhuman, though, and we shouldn't expect that.
The ideal solution, if you are doing both, is to set limits in terms of both time and space and stop when you hit whichever limit comes first.
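As a sketch, the dual-limit rule above can be expressed in a few lines of code. The specific thresholds here are assumptions, taken from the ranges discussed earlier in this article, not fixed recommendations:

```python
# Sketch of the dual-limit rule: stop a review session once you hit
# EITHER the time limit or the lines-of-code limit, whichever comes first.
# The thresholds are illustrative, drawn from the ranges discussed above.

MAX_MINUTES = 60   # lower end of the common 60-120 minute suggestion
MAX_LINES = 400    # upper end of the common 300-400 LOC suggestion

def should_stop(minutes_elapsed: float, lines_reviewed: int) -> bool:
    """Return True once either limit is reached."""
    return minutes_elapsed >= MAX_MINUTES or lines_reviewed >= MAX_LINES

# 45 minutes in but 410 lines reviewed: the line limit ends the session.
print(should_stop(45, 410))  # True
# 30 minutes and 200 lines: neither limit reached, keep going.
print(should_stop(30, 200))  # False
```

The point of combining both limits is that either one alone can mislead: a fast reviewer hits the line limit first, a careful reviewer working through dense code hits the time limit first.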
Our suggestion: figure out what works for you
When recommendations for best practices are this divisive, odds are different strokes will work for different folks.
This is not a bad thing.
We do believe that best practices for code reviews will be better defined in the future (and we definitely hope to contribute to that conversation), but some aspects of it will always be up to the characteristics of the team.
Here's our recommendation, which we've included in our Best Practices article:
Time your own code reviews.
Figure out how long it usually takes you to get through your work, and be mindful of your own limitations. Once you've understood how long it usually takes you to get through the amount of work you're comfortable with, you will learn a number of important things, such as:
- Which part of the codebase takes you the longest;
- How long it typically takes you to handle specific issues;
- When any given review is taking longer than usual, which can mean there are problems that need fixing elsewhere in the development process;
- When you will need to compartmentalise and divide the work.
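To make the list above concrete, here is a minimal sketch of what timing your own reviews might look like in practice: log each session, compute a per-area average, and flag reviews that run well over your usual pace. The area names, durations, and the 1.5x "unusually long" factor are purely illustrative assumptions:

```python
# Minimal sketch of tracking your own review times: log each session,
# then compute a per-area average so unusually long reviews stand out.
# All data, area names, and the 1.5x factor are illustrative assumptions.
from statistics import mean

sessions = [
    {"area": "auth",    "minutes": 40},
    {"area": "auth",    "minutes": 50},
    {"area": "billing", "minutes": 90},
    {"area": "billing", "minutes": 95},
]

def average_by_area(log):
    """Average review duration for each part of the codebase."""
    areas = {}
    for s in log:
        areas.setdefault(s["area"], []).append(s["minutes"])
    return {area: mean(times) for area, times in areas.items()}

def is_unusually_long(log, area, minutes, factor=1.5):
    """Flag a review taking over 1.5x your average for that area."""
    return minutes > factor * average_by_area(log)[area]

print(average_by_area(sessions))                # {'auth': 45, 'billing': 92.5}
print(is_unusually_long(sessions, "auth", 75))  # True: 75 > 1.5 * 45
```

Even a log this simple answers the questions above: which areas take longest, what your typical pace is, and when a review is dragging enough to suggest a problem elsewhere in the process.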
This is part of a larger point about code review best practices, which is all about acquiring metrics that make your work easier.
Something else that makes your work easier is having the correct tools for the job.
Which, incidentally, is what we make.