We have discussed best practices and principles. Now it is time to get down to the task and talk practicalities. If you are leading a team and you need your code reviews to be better, more effective, and a better tool for team-building, what can you actually do?
We have the answers.
Does that title sound excessively ambitious? Is Reviewpad finally becoming political? No to both!
What we mean is that you need to produce and maintain a well-documented source of truth to keep every member of your team on the same page. Code reviews are essentially about co-ownership of the final product, and at their core they are a collaborative endeavor. If all members of the team (especially the more opinionated ones) don't agree on some basic principles, this can undermine the whole deal.
Here's what this document should contain (at least):
- Define the team's priorities in clear-cut terms. What do you do in special or urgent cases? Can tests be skipped when the team is in a hurry? Should code reviews supersede other work when deadlines are getting close? What constitutes "a hurry"? What qualifies as "getting close"? Etc.
- Establish a pull request description template. And make sure people use it.
- Establish a code review checklist. And make sure more experienced developers, who probably won't need a checklist, don't fail to keep to its standards.
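As a starting point, a combined pull request template and reviewer checklist might look like the sketch below. The sections and checklist items are our suggestion, not a standard; adapt them to whatever your team's document actually agrees on:

```markdown
## What does this PR do?
<!-- One or two sentences: the problem and the chosen solution. -->

## How was it tested?
<!-- Unit tests, manual steps, screenshots... -->

## Reviewer checklist
- [ ] The change matches the description and the linked issue
- [ ] Tests cover the new behavior
- [ ] No linter warnings or leftover debug code
- [ ] Naming and structure follow the team's guidelines
```

On GitHub, a template like this can be committed as `PULL_REQUEST_TEMPLATE.md` so it pre-fills every new pull request.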
Code reviews are an engaging, interesting, personal experience. They are not an afternoon spent nitpicking for teeny bugs that a linter can easily find.
We have written an article where we detail where and when automation is a must-have, and also some of our thoughts about when you can't rely on it. We recommend going through it, but the bottom line is simple: figure out what kinds of work a machine can do better than you, and let it. Linters, test coverage tools, etc. are great tools that you should use to your advantage.
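To make the division of labor concrete, here is a toy illustration of the kind of mechanical nitpick a machine handles better than a human reviewer. The two checks below are hypothetical examples, not a real linter's rule set:

```python
import re

# Toy pre-review check: catch mechanical nitpicks (trailing whitespace,
# committed debug prints) so human reviewers never have to.
# Both patterns are illustrative; a real linter ships far better rules.
NITPICKS = [
    (re.compile(r"[ \t]+$"), "trailing whitespace"),
    (re.compile(r"^\s*print\("), "leftover debug print"),
]

def lint_lines(lines):
    """Return (line_number, message) pairs for every mechanical issue found."""
    issues = []
    for n, line in enumerate(lines, start=1):
        for pattern, message in NITPICKS:
            if pattern.search(line):
                issues.append((n, message))
    return issues
```

A human arguing about trailing whitespace in a review thread is wasted attention; a check like this runs in CI and frees the reviewer to discuss design.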
We tend to focus much of our attention on how to improve the review itself. We do, however, know very well that the quality of the review depends heavily on how sensible the pull requests are.
Speak to your team, and create clear guidelines: make life easier for the reviewer.
Encourage pull requests of manageable size and clear context, both of which can be defined in the document mentioned earlier.
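A size guideline can even be enforced mechanically. Here is a minimal sketch; the 400-changed-lines budget is a common rule of thumb rather than a standard, and the function name is our own invention:

```python
# Sketch: flag pull requests that exceed a team-agreed size budget.
# 400 total changed lines is an assumed threshold, not a standard.
MAX_CHANGED_LINES = 400

def pr_is_reviewable(files_changed: dict[str, int]) -> bool:
    """files_changed maps file path -> lines added plus lines removed."""
    return sum(files_changed.values()) <= MAX_CHANGED_LINES

# Usage: a 200-line PR passes, a 900-line PR gets flagged for splitting.
ok = pr_is_reviewable({"api/handlers.py": 120, "tests/test_handlers.py": 80})
too_big = pr_is_reviewable({"core/engine.py": 900})
```

Wiring a check like this into CI turns "keep PRs small" from a plea into a property of the process.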
This one is self-explanatory, but it must be said: everyone needs to be doing reviews. There's no advantage in overworking your most senior teammates (which happens too frequently), and there's a lot of learning and improvement to lose by keeping juniors away from them.
Be sensible in how you assign code reviews, of course, but share the wealth!
Avoid rubber stamp reviews as much as possible. Code reviews are a process of continuous improvement, and every bit of work you do should reflect that spirit. If there are in fact no mistakes, and the code is absolutely pristine, share this achievement with the rest of the team.
Team spirit is fundamental. Co-ownership is key. One of the reasons why code reviews sometimes fail, and create rifts among teams, is that people are working towards their own views and goals about the sprint/product instead of agreeing on collective goals.
Let your team control the review process. Make the establishment of goals a collective process, and have everyone commit to it. This will naturally lead to the kind of comments that should be encouraged (positive, constructive, actionable) instead of the kind that should be discouraged (negative, damaging, empty).
To empower your team is to make them responsible.
We may be a little biased, but the fact is that we use Reviewpad to do our reviews while we build it, and we have been doing so from very early on.