DEV Community

Cédric Teyton for Packmind

Posted on • Updated on • Originally published at packmind.com

7 frustrations to avoid with code review best practices

Code review is one of the most established processes in software engineering. Recent studies indicate that almost 90% of IT companies practice it. Since 2022, the emergence of AI technologies has brought new tools to improve the code review experience.

While this maturity has produced some common best practices for code reviews, it has also highlighted several recurring frustrations for developers. At Packmind, we have discussed their code review process with many technical leaders, engineering managers, and developers over several years. We're especially interested in where they want to improve and which pains remain. This post summarizes our findings.

Frustration 1: Spending too much time understanding the context and the code

Lucas submits a Pull Request to Sara. She realizes it contains 800 lines ("whoa!") scattered across 40 files. There are 3 commits with the respective messages: 1/ “Add new feature”, 2/ “Fix bug”, 3/ “Fix bug for good”. The PR description simply says “Add new features on product listing”. “I have no idea what I’m gonna read,” sighs Sara.

What frustrates Sara is the lack of clear context, which forces her to make a much bigger effort to dive into the code and understand it. Its size is also an issue. There is plenty of content on the web about the ideal size of a code review; it depends on the context, and there's no silver bullet. As the submitter, you should preview the Pull Request to build empathy for the reviewer. That person will have a diff-based perception of your changes, which is not the same as your own mental model of them.

As a submitter, as soon as you start working on your code, think about how it will be reviewed. Try to build atomic commits with clear and concise messages, and give the PR a proper description. This context will help the reviewer understand what you’ve achieved and save precious time, avoiding frustrations. If a commit message gets too long, that’s a smell that the commit encompasses too many changes.
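To make this concrete, here is a small Python sketch of a pre-review self-check a submitter could run before requesting a review. All the names and thresholds here are ours and purely illustrative; as said above, there's no universal "ideal size":

```python
from dataclasses import dataclass


@dataclass
class PullRequest:
    changed_lines: int
    changed_files: int
    description: str
    commit_messages: list[str]


def review_warnings(pr: PullRequest,
                    max_lines: int = 400,
                    max_files: int = 20) -> list[str]:
    """Return warnings the submitter could address before asking for a review."""
    warnings = []
    if pr.changed_lines > max_lines:
        warnings.append(f"{pr.changed_lines} changed lines; consider splitting the PR")
    if pr.changed_files > max_files:
        warnings.append(f"{pr.changed_files} files touched; consider splitting the PR")
    if len(pr.description.strip()) < 30:
        warnings.append("PR description is too short to give the reviewer context")
    for msg in pr.commit_messages:
        if len(msg.split()) < 3:  # "Fix bug" tells the reviewer nothing
            warnings.append(f"Commit message '{msg}' is too vague")
    return warnings
```

Running it on Lucas's PR from the story above would flag the size, the number of files, and the “Fix bug” commit message before Sara ever sees the diff.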

Frustration 2: Bikeshedding and debating about standards

Sara reads the review comments made earlier by Sofia. One is about a coding style that “doesn’t change the behavior, but I prefer it like this”. Sara is annoyed because she wrote the code the way many other parts of the software are written, precisely to keep consistency. She tells Sofia, “Both solutions are okay, but maybe it’s better to keep the current way of doing this?”. But Sofia argues it’s a chance to evolve the team's coding practices. Sara gets frustrated because this debate delays the review completion.

Sometimes, don’t you regret that code review tools don’t limit the length of discussion threads? While some reviews might be critical and require deep analysis, most of the time they do not. If the review is a discussion space to improve a piece of code, the scope of the suggestions made should be limited and known by all reviewers. In the scenario above, the problem is that Sofia began a discussion that goes beyond the scope of the review: it’s about a coding standard, and such a decision should involve the whole team.

Comments should target what’s necessary to update in the code before it gets merged. All other discussion topics should be addressed in a dedicated session gathering all developers in the team. That’s the framework suggested by Packmind, whose code review extensions offer a smooth way to convert code review comments into coding practice suggestions; this practice will be discussed and validated (or not) during a workshop.

Frustration 3: Spending time reviewing trivial code

Sara starts reviewing a pull request from John, containing two modified files and fewer than 20 changed lines of code. She’s in the middle of an emergency, but still takes the time to do the review to avoid delaying John. She approves the PR, but wonders why she was asked to review that code. John could have handled it himself!

A rigorous process, right? A solution is to accept that not all code changes deserve the same attention. Some changes are pretty basic and target non-critical parts of the system. Conversely, some changes are more complex, impact the application's core logic, or require strong attention to security concerns. We met several companies with a rigorous code review process, including, for instance, three reviews before validation. But does it make sense to follow this process every time, regardless of the review material?

Depending on the code submitted, we could imagine that developers are free to merge without assigning a reviewer if they consider there’s no risk, and should ask for two reviewers if the code changes are critical. That’s the idea promoted by the Ship / Show / Ask model. With solutions like Reviewpad, you can implement your assignment and merge rules depending on the content of the review.
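A minimal sketch of such a policy, in Python rather than any specific tool's rule syntax (the flags and their weighting are our own illustrative choices, not the canonical Ship / Show / Ask definition):

```python
from enum import Enum


class ReviewMode(Enum):
    SHIP = "merge directly, no review needed"
    SHOW = "merge, then show the change to the team for feedback"
    ASK = "request review(s) before merging"


def choose_mode(touches_core_logic: bool,
                security_sensitive: bool,
                author_is_confident: bool) -> ReviewMode:
    """Toy policy: critical changes get reviewed; the rest ships or is shown."""
    if security_sensitive or touches_core_logic:
        return ReviewMode.ASK
    return ReviewMode.SHIP if author_is_confident else ReviewMode.SHOW
```

Under this toy policy, John's two-file, 20-line change would ship directly, and Sara's attention would be reserved for changes that actually need it.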

Frustration 4: Getting little valuable feedback

Sara carefully prepared the submission: she created isolated commits with clear messages and added detailed context in the PR description. It was the first time she had worked on this part of the software, so she took care to avoid any mistakes and expected constructive feedback. The PR contains about 400 added lines. Stefan took the review but wrote only one comment: “LGTM”. Sara is relieved but expected a bit more material.

Maybe Sara just did the job perfectly, right? Even if it’s true, a good practice during the code review is to provide positive feedback on the code, to highlight what’s been done correctly. People often emphasize what’s “bad” and forget to comment when the work done is fine. This refers to empathy: Sara paid attention to the quality of the submission, and Stefan probably felt that, so he should also show empathy towards her.

To optimize the quality of the feedback, reviewers must allow sufficient time for the review. Otherwise, they’ll produce only shallow and superficial comments, and the submitter will feel that. So it's better not to do the review if you’re in a hurry or not in the right mindset; ask a colleague to replace you this time. Also, adapt your expectations depending on who will do the review (their technical skills, their expertise in the code, …). If the reviewer has little knowledge of the context, it may be best to ask for a second review from a more experienced developer.

Frustration 5: The latency until the review gets completed

Sara opens a pull request at 1:00 pm. She asks her team if someone can review her changes. Bob shows up at 3:40 pm and, after spending 5 minutes reviewing, suggests one minor change. Sara immediately takes 2 minutes to apply it and asks Bob to approve the new version of the code. Bob approves at 5:00 pm, and the code is shipped to production. The whole process lasted 4 hours, but actual work happened for only 7 minutes. Fortunately, only one reviewer was required!

The first step is to focus on the root problem: why is the latency so significant in the process? Do reviewers lack time because they’re overwhelmed? Don’t they pay enough attention to notifications from the code review tool? Or do they dislike reviewing, so they procrastinate as much as possible? The total completion time indeed depends on the availability of the reviewers. There’s no silver bullet here, but this problem must be discussed internally to identify how to improve the process.

Between the PR's creation and its merge, most of the time nothing happens: everyone is waiting for the next step. Unfortunately, that idle time hurts delivery (the lead time for changes). While they wait for a review, developers will be tempted to start another task, leading to context-switching. Practicing mob programming can prevent such latencies. Also, a solution like Axolo offers a Slack application to manage GitHub pull requests and reduce the Pull Request merge time.
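To see how bad the idle-to-active ratio can get, here's a small Python sketch that computes it from timestamped work intervals. The function name and the interval representation are ours, for illustration only:

```python
from datetime import datetime, timedelta


def idle_ratio(work_intervals: list[tuple[datetime, datetime]],
               opened: datetime, merged: datetime) -> float:
    """Fraction of the PR's lifetime spent waiting rather than working.

    work_intervals: (start, end) periods when someone actively
    reviewed or updated the PR.
    """
    active = sum((end - start for start, end in work_intervals), timedelta())
    total = merged - opened
    return 1 - active / total


# Sara's story: opened 1:00 pm, merged 5:00 pm, 7 minutes of actual work.
day = datetime(2023, 1, 1)
ratio = idle_ratio(
    [(day.replace(hour=15, minute=40), day.replace(hour=15, minute=45)),  # Bob reviews
     (day.replace(hour=15, minute=45), day.replace(hour=15, minute=47))],  # Sara fixes
    opened=day.replace(hour=13), merged=day.replace(hour=17))
```

For Sara's PR, the ratio is about 0.97: roughly 97% of the four hours was pure waiting.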

Frustration 6: Getting unpleasant or hurtful comments

The egoless programming principles stand as core concepts for doing great code reviews. One of them stipulates that “You are not your code”. Code review, like any other discussion space in a tech organization, should be respectful among its participants. So we’re all kind to each other, and we formulate constructive comments about how to improve the code. Even if you’re uncomfortable with the code you’re reading, there’s no reason to write something like: “This is nonsense, and this code should be thrown away”. Be constructive instead: “What do you think about this alternative?”. If developers are stressed or feel bad before receiving review comments, your team has a severe problem to solve.

Frustration 7: The source code has obvious bugs

If you review some code and immediately say, “Hey, how could this code possibly work?”, that does not smell like a great start. But you might be wrong and have missed something. So you politely ask the code author: “I’m possibly wrong, but I think this code has an issue and does not have the expected behavior since …”. If the answer is, “Oh yes, my bad, I didn’t actually run it,” I can definitely understand your frustration ;-) This is an apparent lack of rigor and empathy from the code author. To avoid this, set guidelines and rules before submission; for instance: the code must have been tested (manually) and/or covered by unit tests, linters must have been executed, and the CI/CD pipeline must pass.
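Such pre-submission guidelines can even be encoded as a tiny gate, for example in a script or bot run before reviewers are assigned. This Python sketch is a toy illustration; the check names and inputs are our own, and a real setup would pull these facts from your CI system:

```python
def ready_for_review(ran_locally: bool, unit_tests_pass: bool,
                     linter_clean: bool, ci_green: bool) -> tuple[bool, list[str]]:
    """Return (ok, missing_checks) for a submission checklist."""
    checks = {
        "code was actually run (manually or via tests)": ran_locally or unit_tests_pass,
        "linter passed": linter_clean,
        "CI/CD pipeline is green": ci_green,
    }
    missing = [name for name, ok in checks.items() if not ok]
    return (not missing, missing)
```

A reviewer who knows this gate has passed can start from “this code runs” and spend their attention on design and correctness instead.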

Feel free to share with us your main frustrations when doing code reviews; we’d be happy to read them. To go further, we suggest you read our dedicated post on how to accelerate the code review process. Thanks for reading ;)

Top comments (5)

Alvaro

Nice post! Nevertheless, isn't the problem that PRs are used as the default peer review process? Being asynchronous, it generates a lot of waste, and there are better alternatives for collocated teams.
I would say it's better to stop overusing PRs and find the right tool for the job, as Kent Beck recommends in this article.

Cédric Teyton

Thanks @kanekotic for Kent Beck's blog post, very insightful :)

I've watched a talk from Dragan Stepanovic, who is a mob programming advocate and gave nice illustrations of the latency issues:
twitter.com/d_stepanovic/status/13...

Wade Zimmerman • Edited

Great post!

From my experience most problems occur when there is some form of ego involved. When code does not work as expected, it is what it is, something unexpected. Yelling at the world, blaming people, or playing shoulda-coulda-woulda only leaves the code "broken" for longer. Even when there is full test coverage things will "break". Let those who have zero tolerance for bad software live their unhappy lives.

Cesar Aguirre

I've witnessed most of your frustrations too. Personally I started to always (well, most of the time) include a recommendation or example with every comment I left.

I think it's a cognitive bias (we judge the things we know; I can't remember the name), but often the whole discussion revolves around the naming of vars or functions, because that's what we can judge :)

Cédric Teyton

Thanks for sharing this :)
I think you're doing it the right way!