Cédric Teyton for Packmind

Originally published at packmind.com

How to accelerate your code reviews?

Code reviews through Pull/Merge Requests have become an essential process for most tech companies: a 2022 study revealed that 84% of the surveyed companies use them. Several years of practice have allowed practitioners to step back from this methodology, identify its pain points, and work out how to overcome them. Besides, code reviews should be continuously evaluated internally to improve how they're done. In this post, we discuss how to add more flexibility to the Pull Request process and thus improve delivery time.

The lead time for change is the amount of time it takes for a commit to reach production, and is one of the four key metrics defined by DevOps Research and Assessment (DORA). Engineering teams seek to shorten this delay to deliver value to their customers sooner. But before proposing solutions, let's first define the problem:
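To make the metric concrete, here is a minimal sketch of how lead time for change could be computed from commit and deployment timestamps. The function names and sample data are hypothetical, not part of any DORA tooling:

```python
from datetime import datetime, timedelta

def lead_time_for_change(commit_time: datetime, deploy_time: datetime) -> timedelta:
    """Lead time for change: elapsed time between a commit and its release to production."""
    return deploy_time - commit_time

def median_lead_time(deltas):
    """Median lead time across several changes, a common way to report the metric."""
    ordered = sorted(deltas)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Three hypothetical changes: committed, then deployed some hours later.
samples = [
    lead_time_for_change(datetime(2023, 3, 1, 9, 0), datetime(2023, 3, 1, 17, 0)),   # 8 hours
    lead_time_for_change(datetime(2023, 3, 2, 10, 0), datetime(2023, 3, 3, 10, 0)),  # 24 hours
    lead_time_for_change(datetime(2023, 3, 3, 8, 0), datetime(2023, 3, 3, 20, 0)),   # 12 hours
]
print(median_lead_time(samples))  # 12:00:00
```

Every hour shaved off review wait times shows up directly in this number.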

During code reviews, what can slow down the whole process, create bottlenecks, and grow the lead time for change?

Packmind's CPO Arthur Magne recently discussed this topic with Freddy Mallet and Marcelo Sousa from Reviewpad during a webinar, and we summarized the main thoughts here.

Narrowing the context and purpose of code reviews

The most frequently cited obstacles to code reviews are a lack of human resources and an engineering workload that leaves too little time for reviewing. A common reaction is to add more reviewers to address the issue. Not so fast!

First ask yourself:

Is each code review truly necessary? Are all the discussions within reviews relevant, and should they happen there?

Indeed, what we observe in practice is that code reviews often lack a clear context and purpose. If you've opened a merge/pull request, what do you expect from it? From our point of view, a Pull Request serves one of two primary purposes.

#1 Code approval

I need to ensure my code is okay before merging ⇒ I need your approval

In this case, friction can arise on both sides, as reviews come late in the process. Authors can experience reviews as a constraint that slows down their delivery, and feel frustrated to receive many comments they won't always agree with. Reviewers can also adopt a defensive posture. What if they think: "I wouldn't have done [the code] this way. Should I say so? Should it block the merge? My colleague has been waiting for two days; how will my feedback be received?" So friction can emerge from code reviews, mainly because of the pressure induced by the quest for a satisfying lead time for change.

#2 Get help to improve the code

I’d like to show you my code and get your feedback to improve it ⇒ I need your help.

Compared with purpose #1, here we're looking to improve our code and learn from a peer. In this context, it's worth asking whether the review needs to be part of the delivery process at all: we may already be confident the code can be merged first and reviewed afterwards. That's commonly observed in trunk-based development.

The workflows and policies are different based on the context of the PR, so it makes sense to define it clearly.

Another problem we observe during code reviews is long discussion threads about implementation choices, design, and best coding practices. These discussions get emotional and passionate, which is precisely what we want to avoid during reviews.

Do we still need code reviews?

Aside from this discussion, you may ask whether doing code reviews still makes sense in 2023, in an era where plenty of "DevOps automation" tools are available on the market.

We consider that yes, code reviews are still necessary. Even though automated tools have become increasingly powerful, they're good at detecting known problems; they can't prove the absence of problems. Think about this Dijkstra quote:

“Program testing can be used to show the presence of bugs, but never to show their absence!”

So during a Pull Request, we look for what tools can't detect.

Now let’s go back to our main point: how to address the aforementioned issues?

Do code reviews only when it makes sense

The concept of Fluid Pull Request

To answer the question "Do all Pull Requests require a code review?", we can explore the Ship/Show/Ask model. In this approach, PR authors are trusted to decide for themselves whether:

  • The code doesn't require a review, since the changes are trivial (Ship)
  • They'd like to show the code changes to colleagues and get their feedback (Show); this mode can feel counterintuitive, as the feedback can be provided after the merge, and that's part of the paradigm shift
  • The code requires a review before merging (Ask)
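The triage above can be sketched as a small decision function. This is only an illustration of the model; the inputs an author actually weighs (risk, confidence, desire for feedback) are judgment calls, not booleans:

```python
def triage_pull_request(trivial: bool, wants_feedback: bool) -> str:
    """Ship/Show/Ask triage: the PR author chooses the review mode."""
    if trivial:
        return "ship"  # merge directly, no review needed
    if wants_feedback:
        return "show"  # merge, then gather feedback asynchronously
    return "ask"       # block on an approval before merging

print(triage_pull_request(trivial=False, wants_feedback=True))  # show
```

The key point is that the default branch is no longer always "ask": only changes that genuinely need approval pay the waiting cost.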

So instead of adding more human resources to review the code, we reconsider whether each PR really needs to be approved; this is how we can positively impact the lead time for change. This concept is also called the Fluid Pull Request.

Next generation pull requests with Reviewpad

Reviewpad is a GitHub app designed to automate your code review process, and it implements the Fluid Pull Request concept. A single configuration file in your Git repository contains all your workflows and policies for your Pull Requests. Based on the context, you define rules to tighten or relax your safety net. With Reviewpad, you can define rules such as:

  • If a file in this folder is edited, one person from this group should approve the code.
  • If the PR adds a new method annotated with @Critical, add a 'security' label and require a double validation.

Rules can decide whether PRs can be automatically merged or refused, define a validation workflow, and more. Rules can address syntactic patterns in the code changes.

Here is an example workflow that automatically raises awareness when a critical change is introduced:

```yaml
api-version: reviewpad.com/v3.x

labels:
  critical:
    description: Modifications to critical changes
    color: "#294b75"

groups:
  - name: owners
    description: Group of owners
    kind: developers
    spec: '["marcelosousa", "ferreiratiago"]'

workflows:
  - name: changes-to-critical-code
    always-run: true
    if:
      - rule: '$hasAnnotation("critical")'
      - rule: '$hasFileName("runner.go")'
    then:
      - '$addLabel("critical")'
      - '$assignReviewer($group("owners"), 1)'
      - '$info("@marcelosousa: you are being notified because critical code was modified")'
```
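The folder-based approval rule from the bullet list above could be sketched in the same style. This is a hypothetical configuration: the group name and path are invented, and `$hasFilePattern` is an assumption on our part, so check the Reviewpad documentation for the exact built-in before using it:

```yaml
api-version: reviewpad.com/v3.x

groups:
  - name: payments-team
    description: Reviewers responsible for the payments module
    kind: developers
    spec: '["alice", "bob"]'

workflows:
  - name: guard-payments-folder
    always-run: true
    if:
      # Assumed built-in: matches PRs touching files under this folder.
      - rule: '$hasFilePattern("src/payments/**")'
    then:
      - '$addLabel("needs-team-approval")'
      - '$assignReviewer($group("payments-team"), 1)'
```

PRs that don't touch the guarded folder fall through this workflow untouched, which is exactly the "relax the safety net when context allows" idea.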

Discuss best practices and design outside code reviews

Knowledge sharing should be collective

If you request a review to improve your code or to show it, this shouldn't happen under the pressure of an imminent merge. Engineers should instead find dedicated time to discuss code design and best practices, away from that friction. If you're debating a best practice in a 1:1 conversation, you're holding a discussion that may impact your whole team: if you reach a decision, shouldn't everyone agree with it? We can assume you seek code uniformity and alignment on practices.

So maybe it's time to consider running dedicated workshops on your best coding practices, as 1:1 conversations are not the right place for that purpose.

Best practices sharing with Packmind

Packmind is a knowledge-sharing platform for developers that integrates with IDEs and code review tools (GitHub, GitLab, Azure DevOps, Bitbucket, and Helix Swarm). With the code review plugins, you can create a best coding practice from a comment: the code is sent to Packmind along with your practice proposal, and you avoid long 1:1 discussions during code reviews.


Packmind then helps your team run regular, dedicated workshops to review contributions from developers (made during code reviews or within the IDE). It's a suitable time to share knowledge, make technical decisions together, and ensure everyone is aligned on the right patterns to apply in the code. This creates a continuous improvement process for your best coding practices and grows developers' skills.

Want to accelerate your code reviews?

If you feel your engineering team has bottlenecks in its code reviews, we hope this post gave you some insights to improve your process:

  • Consider the necessity to approve each code change;
  • Consider dedicated sessions to avoid never-ending discussions during reviews.

We also showed that Reviewpad and Packmind can complement each other to accelerate code reviews while fostering knowledge sharing.

Tell us if you've improved your lead time for change in your context and how you did it; we'd be happy to learn!
