DEV Community


Should my small team really, really, really do code reviews?

aurelio profile image Aurelio Originally published at aurelio.me ・6 min read

While nearly everyone agrees on paper that code reviews are "a good thing" and everyone should do them, reality often tells a different story.

Big teams and companies pretty much all have a code review process in place. Small startups, on the other hand, often skip this step, especially in the early stages of their life.


Note: If you're short on time, feel free to skip to the last section for a quick list of to-dos.
(And by the way, the answer to the question in the title is yes)


When you shouldn't do code reviews

Or rather, how people justify their team skipping reviews.

  • Code reviews slow us down
  • We're all senior developers, we trust each other
  • We will start code reviews as soon as it makes sense
  • We're only 2 developers, why should we?
  • It's a quick change, it's ok

If you recognise yourself in one or more of the above statements, I hope this post will convince you to reconsider.

Let's talk about code quality

Code reviews are one of the best ways to foster accountability and "good quality code". If done right, they can be better than any linting tool, code style guide or automated process.

Having someone proofread what we type helps us keep our standards higher. Even a seasoned developer can slip sloppy code into the codebase when tired, be it consciously or by mistake.
But code quality comes at a cost, and the money used to pay for it is one of the most valuable currencies for a startup: time.

It seems commonly accepted that doing things right is slower than doing them badly.
Martin Fowler talks about this in his article "Is High Quality Software Worth the Cost?". He mentions something interesting for us:

Developers find poor quality code significantly slows them down within a few weeks [emphasis mine]. So there's not much runway where the trade-off between internal quality and cost applies.

[Chart from Fowler's article: code quality increases velocity]

If we agree with Fowler's take, we can see that "good code" pays off very quickly, even at very early stages.
Higher quality means higher velocity, and this is true earlier than many seem to realise.

While the first lines of a new codebase are often seen as something temporary ("it's only a prototype after all"), the reality is that those lines will influence the speed of development very early on. In a few weeks this advantage will very likely be noticeable.

What to review

When it comes to code reviews, the question sometimes arises of what exactly to review. From what we've seen so far, my suggestion should be clear: every new piece of functionality or any meaningful change should go through code review from day 1.
But what about very small changes? What about a quick readme update?

I keep it simple: code review absolutely everything. Every single PR should be peer reviewed. And yes, this "unfortunately" means no pushing straight to master, no "this is a quick one" PRs that go in unreviewed.

Sounds harsh? Maybe. But it's very hard to draw a line between what is a meaningful change and what is a trivial one. Is a 200-line indentation tweak worth more or less than a one-line change in the models of your application? Even a readme update can contain misleading information or be based on wrong assumptions. And if we allow very small changes to skip review, what's the threshold that divides what's to be reviewed from what's not? How do you explain it to the next dev joining?

If it's a quick change, it will also be very quick to review.

But, but... this slows me down!

I hear you. Yes. You're pushing a one-liner and I'm telling you that you need to wait two hours, or even a day, for someone else to review it.
I agree, this is unacceptable. And that's why prompt reviews should be a top concern of your team's processes.

We're in a small startup and speed is everything. Well, it turns out, speed is everything at massive scales, too.

At Google, we optimize for the speed at which a team of developers can produce a product together, as opposed to optimizing for the speed at which an individual developer can write code

-- From Google Engineering Practices Documentation

Here are the guidelines from Google:

  • If you are not in the middle of a focused task, you should do a code review shortly after it comes in.
  • One business day is the maximum time it should take to respond to a code review request (i.e. first thing the next morning).
  • Following these guidelines means that a typical PR should get multiple rounds of review (if needed) within a single day.

Your startup is fast, your reviewers should be too.
If you're looking for a nifty tool to automatically track how fast your team is, I suggest Velocity.

How thorough should your reviews be?

Reviews don't mean quickly scrolling through dozens of lines of code and pressing "approve", perhaps adding a thumbs up for a human touch.
It's very important to actually read and understand the code. If we are always very permissive we will get sloppy and just let anything go in. If, on the contrary, you make a habit of reading critically and making an effort to understand what's going on, it will become normal to ask questions and request clarifications on many PRs.

This will make commenting part of the daily routine of the company. When the time comes to be more critical of an approach taken in the code, it will be less likely to cause a stir or become a source of contention.

Reading the code also means that knowledge spreads faster within the team. Code reviews are not just about code quality, they are also about knowing what's happening around the codebase while you're not looking.

Read the code, and understand it. If you don't, ask questions.

Who should be assigned to reviews?

In super small teams this is a non-issue: you don't have much of a choice.
When the team grows, the question of who should review might arise.
In my view:

  • Many assignees are better than fewer
  • Aim for at least 2 reviewers, if the team's size allows. Look at how you can make this a policy with GitHub's protected branches
  • Also have junior devs do reviews.
  • I repeat, have junior devs do reviews!

If you want to make assigning people faster, you can use GitHub's CODEOWNERS, which will automatically assign reviewers for you. Open a PR and you're done.
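As a quick sketch, a CODEOWNERS file lives at `.github/CODEOWNERS` and maps path patterns to default reviewers. The usernames and paths below are placeholders, not real accounts:

```
# .github/CODEOWNERS — usernames and paths are illustrative placeholders.
# The last matching pattern wins, like .gitignore.

# Fallback reviewers for anything not matched below
*               @alice @bob

# Changes under app/models/ also request the data-layer owner
/app/models/    @alice

# Docs can go to anyone, including junior devs
*.md            @carol
```

Once this file is on the default branch, opening a PR that touches a matching path automatically requests a review from the listed owners.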

One extra tip: comment your own PRs

Adding comments to a PR should not be limited to other people's PRs: you can add comments to your own as well.
I'm not the only one doing this, and I find it really helpful when someone does it.

You can anticipate questions, explain why you've taken a certain approach, tell the reader that you already have a follow-up PR in mind.
This, again, will make reviews faster.

In short

I hope by now I've convinced you that code reviews are an investment with short- and long-term benefits that even an early-stage startup should consider.
Here's a quick list of action items for the busy ones and a recap for everyone who read so far.

  • Do code reviews from day 1
  • Review every single change, no matter how trivial
  • Require "pull request reviews before merging". See how on GitHub
  • Use codeowners so you don't have to waste time picking reviewers.
  • Strive for fast code review turnarounds. The maximum waiting time should be one day, but preferably it should be measured in minutes. Read how Google does it.
  • You can track your team's speed and impact of code review by using tools like Velocity
  • Be thorough when reviewing and understand what you're reading. Avoid giving a thumbs up to absolutely everything without reading it.

Cover image by Thom Milkovic on Unsplash

Discussion

David Teren

Good post. As you mention Velocity, I assume you have Quality in your pipeline.
It would benefit readers to mention that a build pipeline which fails when checks don't pass can go a long way towards minimising code-review overhead. It helps if all failures are addressed before requesting a review.
On GitHub, you can use Actions, security advisories, and Dependabot not only to validate the app but also, as mentioned in the post above, to require at least one review before merging is permitted.

Available tooling is language and framework dependent.

For example, Ruby on Rails apps can employ these bonus security checks:

  • Bundler Audit for security vulnerabilities in libs.
  • Brakeman, a static analysis tool that checks Ruby on Rails applications for security.
  • Snyk for security checks.
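To illustrate the point about failing the build before review, here is a minimal GitHub Actions workflow sketch that runs two of these checks on every PR. It assumes a standard Rails setup with `bundler-audit` and `brakeman` in the Gemfile; the workflow and job names are illustrative:

```yaml
# .github/workflows/security.yml — a sketch; names are illustrative
name: security-checks
on: pull_request

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true   # runs bundle install and caches gems
      # Fail the PR if any dependency has a known vulnerability
      - run: bundle exec bundler-audit check --update
      # Fail the PR on static-analysis security warnings
      - run: bundle exec brakeman --no-pager
```

Marking this job as a required status check under branch protection means reviewers only ever look at PRs that have already passed the automated gates.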
Aurelio (Author)

Hi David, you raise good points.

Having automation in place is a big win. We do use Code Climate's Quality, as you guessed, along with the other tools you mention. They help the developer get rid of errors and inconsistencies so that reviewers can focus on what matters and what only a human can do, i.e. provide contextual feedback on the code.

All these tools are generally trivial to set up in a pipeline and provide great advantages (most of them at little to no cost per month) so that there's really no point in working without them. They offer invaluable help in lowering the margin for errors and increasing speed of delivery.

I would say this is even worth a dedicated post. Thanks for the comment, I really appreciate it.