
Peter Harrison


Git tools undermining Code Review

In recent years I have seen the adoption of online tools such as GitHub, GitLab and other git tools to facilitate code reviews. But what do I mean by 'code review'? Over the last three or four years I have seen a concerning approach to code review: a developer eyeballs a pull request (PR) in a git tool and approves it without communicating with the author or actively testing the change.

Reviewers are not pulling the code or running it locally to ensure the functionality actually performs as expected. They don't interact with the author directly, instead just leaving comments or approving the PR. Online tools typically show only the fragment of code under change rather than the entire file, so the scope and visibility of the context is very limited.

What is code review for?

Before getting deeper, let's talk about what code review is. Why do it at all? Is it a last-minute syntax-checking and bug-finding exercise?

It should be obvious that linters will check formatting and compilers will find obvious syntax issues. Unit tests, to the extent they exist, will protect you from regressions. Understanding code review as simply eyeballing the code changes for obvious issues is therefore narrow and potentially pointless.

Code review exists to ensure quality processes are followed. It should not just be a check at the end when raising a PR. Ideally you should be discussing the intended implementation with other team members for any non-trivial functionality before and during development. It's too late once you are at the PR stage to get meaningful input from other developers. Pair programming takes this idea to the extreme, putting two developers at the keyboard in a kind of continuous code review.

The role of a code review is multi-faceted: while it is about code quality, it is also about knowledge transfer, team building, mentoring, and building relationships within the team. It is an opportunity for senior members to educate new team members by example. It socializes development norms among the team so everyone is on the same page.

Reviews are also about ensuring each PR answers some basic questions. What is the purpose of the change? Has it been run and the functionality verified? Have unit tests been written or modified? Is the code readable? A review can also consider whether the change needs to be made in multiple places, or whether there are opportunities for refactoring.

The Cursory Review Antipattern

Cursory code review of PRs, without any real-time dialogue between developers, kills close in-person teamwork and collaboration. By cursory I mean reviewing without pulling the PR and testing the running application locally to verify it works, and without inspecting the code in its totality and in context.

Because I worked from home for five years I had to overcome barriers to in-person code review. Our team used video conferences to sit down one on one to review code and discuss it in much the same way we would have in previous co-located teams. There can be frustrations, and not everyone is comfortable or confident with this approach, but it did allow us to continue the kind of team building and mentoring we had previously achieved in person.

When I joined other teams that did not do in-person reviews, depending instead on PR reviews without real-time collaboration with the author, the loss in terms of teamwork and mentoring was painfully evident. So too were the quality issues that flowed from failing to pull and test applications locally to check functionality.

Not just Compliance

Code review might be seen as a compliance task imposed from outside rather than something of genuine value. If it is treated as just a compliance task it probably isn't effective, because developers have not bought into its core purpose of quality assurance.

For code reviews to provide any value they need to catch the real drivers of poor quality: code with no unit tests, code which isn't actually functional, or code so badly written it is impossible to maintain. Review is part of the way we as developers take responsibility for quality.

Counter Arguments?

Some might say that teamwork and mentoring are not the primary points of code review, and that the team should do mentoring and collaboration anyway. This is totally true. The only problem is that developers do what they are incentivized to do, and if they are not given scheduled opportunities for mentoring and collaboration, as is often the case, they won't do them.

In-person code reviews build in opportunities for mentoring and collaboration without needing explicit programs and meetings to do so. The virtual review practices common in today's development projects have undermined the positive feedback cycles that came from the close, effective teams of the past.

You might argue that stringent, effective code review is implemented where you work. Excellent news. I'm not saying poor reviews are universal when online tools are used. My point is that there are enough examples in my own experience of cursory code review in a git tool that it must be common enough to be a problem more generally. I might be wrong; perhaps my experience is not representative of the general case.

But even then I think the tools are encouraging this approach, perhaps even creating the belief that reviewing code in these tools is the correct way to do review. That discourages in-person code review, which has many second-order teamwork benefits.

The Larger Issue

The dreaded virus drove teams who had not previously worked in distributed environments into isolated home environments. Having worked in such an environment for years, I had already adapted my own approach to focus on human communication. While we obviously used git, PRs and the associated tools, they were not the only tools.

We were happy to watch the code run on the author's machine in a conference call to demonstrate that it worked, and to use screen sharing to review code together. The important thing was to ensure not only that the unit tests passed, but that the code functioned as expected. That is more than reviewing a diff.

This article is not an attack on git or its tools. It challenges the idea that reviews should be done alone, viewing changes through an online diff viewer, without the author being able to defend their implementation or learn from the reviewer. It pushes back against the idea that remote working makes this kind of review best practice.

I feel I may be opening myself up to attack from those who use exactly this kind of approach and consider it best practice. Is it possible for a team to run code review the way I'm saying you shouldn't and still have high quality? Maybe. But ask yourself what you are missing. If you compensate with effective team communication it may not be such a problem, but what happens in one area often spills over into others; a preference for working alone during code review may well show up more broadly. So let me know: how are you doing reviews? How is it working? How do you mentor and spread knowledge among the developers in your team?

Top comments (1)

Ben Sinclair

It's a good point.

I definitely feel that. I've worked in places which fully intended to do code reviews (in much the same way they fully intended to write tests...) and eventually passing PRs around on a few projects kind of got seen as "good enough".

And you're totally right, it's not good enough!