In software engineering, a pull request is a request from a contributor for a repository owner to merge their changes into the main codebase.
The PR is important: it acts as a crucial safety buffer when developing software. It gives both manual validation (e.g. code reviews) and automated checks (e.g. unit tests) a chance to take place, and it provides a forum for discussion between the repository's maintainers.
Once the code has been approved, it can be assumed safe to merge in, and will eventually become part of the next release.
This safeguard should result in the main branch of code remaining in a production-ready state, all of the time.
When looking at a pull request, the reviewer is addressing two concerns:
- Is the code of a high quality?
- Does this implementation satisfy the feature's requirements?
The first criterion is, for the most part, a matter of opinion. We can attempt to standardise code quality through the use of linters and code coverage tools, but the final say falls to the reviewer.
The second is much more significant. A failure to meet requirements may result in a bad product, which affects the user experience. In the interest of business objectives, this is something that we absolutely need to minimise.
For distributed software teams, this poses a unique challenge.
Typically, the reviewer of a pull request will not have had any involvement in the specification, design or implementation of the feature.
As a matter of fact, software teams may find it beneficial for the reviewer to be completely out-of-the-loop. Their opinion is more likely to be unbiased if they haven't contributed to the delivery of the work item.
This creates a problem. The reviewer needs to know what they're looking for, but they don't have much information. If they approve of an unsatisfactory implementation, is it really fair to blame them? The responsibility should clearly be elsewhere.
The context of the implementation needs to be communicated, so that a reviewer can conduct an unbiased review yet remain an effective gatekeeper.
In our team, we have found success in writing manual acceptance criteria.
This is a formalised checklist, verified by a human, which is marked alongside a code review.
The checklist contains unambiguous steps to follow. This includes any required test data, as well as the expected result of the test.
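As an illustration, such a checklist might look like the following. The feature, test user, and discount codes here are entirely hypothetical, not from any real project:

```markdown
## Acceptance Criteria

- [ ] Log in as the test user `qa-user@example.com` (credentials in the team vault).
- [ ] Add the product "Blue T-Shirt" to the basket.
- [ ] Apply the discount code `SAVE10`.
- [ ] **Expected:** the basket total is reduced by 10% and a confirmation banner appears.
- [ ] Apply an unknown code, e.g. `BOGUS`.
- [ ] **Expected:** the total is unchanged and an error message is shown.
```

Markdown task lists render as tickable checkboxes in most code review tools, so the reviewer can mark each step off directly in the PR description.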
Why has this been effective?
- We can test the feature with much more clarity on which behaviours we're testing. This results in faster reviews.
- We have certainty that satisfying the acceptance criteria will satisfy the work item's requirements. Unsatisfactory code is less likely to be released.
- The responsibility of identifying (but not testing) behaviours falls on the contributor, who has the most context on what is being implemented.
For the sake of visibility, I like to supplement the review with a screenshot for each test.
After all, the success of the review ties into the success of the implementation. So, the contributor may appreciate seeing their hard work validated!
It's a good time to note that acceptance criteria aren't a foolproof solution to code validation.
The contributor needs to write a sensible checklist: one that covers the full scope of the feature, including edge cases, without containing so many tests that checking them becomes a laborious process.
Therefore, you should still endeavour to validate all behaviours of your code through unit and integration tests. The acceptance criteria can be thought of as the formalisation of a separate, equally important process, and not as a replacement.
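To make the relationship concrete, an automated test can mirror an acceptance criterion one-to-one. This is a minimal sketch: the `apply_discount` function and its rules are hypothetical stand-ins for whatever feature is under review.

```python
def apply_discount(total: float, code: str) -> float:
    """Apply a discount code to an order total (illustrative only)."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    rate = rates.get(code, 0.0)  # unknown codes apply no discount
    return round(total * (1 - rate), 2)


def test_valid_code_reduces_total():
    # Mirrors the criterion: "Apply SAVE10; expected: total reduced by 10%."
    assert apply_discount(100.00, "SAVE10") == 90.00


def test_unknown_code_leaves_total_unchanged():
    # Mirrors the edge case: "Apply an unknown code; expected: total unchanged."
    assert apply_discount(100.00, "BOGUS") == 100.00
```

Run under a test runner such as pytest, these checks execute on every push, while the manual checklist covers what automation can't see, such as the confirmation banner actually rendering.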
Work smarter, not harder. 😉
Hey, guys! Thank you for reading. I hope that you enjoyed this.
Keep up to date with me:
- DEV: https://dev.to/tao/
- Twitter: https://twitter.com/LloydTao
- GitHub: https://github.com/LloydTao
- LinkedIn: https://www.linkedin.com/in/LloydTao/
Catch you around!