In my last post, I wrote about how we do code reviews and compared them with pair programming.
One thing I noticed is that every team, and every team member, has a different list of things they look for while doing code review. In this post, I will list the things I look for. This is by no means a complete checklist. If there are any checks you think are missing, let us know by adding them in the comments.
Depending on what programming language or platform you use, check whether the new code follows the right design.
What the right design is, is up to the team to agree upon. It is always best to have upfront discussions among developers when new abstractions are brought in and a new design is adopted. Code reviews can then make sure it is implemented as discussed, which avoids a lot of rework. This can be relaxed when you want to implement something new first and then get feedback on it; just make sure you write enough for the idea to come across.
Following are some of the things you should check for (Thanks to Bhavin Javia for contributing to most of this list):
- Verify that the coding conventions from the designated style guide are being followed, to maintain consistency and best practices.
- Check for duplication inadvertently introduced by developers, e.g. due to lack of awareness of existing reusable components, misplaced logic/responsibilities, or generic/reusable code not being extracted but copy/pasted elsewhere.
- Identify refactoring opportunities which were missed or deferred during earlier tech-debt work and code reviews. A developer already working in that area has a lot of context, which makes it easier to include some refactoring tasks as part of the story/PR than to find a dedicated block of time for refactoring later.
- Spot and warn developers about potential dependencies or conflicts with other developers’ code, e.g. when multiple features which touch the same areas of the codebase are under development or review.
- If any non-standard feature or hack is being introduced, make sure it is annotated with a #TODO or #FIXME tag along with an appropriate comment, e.g. a reference to the code snippet/patch being used, the reason for it, and what the ideal solution should be. For example: #FIXME Patch to work around bug # in <framework> <link>. Remove after upgrading to fixed version.
- Look out for newly introduced libraries and check whether they are actually required and on the latest version. Are there better alternatives?
- All of the above checks on code apply to test code too.
- Make sure all the code has tests written at all levels: unit, integration and functional.
- Check if all tests pass.
- Check if the tests act as documentation, describing the intention of the code they are testing.
- Check if tests give proper error messages when they fail. For example:
expect(subject).to include(:customer)
# is better than:
expect(subject.key? :customer).to be_true
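To make the duplication check above concrete, here is a tiny sketch (all names are hypothetical, not from any real codebase): the same formatting logic pasted into two views is the kind of thing a review should catch and push into one shared helper.

```ruby
# Before: the same expression copy/pasted in two places, e.g.
#   "$%.2f" % invoice.total   (in InvoiceView)
#   "$%.2f" % order.total     (in OrderView)

# After: one reusable helper that both views call.
module MoneyFormatter
  def self.format(amount)
    "$%.2f" % amount
  end
end

puts MoneyFormatter.format(19.5)  # => $19.50
```

Now a future change to money formatting happens in exactly one place.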
This is something most people miss, or perhaps think is not important. Whether the code works is as important as its quality. Even if you have a testing team, catching bugs at code review reduces the cost of fixing them.
- Read the story description, ask questions of the BA or Product Owner, and see that all the acceptance criteria are implemented as mentioned. I feel most bugs can be caught with just this check.
- Think about scenarios outside of what is mentioned in the story. Check if the usual user flows related to the new changes are still working as expected.
Also focus on cross-functional requirements like performance, security, analytics, logging, alerting etc.
- Check if the code can introduce performance issues, like the N+1 query problem or loading the entire database into memory. Think about running this code in a production environment and anticipate what kinds of problems might arise. Of course you can’t find all of them just by looking at the code, but you might be able to identify some common mistakes from your past experience, like calling Customer.all.
- Security issues can often be identified early. Check that data is accessed only by authorised users. Read about common security precautions and share that understanding with the team.
- If you have already integrated with an analytics system, check whether the new functionality should report to it.
- Check if enough logging is done to be able to debug the application.
- Check if alerts are added when things break.
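To show why the N+1 query problem matters, here is a sketch outside of any real ORM, using a hypothetical in-memory FakeDB that simply counts queries. In a Rails codebase the batched version would typically be an eager load (e.g. `includes`), but the counting below is just an illustration.

```ruby
# Hypothetical in-memory "database" that counts how many queries it runs.
class FakeDB
  attr_reader :query_count

  def initialize(orders_by_customer)
    @orders_by_customer = orders_by_customer
    @query_count = 0
  end

  # One query per call -- what an N+1 loop ends up doing.
  def orders_for(customer_id)
    @query_count += 1
    @orders_by_customer[customer_id]
  end

  # One batched query for all customers, like an eager load.
  def orders_for_all(customer_ids)
    @query_count += 1
    @orders_by_customer.slice(*customer_ids)
  end
end

naive = FakeDB.new(1 => [:a], 2 => [:b], 3 => [:c])
[1, 2, 3].each { |id| naive.orders_for(id) }  # one query per customer

eager = FakeDB.new(1 => [:a], 2 => [:b], 3 => [:c])
eager.orders_for_all([1, 2, 3])               # one batched query

puts naive.query_count  # => 3
puts eager.query_count  # => 1
```

With three customers the difference is trivial; with thousands, the per-row version is the kind of production problem a reviewer can spot in the diff.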
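For the authorisation check above, a minimal sketch of the pattern to look for in a review (the Record struct and fetch_record are illustrative names, not from any framework): data should be returned only after verifying the requester owns it.

```ruby
# Hypothetical record with an owner; real apps would load this from a DB.
Record = Struct.new(:owner_id, :data)

# Guard clause first: refuse access unless the caller owns the record.
def fetch_record(record, current_user_id)
  raise SecurityError, "not authorised" unless record.owner_id == current_user_id
  record.data
end

record = Record.new(42, "account details")
puts fetch_record(record, 42)   # => account details
# fetch_record(record, 7)       # would raise SecurityError
```

In a review, the smell to watch for is a fetch that is keyed only by record id, with no check that the id belongs to the current user.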
When you are following continuous delivery, you should make sure the new changes don’t break production data and functionality.
- Make sure data migration scripts are added and check that they execute as expected. Run them on your machine once if required. Don’t make changes which will result in data loss. Also, make sure the features going to production are auto-deployed to a staging or pre-production environment with a replica of production data (obfuscated) to detect and fix potential issues.
- Check if feature toggles are added for any functionality which you don’t intend to release.
- Check if the code under review is merged with the latest master branch code.
- Make sure instructions are added for any other teams to follow after integrating these changes. For example, informing the QA team what areas to run regression on if there are any major changes. Or informing any downstream consumer systems for any changes in the API contract. Or updating README for helping other developers.
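A feature toggle can be as small as the sketch below (the FeatureToggle class and the :new_checkout flag are hypothetical, not a specific library): unreleased code paths stay dark until the flag is flipped, so the branch can still merge to master and deploy.

```ruby
# Minimal feature-toggle sketch: flags are just a hash of name => boolean.
class FeatureToggle
  def initialize(flags)
    @flags = flags
  end

  def enabled?(name)
    @flags.fetch(name, false)  # unknown flags default to off
  end
end

toggles = FeatureToggle.new(new_checkout: false)

checkout = if toggles.enabled?(:new_checkout)
             "new checkout flow"
           else
             "existing checkout flow"
           end
puts checkout  # => existing checkout flow
```

In a real system the flags would come from configuration or a toggle service rather than a literal hash, but the review question is the same: is the unreleased path behind a flag, and does the default keep it off?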
Overall, keep the changes to be reviewed small and raise pull requests soon to get feedback early.
Please add your thoughts in the comments. And if you like the post, recommend it so others can find it and participate in the discussion.