When I do code reviews, and for any team I lead in a technical capacity, a review has three phases of confirmation:
Automatic
Did the tests pass? If not, stop.
Did the change add/remove tests? Why?
Did the code quality decrease measurably?
Did the change introduce possible future technical debt? Is it acceptable?
Did any of the tooling indicate a decrease in maintainability or compliance?
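Much of this Automatic phase can be wired into CI so no human has to check it by hand. Below is a minimal sketch of such a gate in TypeScript; it assumes a Jest-style coverage summary (the json-summary reporter writing coverage/coverage-summary.json), and the 90% line-coverage threshold is purely illustrative:

```ts
// quality-gate.ts: a sketch of the "Automatic" phase wired into CI.
// Assumes Jest's "json-summary" coverage reporter writes
// coverage/coverage-summary.json; the 90% threshold is illustrative.
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

const MIN_LINE_COVERAGE = 90; // illustrative threshold, not a recommendation

try {
  // Did the tests pass? If not, stop.
  execSync("npm test -- --coverage", { stdio: "inherit" });
} catch {
  console.error("Tests failed; stopping here.");
  process.exit(1);
}

// Did the code quality decrease measurably? Coverage is one cheap proxy.
const summary = JSON.parse(
  readFileSync("coverage/coverage-summary.json", "utf8")
);
const linePct: number = summary.total.lines.pct;

if (linePct < MIN_LINE_COVERAGE) {
  console.error(`Line coverage ${linePct}% is below ${MIN_LINE_COVERAGE}%.`);
  process.exit(1);
}

console.log(`Automatic checks passed (line coverage ${linePct}%).`);
```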
Operation
Does the change satisfy the statement of work submitted for the change?
Does the change logic look reasonable?
Can I follow the logic of the change in the text, mentally, without getting lost?
Experience
From the end/admin/manager/etc. user's perspective, does the change function as expected?
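One way to make the Experience phase concrete is an end-to-end test that exercises the change the way a user would. Here is a minimal sketch using Playwright; the URL, labels, and order flow are hypothetical placeholders for whatever the change actually touches:

```ts
// experience.spec.ts: a hypothetical end-to-end check for the Experience phase.
// The URL, labels, and flow below are placeholders for the feature under review.
import { test, expect } from "@playwright/test";

test("change behaves as the end user expects", async ({ page }) => {
  await page.goto("https://staging.example.com/orders");

  // Exercise the changed flow the way a user would, not the way the code does.
  await page.getByRole("button", { name: "New order" }).click();
  await page.getByLabel("Quantity").fill("2");
  await page.getByRole("button", { name: "Submit" }).click();

  // The user-visible outcome is the thing under review.
  await expect(page.getByText("Order confirmed")).toBeVisible();
});
```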
Sure, it seems like a lot, and yes, it may slow the merge process down a bit. But in my experience, taking a little longer to review code can help prevent issues from reaching production.
The holy grail, though, is a 100% automated process, with enough confidence in the automation that when a change begins throwing errors, the change is rolled back, the errors are logged, and the committing developer is alerted. But that is a story for another time.
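For the curious, a rough sketch of what that loop might look like; fetchErrorRate, rollback, and alertCommitter are hypothetical stand-ins for a team's real monitoring, deployment, and notification tooling, and the 1% threshold is illustrative:

```ts
// auto-rollback.ts: a sketch of the fully automated loop described above.
// fetchErrorRate, rollback, and alertCommitter are hypothetical stand-ins
// for real monitoring, deployment, and notification integrations.

type Deploy = { version: string; previousVersion: string; committer: string };

const ERROR_RATE_THRESHOLD = 0.01; // illustrative: 1% of requests erroring

// Stand-in: query your metrics/APM backend for this version's error rate.
async function fetchErrorRate(_version: string): Promise<number> {
  return 0; // placeholder
}

// Stand-in: ask the deploy system to roll back to a known-good version.
async function rollback(_toVersion: string): Promise<void> {}

// Stand-in: notify the committing developer over chat or email.
async function alertCommitter(_committer: string, _message: string): Promise<void> {}

export async function watchDeploy(deploy: Deploy): Promise<void> {
  const rate = await fetchErrorRate(deploy.version);
  if (rate <= ERROR_RATE_THRESHOLD) return; // healthy, nothing to do

  // The change is throwing errors: roll back, log the errors, alert the committer.
  await rollback(deploy.previousVersion);
  console.error(
    `Rolled back ${deploy.version}: error rate ${(rate * 100).toFixed(2)}%`
  );
  await alertCommitter(
    deploy.committer,
    `Your change ${deploy.version} was rolled back automatically; see the logs.`
  );
}
```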
Pawan works as a Sr. Web Developer at JCP, loves making an impact for customers and the masses, and is comfortable throughout the stack, but is presently making a difference in front end and devtools.
Really thorough answer. I agree with it all!
Adding to this: I ask for a GIF, made with the Screencastify plugin, comparing the experience/features before and after the changes.
Wow, I need to write that down. Great list!
Great looking checklist!
For frontend development, there seems to be an additional step that is needed.
Pull requests should be accompanied by UI reviews, because you can only see so much in code.
Love this list!
A comment on the first point...
I don’t always stop if the tests aren’t passing, especially when reviewing less senior engineers’ PRs. I’ve found that in a collaborative environment, a detailed review in spite of test failures often surfaces an obvious mistake that can quickly be pointed out, increasing the overall velocity of the initiative.
One possible addition...
Reviewing the test additions or updates first can often guide the review nicely, framing the changes in an easy-to-follow and logical way. Plus, if reviewing someone’s code for the first time, it helps clarify the patterns and structures they use.
Awesome