Pull Requests are the primary way new code is proposed, reviewed, and merged, and we all know how important they are.
Yet programmers put varying levels of effort into writing Pull Request descriptions.
## The minimal-effort PR
An example of a minimal-effort PR could be:

```
# Description
Added column feature to table.
```
It doesn't give any context about why the change was made, nor the rationale, nor any hint about what behaviour has changed; it doesn't even link to an issue tracker so that readers could find this information themselves.
## Why this may be occurring
It can be easy for us programmers to focus on the code and see Pull Requests as an annoying but necessary step to get the code up for review.
We can make the mistake of thinking PRs are a perfunctory formality and that their content doesn't matter.
We may think:
- "If they are curious, they will ask questions."
- Or "they can look up the related ticket if they are curious."
- Or "what's the point? Once it's merged, it's history anyway."
But this kind of thinking is short-sighted.
## Why careless PR content is harmful
Yes, reviewers should ask questions if things don't make sense. But you are:
A. Making it harder on your reviewers than it needs to be; and
B. More importantly, removing valuable context about why a piece of code was added, along with links to important information.
I'll dig a bit deeper into (B).
Code rarely remains static. It evolves, and over time it morphs into something very different from what was originally conceived and created. The same piece of code may be based on different knowledge at different points in time.
There will be situations in the future where you or others need to understand why a certain code change was made. You can use git blame or the commit history to find the code changes, but without proper PR descriptions the context behind them is lost to history.
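For example, digging up that context later might look something like this (the file path and line range here are hypothetical):

```sh
# Who last touched lines 10-20 of this file, and in which commit?
git blame -L 10,20 src/pr-table.js

# Read the full commit message for that commit
git show --no-patch <commit-sha>

# On GitHub, squash/merge commits usually end with the PR number, e.g. "(#1234)"
git log --oneline -- src/pr-table.js
```

If the PR behind that commit has an empty description, the trail goes cold right there.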
If you write proper descriptions, however, this rich context can be used by future programmers to understand the rationale behind your code and make quicker, better-informed decisions.
## Tips
Some things that have worked for me are to:
- Add a descriptive title for the PR:

  ```
  feat [ENG-1234]: Add ability to add feedback column to PR table
  ```
- In your description, list all the important changes at a high level. You don't need to go into detail, but give enough to provide context for the corresponding code:

  > This PR adds the following updates to PR Table:
  > 1. Users can now add a column for feedback.
  > 2. Touch support.
  > 3. Updates to unit tests to reflect the new interactions.
  > 4. Updates to third-party packages to add animation support.
- Where there is something of note about the code changes, call it out in the PR:

  > Note: This PR results in visual regressions to support better touch targets. See the images below...
- Include the rationale behind the code. You don't need an essay; a short line on why the feature was added is enough:

  > To allow users to add a column to add feedback information. For more information about the rationale, check out this `[slack link](url)`.
- Include a link to the issue tracker ticket. If you decide not to include a rationale, this is doubly important:

  [https://this-is-an-issue-tracker.com/eng-1234](https://this-is-an-issue-tracker.com/eng-1234)
- (For front-end) Where visual changes are made, include an image or video.
- Use a PR template to encourage the inclusion of information like the description, rationale, and a PR checklist.
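As a rough sketch (the section names are just a suggestion), on GitHub a template saved at `.github/pull_request_template.md` is automatically pre-filled into every new PR:

```markdown
# Description
<!-- List the important changes at a high level -->

# Rationale
<!-- Why was this change made? Link to any discussion -->

# Ticket
<!-- e.g. Closes #123, or a link to the issue tracker -->

# Checklist
- [ ] Unit tests updated
- [ ] Screenshots/video attached for visual changes
```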
## Comments
On GitHub, linking to an issue is even easier than your example. To link to issue 5, for example, it is just `#5`. And if you want to auto-close the issue when the PR is merged, you can use `Closes #5`. One advantage over explicitly using markdown link syntax is that it lets you easily write something like "Because of #5, we had to ....". Nice simple sentences without link clutter. And combined with the `Closes` keyword, it sets up the link while also providing a meaningful natural-language explanation.

@cicirello that's a great tip! Didn't know about the auto-close shortcut, that's quite nifty!
Really great point about using the cross-reference link when linking to related issues.
That's something I didn't mention in my post, but I've found it really helpful to do. It leaves a trail when PRs are connected. I've found it especially helpful for bug fixes that address an issue from a previous PR; it adds a further layer of context for understanding the changes.
Yes, linking a PR to an issue describing a bug, or even describing a feature request, helps give context to the PR.

I think there are a couple of keywords that work for auto-closing an issue when a PR is merged. The one I use regularly is `Closes`, as in `Closes #5` to auto-close issue #5 when the PR is merged. But I think `Fixes #5` does the same thing.

Good article! Under the "Why this may be occurring" part, I think it's worth mentioning that sometimes the person working on the PR doesn't actually know in depth what the changes they've made affect. So it's easier not to describe what the PR does than to try to understand what the changes do.
Another tip is to add a 'How to test' section containing the steps to follow, which makes the functional review easier.
Love this! In our team we have a PR template which includes a PR checklist with things to test for. For example, the codebase I primarily work on at the moment is a front-end codebase, so the checklist has front-end-specific items.

But a more detailed 'How to test' section can be even more helpful. In a previous team we had a list of around 10 items (some optional), which was really helpful for understanding when a PR was ready to finalise.
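As a hypothetical sketch (the steps and the dev command are invented for illustration), a 'How to test' section for the feedback-column example from the post might look like:

```markdown
# How to test
1. Check out this branch and run `npm start`.
2. Open the PR table and add a feedback column.
3. Enter feedback in the new column and confirm it is saved.
4. Repeat the steps on a touch device to verify touch support.
```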