I find myself writing the following commands multiple times daily.
git stash push package-lock.json
git stash drop 0
I had been reading a little about bash functions and wanted to see if I couldn’t automate the process.
There are two files in particular that I was running these commands for due to our build process: package.json and package-lock.json.
My first solution was a hard-coded version called cpkg. I saved the function in my ~/.zshrc for easy access.
#cpkg = "clear changes to package"
cpkg() {
git stash push package-lock.json;
git stash drop 0;
git stash push package.json;
git stash drop 0;
}
This function achieves my goal. It wipes out changes to package.json and package-lock.json.
Still, I wanted to challenge myself to see if I could add some dynamism. Maybe I wouldn’t want to clear my package, but drop changes from a different file.
This required a function that accepted arguments.
Parameters And Bash Functions
The first place I looked, of course, was Stack Overflow. This conversation was a good introduction to how parameters are passed to functions.
In my case, I only needed to handle a single parameter, so order wasn't important and I could use $1 without any type checking.
What I came up with was:
#gsdf = "git stash drop file"
gsdf() {
git stash push "$1";
git stash drop 0;
}
It’s simple, but it does work as expected.
If I run gsdf package.json, the file will be added to the stash (if there have been changes) and then dropped - letting me move on in peace.
This works by assigning package.json to the $1 parameter during execution.
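Positional parameters aren't specific to this function; bash fills them in for any function call. As a tiny illustration (greet is just a throwaway example name, not part of my setup):

```shell
# Bash assigns the first argument to $1, the second to $2, and so on.
greet() {
  echo "Hello, $1"
}

greet world
# Hello, world
```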
The next steps would be to handle multiple arguments, so I could pass multiple files at once.
Caveats
This will not affect untracked or ignored files. It will only wipe out changes to files that have been previously tracked or committed.
The reason is that git stash does not stash untracked or ignored files by default.
For more on this, see a different conversation on Stack Overflow.
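If you ever do want untracked files swept up as well, git stash push accepts a -u (--include-untracked) flag. A variant along those lines might look like this (gsdf_u is just an illustrative name, not something I actually use):

```shell
# Like gsdf, but -u tells git stash to also grab files
# that git has never tracked.
gsdf_u() {
  git stash push -u "$1";
  git stash drop 0;
}
```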
Final Thoughts
Is this the best way to achieve my goal? Probably not, but it's been working for me and I understand it. If you can point me to a more common pattern with git for this, I'm all ears.
I'd also be very interested if someone knows a good way of handling an unknown number of arguments, so I could loop over this process for each file passed into the gsdf function.
Top comments (6)
I'm very new to bash myself, but I've been trying to make little scripts to make some things more efficient.
Bash has a few ways to do loops, including a for loop. Here's the basic thing of it as it applies to your script:
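Roughly like this (I haven't run this exact version, but it's the shape of the loop):

```shell
# Run the stash-and-drop dance once per file name passed in.
gsdf() {
  for file in "$@"; do
    git stash push "$file"
    git stash drop 0
  done
}
```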
$@ is a special reference to ALL arguments passed to the function.
So if we run this:
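For example, a stripped-down version that only echoes its arguments (print_args is just a demo name):

```shell
print_args() {
  for file in "$@"; do
    echo "$file"
  done
}

print_args package.json package-lock.json
# package.json
# package-lock.json
```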
It would print each file name on a newline. The arguments need to be space separated.
I think from the above you could adapt it to your needs.
Someone may come and offer a different thing. My knowledge of bash is a bit limited.
Thanks Dan! I'll try that out :)
How about git checkout -- package*.json ?
Yup! That would work too.
I think the reason I avoided git checkout is that I think of it as branch management, not files. I concede that it's a totally valid way of doing this -- though, the flag of -- is rather ambiguous to me. :)

Technically speaking, it is part of branch management: you are checking out = resetting a file (or files, as you can use * wildcards) to the original branch state.
Got it. That makes a lot of sense. I'll update cpkg to git checkout -- package*.json.

Sometimes you just need to put things out there to have people show you a better way! Appreciate it!