Platformless Devops with Docker and Nginx in "Just a VM" (4 Part Series)
In part 4 of this series of articles, we add the final piece of the puzzle: how the actual CI/CD process will happen.
When pushing new features, we want them to be picked up and deployed automatically. GitLab has its own easy-to-learn CI system, which we will use for this. The whole pipeline is declared in the file `.gitlab-ci.yml`.
The basic process of deploying in the "Just a VM"-way is this:
- over SSH, do a `git pull`
- over SSH, run `deploy.sh`
This is done by the deploy stage in `.gitlab-ci.yml`. We could have expanded the deploy stage to check for the repository and clone it if it was not yet present. However, since we have to configure quite a bit on the server anyway, it doesn't bother us to do the initial `git clone` manually as well.
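A minimal sketch of what that deploy stage could look like — the project path and exact commands are my assumptions, not taken verbatim from the repository, and `SSH_HOST` is the CI/CD variable we set up later in this article:

```yaml
deploy:
  stage: deploy
  only:
    - master
  script:
    # Pull the latest code and run the deploy script on the VM over SSH.
    # ~/myproject is an assumed example path.
    - ssh devopsuser@$SSH_HOST "cd ~/myproject && git pull"
    - ssh devopsuser@$SSH_HOST "cd ~/myproject && ./deploy.sh"
```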
So we only really need to set up how GitLab uses SSH to run this.
Before continuing, I should add that I prefer having a private GitLab Runner on a separate DigitalOcean VM; I used this guide as the basis for that setup. The alternative is GitLab's paid offerings, which I have not used but which are absolutely worth checking into. Or you can use GitLab's free public shared runners, but remember that these are limited in time and performance, and you need to accept the uncomfortable fact that your SSH keys are copied onto shared machines, albeit temporarily and inside isolated Docker containers.
To give the GitLab Runner SSH access to the production environment, we need to add some GitLab environment variables. These are set in your project's CI/CD settings. When completed, it should look something like this:
On your local machine, do:

```shell
ssh-keygen -f id_rsa_gitlab
cat ~/.ssh/id_rsa_gitlab
```
Copy the private key to the GitLab CI/CD environment variable `SSH_PRIVATE_KEY`. Take notice that this should be of type `File` and that you need to add an extra newline to it for it to work.
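For `File`-type variables, GitLab writes the value to a temporary file and puts that file's path into the environment variable. So inside the deploy job, `$SSH_PRIVATE_KEY` is a path rather than the key itself; a sketch of using it, assuming the runner image has an OpenSSH client installed:

```yaml
deploy:
  before_script:
    # $SSH_PRIVATE_KEY expands to the path of the temp file holding the key
    - chmod 600 "$SSH_PRIVATE_KEY"
  script:
    - ssh -i "$SSH_PRIVATE_KEY" devopsuser@$SSH_HOST "echo connected"
```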
On your local machine, do:

```shell
cat ~/.ssh/id_rsa_gitlab.pub
```

We need this public key on the VM.
On the VM, append the public key to `~/.ssh/authorized_keys`.
Then we should get the public host keys from the server. Ideally this is done on the server itself for security, but it can be done from any client. These keys are added to safeguard against man-in-the-middle attacks. They are added automatically the first time you SSH to an IP, but for hostnames you will get a prompt to highlight the risk. GitLab Runner sets the SSH option `StrictHostKeyChecking`, which disables automatic `known_hosts` handling, so we really do not have a choice for our use case.
On the VM, run this:

```shell
ssh-keyscan yourdomain.com
ssh-keyscan <vm ip>
```
Append all of these to your local machine's `~/.ssh/known_hosts`. You can now also SSH using `ssh firstname.lastname@example.org` instead of the IP of the VM.
Also add them to the GitLab environment `File` variable `SSH_KNOWN_HOSTS`, with a newline at the end.
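Because `SSH_KNOWN_HOSTS` is also a `File`-type variable, the deploy job only needs to copy that file into place before its first SSH call; a sketch:

```yaml
deploy:
  before_script:
    - mkdir -p ~/.ssh
    # $SSH_KNOWN_HOSTS expands to the path of a temp file with the host keys
    - cp "$SSH_KNOWN_HOSTS" ~/.ssh/known_hosts
```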
You can delete `id_rsa_gitlab` from your local machine now; don't be greedy.
The last two environment variables we will add are the `devopsuser` password and `SSH_HOST`. The first is just the password you used for `devopsuser`. The second can be set to the domain or the IP of the VM.
We could also have set up passwordless sudo for `devopsuser`, but IMO it is preferable to have the password as a GitLab env var. You accept a minor inconvenience and insecurity in the scripting stage for the moderate second-line-of-defense security of keeping a password on sudo.
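As a sketch of how that password can then be used in the pipeline: `sudo -S` reads the password from stdin instead of prompting on a terminal. The variable name `SUDO_PASSWORD` and the assumption that `deploy.sh` needs root are mine, not from the article:

```yaml
deploy:
  stage: deploy
  script:
    # sudo -S reads the password from stdin; SUDO_PASSWORD is an assumed
    # variable name for the devopsuser password set in CI/CD settings.
    - ssh devopsuser@$SSH_HOST "echo \"$SUDO_PASSWORD\" | sudo -S ./deploy.sh"
```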
The last step is to push something to master to see if all of this worked. If the GitLab pipeline succeeds, we should be good to go. While `deploy.sh` runs, you should also be able to observe your domain temporarily swapping out the frontend statics.
This is the end of part 4. We now have a complete stack running. The only things missing are automated database backups and a Postgres client on our local machine to do database maintenance from. This will be covered in a future part 5.