
Platformless Devops with Docker and Nginx in "Just a VM" - Part 1 (Intro, VM, Nginx)

Asbjørn Lucassen ・ 7 min read

Platformless Devops with Docker and Nginx in "Just a VM" (4 Part Series)

1) Platformless Devops with Docker and Nginx in "Just a VM" - Part 1 (Intro, VM, Nginx)
2) Platformless Devops with Docker and Nginx in "Just a VM" - Part 2 (web API with Nginx as proxy)
3) Platformless Devops with Docker and Nginx in "Just a VM" - Part 3 (DNS, HTTPS)
4) Platformless Devops with Docker and Nginx in "Just a VM" - Part 4 (Gitlab CI/CD)

PaaS tools are great for very basic backend use cases, but for a variety of reasons they can become either too expensive or too inflexible. Putting your program in "Just a VM" has gone out of style and gotten an undeserved reputation of being too much effort if you are in a startup-like situation.

The point of this multi-part series is to show that if you are past, or anticipate going past, a proof-of-concept program where PaaS is perfect, and the conditions are right, you are much better off creating a 2020 version of the LAMP stack. The LAMP stack was never fundamentally flawed for the use cases it catered to; it just drowned in the noise of tools, frameworks, new languages and patterns that have emerged, many of which are often not the right tool for the job. Of course the actual use of MySQL and PHP is not as common anymore, to put it politely. Instead we apply the concept and use our preferred backend and frontend. In the early stage of products there seems to be a habit of using big guns for small sparrows.

What we aim for is the classic frontend + backend API + database triad hosted as simply and cheaply as possible, while at the same time not digging ourselves into a hole that may be hard to get out of. Scaling needs are either non-existent for our use case or should be a lower priority, because financial runway and iterating on features matter more than achieving a perfect microservice architecture before the revenue column has numbers in it. When the scope is small but rising, the small effort required to set up "Just a VM" is better spent there for the flexibility and control you get.

That's why I have made up my own buzzword, "platformless", which is a play on another buzzword, serverless. It hopefully doesn't catch on, because it brings nothing new to the table and is smug attention-seeking. Anyway, enjoy.

It has to be said now, right after the intro priming you for a knowledge erection, thinking this is going to be hot shit: I believe there is no silver bullet when it comes to systems architecture. Some tools are a better fit for some jobs, while others are subpar. Knowing when to reach for which tool is where the effort should be put.

In this multi-part series on "platformless", we will eventually end up with a full stack and a devops path to success, including database migration, CI and hosting.

Coffee shop

Keeping our devops lean takes us one step further towards our dream of modelling for remote work in a coffee shop. Dream big people. Photo by Jacek Dylag on Unsplash

Using ONE cheap Ubuntu VM, Docker, Nginx and bash scripts run over SSH as the basic building blocks, we can get a powerful, infinitely extensible DevOps setup for cheap. If your use case can be solved by ONE machine, you should go to great lengths to keep it that way, as highlighted in a great post on why distributed systems are inherently hard. Vertical scaling shouldn't be a dirty word. It takes you a long way, and when it no longer does, we haven't dug ourselves into a hole, remember?
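
As a taste of what "bash scripts run over SSH" means in practice, here is a minimal sketch. deploy.sh is a hypothetical local script; the one-liner runs it on the VM without ever copying it there:

# run a local script on the remote VM over ssh (deploy.sh is just a placeholder name)
ssh devopsuser@<your ip> 'bash -s' < deploy.sh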

By avoiding the PaaS platforms with a backend focus (Heroku, or the big cloud providers' offerings with "App" in the name) we keep a clear way to extend our infrastructure without breaking the bank.

Product increments should increase data and architecture complexity linearly at most. That's why I believe serverless is a poor fit when the work is primarily on the backend and database. I also think that learning any serverless platform just shifts your limited learning time toward the peculiarities of that platform.

So, that's why we are trying to avoid PaaS platforms. We want to stick to open source, tried and tested tools operating "straight in the OS". Docker as used here is an abstraction operated directly in the OS, so it fits this moniker too. I also avoid anything where the learning material is not readily available and easy to understand.

  • Docker for isolating the application layer, daemonizing the server and connecting local services.

  • Nginx for exposing ourselves to the internet in a concise, performant way. We also use Nginx for easy proxying to our backend, and optionally to serve frontend statics faster and arguably more easily than most backend servers, as long as we look away from the devops complications (and we do).

  • Linux in an Ubuntu VM, because it is probably the most accessible Linux distro when it comes to learning and configuration.

  • For data we use a proper RDBMS, because we have already identified that data is our primary concern and going for anything less is doing future us a disservice. My preference is Postgres in local Docker containers. For caching/queuing needs we slap a Redis image into the mix (see the sketch after this list).
    For data migration we want something concise and repeatable which ideally lives alongside application code in our source control. We avoid framework-like systems or code-first migration systems that stray too far from SQL basics, because that could severely complicate architecture changes down the road.

  • For backend and frontend we should be able to use anything as long as it is 12-factor app-ish and easily slapped into a docker container working behind a proxy. I very much prefer static frontend builds to SSR, but that is more of a religious view and probably stems from me primarily being a front-end developer.
    We control DNS and SSL ourselves.

  • For source control and CI gitlab is my weapon of choice, but this is definitely the area of least importance in this piece. Bring your own drinks.

  • For hosting we of course want "Just a machine", but avoiding platforms to the point of becoming nutty zealots is not the way. That's why a major cloud provider which can give us "Just a VM" will win this one. The qualities I look for here are ease of use, price, and the features I get without them getting in the way of "Just a VM". The one I like most that fits this bill is DigitalOcean.
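
To make the data bullet above concrete, here is a rough sketch of what running Postgres and Redis as local Docker containers could look like. The image tags, password and volume path are placeholders, not what this series prescribes; binding to 127.0.0.1 keeps the databases off the public interface:

# Postgres with data persisted on the VM's disk, reachable only from localhost
docker run -d --name db \
  -e POSTGRES_PASSWORD=changeme \
  -v /srv/pgdata:/var/lib/postgresql/data \
  -p 127.0.0.1:5432:5432 \
  postgres:12

# Redis for caching/queuing, also bound to localhost only
docker run -d --name cache -p 127.0.0.1:6379:6379 redis:6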

With this strategy we can still iterate quickly when the business side of things changes. It is a small cost in setup time, which will pay for itself in actual running cost and in the flexibility it provides down the road.

I have made a companion gitlab repository to this article series, which will be referenced frequently as "what we do", so if you follow a slightly different path you may need to fill in the blanks yourself or skip parts.

Let's get started.

Creating the VM

First, we create the DigitalOcean "droplet", which is really just a virtual machine configurable with various starting templates. We are going for a very basic droplet with Docker preconfigured.

Create droplet

After choosing our region, we need to add our public SSH key.
If you do not have a key yet, run ssh-keygen, which will create a private key at ~/.ssh/id_rsa and a public one at ~/.ssh/id_rsa.pub. The private key should never leave your machine, as it is what proves your identity to remote hosts (the VM); the remote host only needs the public key. Find and copy your PUBLIC key using this command, and DO will put it on the fresh VM:
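
For reference, generating a fresh key pair could look like this (the flags are optional; -C just attaches a label so you can recognize the key later, and "my-laptop" is a placeholder):

ssh-keygen -t rsa -b 4096 -C "my-laptop"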

cat ~/.ssh/id_rsa.pub

I also add some platform-level monitoring, because why the hell not.

Droplet SSH

Setting up the VM

We now need to start configuring the VM. We start by SSH'ing in and adding a non-root user which we will use for day-to-day work:

ssh root@<your ip>
adduser devopsuser

You will be prompted for a new password. Add it and regard/disregard the other questions as much as your OCD permits. Next we elevate this user to admin privileges:

usermod -aG sudo devopsuser

Now we need to add the same SSH public key to this new user as well. Copy it from your local machine the same way as earlier. First we switch user,

su - devopsuser

create an ssh folder,

mkdir ~/.ssh
chmod 700 ~/.ssh

create a file for authorized public keys,

cd ~/.ssh
touch authorized_keys

And paste the key into the file in an editor.

nano authorized_keys

I use nano, but any editor will do. (Quick nano user guide: to save press Ctrl+O and to exit press Ctrl+X.)
Then change the permissions on the file:

chmod 600 authorized_keys
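
If you would rather not paste the key by hand, an equivalent shortcut is to copy root's authorized_keys, which DigitalOcean already populated with your public key, over to the new user. This is an alternative to the su/mkdir/nano steps above and assumes you run it as root:

# run as root: copy the existing key over and hand ownership to devopsuser
mkdir -p /home/devopsuser/.ssh
cp /root/.ssh/authorized_keys /home/devopsuser/.ssh/authorized_keys
chown -R devopsuser:devopsuser /home/devopsuser/.ssh
chmod 700 /home/devopsuser/.ssh
chmod 600 /home/devopsuser/.ssh/authorized_keys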

We add our user to the docker group:

sudo usermod -aG docker $USER  

Finally, exit the devopsuser shell, then the root session, and reconnect to test the SSH setup and continue as the right user.

exit
exit
ssh devopsuser@<your ip>
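
Since group membership only takes effect in a new session, this re-login is also a good moment to confirm that Docker works without sudo (hello-world is Docker's official throwaway test image):

groups                        # should list 'docker' and 'sudo'
docker run --rm hello-world   # should print a greeting without permission errors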

Nginx

The next step is to simply install Nginx.

ssh devopsuser@<your ip>
sudo apt-get install -y nginx
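
A quick sanity check that the service came up (both commands are standard on Ubuntu; curl may need an apt-get install curl first):

systemctl status nginx        # should show 'active (running)'
curl -I http://localhost      # should answer with HTTP 200 from inside the VM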

Then we need to open up our Nginx to the internet (HTTP only at first). We should also familiarize ourselves with our firewall, so to check what is exposed publicly we do:

sudo ufw status

There is nothing open on ports 80 and 443 yet, so opening your IP in the browser will obviously not work.

To open up for http traffic we do:

sudo ufw allow 'Nginx HTTP'

Here 'Nginx HTTP' is an application profile the nginx package registers with UFW, covering the default HTTP port 80.
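
If you are curious which profiles are available, listing them is harmless; on Ubuntu the nginx package typically registers a few:

sudo ufw app list             # typically lists Nginx HTTP, Nginx HTTPS and Nginx Full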

Our server is now hosting something. We can check that by going to the naked IP in our web browser (the same IP we SSH'ed into).
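
Or, if you prefer the terminal, the same check from your local machine:

curl -I http://<your ip>      # should now get a response from the default nginx page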

Because Docker manipulates iptables directly and disregards UFW, published container ports can theoretically bypass our UFW rules, as has been noted by other people.

That's why we should close this down by doing the same as suggested in that article.

We open the docker init file:

sudo nano /etc/default/docker 

Here, change DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4" to DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --iptables=false", save (Ctrl+O, Ctrl+X) and restart the daemon:

sudo systemctl restart docker
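
A side note: depending on how Docker was installed, the systemd service may not read /etc/default/docker at all. If the setting above has no effect, the same option can be placed in /etc/docker/daemon.json (create the file if it does not exist), after which you restart the daemon the same way:

{
  "iptables": false
}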

End of part 1

We now have the basic VM set up with Nginx. This is a reasonable starting point where, depending on your program and hosting needs, you can diverge from this multi-parter or truck on.

In the coming parts we use the example repository, put the Node app in it behind Nginx as a proxy, and set up DNS and SSL for the page so we get a neat, encrypted hello world.
