Containerized Infrastructure-as-Code with Immutable Clusters

Recently, I received the opportunity to participate in the upcoming HashiTalks 2023, so I decided to make a presentation about a recent little adventure I had in the field of infrastructure and DevOps.

Scaling a mission-critical infrastructure and its challenges

Low-level choices can impose great challenges when scaling an existing system for reliability.

First and foremost, let me introduce myself. My name is Felipe Angelo Sgarbi, I'm a senior software engineer at SigaMeCar, and today I'm here to share some of my production experience with the HashiStack, and how a simple change in paradigm resulted in a huge change in architecture that greatly benefits our customers.

Before anything else, I want to give a quick briefing on the company I work at.

Well, SigaMeCar is a car tracking, monitoring, and alarm platform aimed at putting the safety of your vehicles in the palm of your hand.
We currently serve over 30 cities across Brazil, offering nationwide coverage, and we support over 300 vehicle models: cars, motorcycles, construction vehicles, and even jet skis.

All of this is boiled down and simplified so that managing the security of your vehicle becomes as easy as pressing a single button.

And one of the challenges of our current platform is the sheer volume of data coming in by the second. On a calm day, we receive over 37 thousand positions,
each containing information like latitude and longitude, speed, whether any kind of alarm was triggered, and other telemetry data.

With this challenge in hand, we had to re-evaluate how we deal with the architecture and infrastructure of our platform.

For a long time, we had been using standard servers running mainstream server operating systems such as Fedora Server and Ubuntu Server.

And with the discovery of the HashiStack, we got excited about what we could achieve with it: a more reliable platform that could really improve not only the development cycle, but also scalability.
Yet, as we started to experiment with it, one thing quickly became visible to us from a technical standpoint.

The issue with current servers

Mainstream server OSes can pose hard-to-detect issues

The HashiStack, paired with common servers and the usual way of dealing with them, was nowhere near enough. Difficulty in maintenance and struggles to keep up with the architecture grew nearly exponentially.
A shift in how servers are dealt with was needed if we wanted to make use of the HashiStack.

Well, most of you are probably asking "Why can't common servers keep up with it?", and the issue lies with their operating systems: the software that runs at the base of everything, the very foundation.
Don't get me wrong, they are great. Yet, they still have their own flaws.

Mainstream server operating systems like Fedora Server, Ubuntu Server, and Debian are made to be as flexible as possible, allowing you to do virtually anything you need with relative ease.
Yet, this means that these operating systems have a lot of moving parts, a lot of 'gears in motion' to make sure everything is working properly.

And this makes maintenance and troubleshooting a challenge, because the more moving parts there are, the more prone to breaking the system is, turning one of their biggest strengths into an unfortunate issue that only grows as time goes on.

The common pitfalls of traditional servers

A few of the common pitfalls of traditional servers that expose these flaws show up during their usual operation and maintenance:

  1. Updates introducing incompatibilities and breaking other services
  2. Resource waste on 'bloat' functionality (e.g. CUPS on servers)
  3. Changing a setting in one place doesn't change it everywhere else
  4. Inability or difficulty to replicate the server environment
  5. High availability might be hard to achieve

The misconfiguration factor in particular may not be very noticeable or impactful when dealing with only a single server, but the moment you start scaling to multiple servers, it starts showing up.

Forgetting to disable password authentication, not opening a specific port in the firewall, or something as silly as a typo in a file name.

This changed the way we see and treat servers. It called for an overall change in the way they are dealt with.

It led to a change in how we see servers

In the past, servers were treated like pets: carefully handled, with changes and updates being methodically planned, week-long, time-consuming tasks.

This change made us realize servers are more like cattle. It called for a way to quickly set up a new server and have it ready for production in a few minutes. Changes and updates became part of something that closely resembled rolling releases, and uptime stopped being a measurement of how successful a server was.

With this in mind, we started researching new solutions, and what we came across was a relatively new paradigm in the server space whose approach to server infrastructure struck us as a huge innovation.

A new paradigm in server space led to innovation

This shift changed traditional servers and their infrastructure

For this next step, a bit of history is in order.

On October 3rd, 2013, a new type of operating system surfaced: an open-source, lightweight operating system based on the Linux kernel and designed for providing infrastructure to clustered deployments, while focusing on automation, ease of application deployment, security, reliability, and scalability.

In January 2018, it was acquired by Red Hat and merged with Project Atomic, and it was later succeeded by Fedora CoreOS.

CoreOS is significant because it is a re-imagination of the operating system in the context of a host's role in the data center at scale. It does this by fully embracing the notion of Linux containers and immutability. It abstracts away the notion of a physical host in favor of a pool of compute. Because there is no theoretical limit to the number of hosts in a pool, you can deploy containerized services simply and without thought to the host they're on.

Essentially, it is now possible to reason about your application using 2,000 GB of memory and 200 TB of local storage without thinking about the hosts. This is a significant simplification.

It is a different way of thinking about an operating system, much more akin to your mobile phone (for example, iOS) than to a traditional Linux machine, where containers are the effect, not the cause.

The idea here is that you shouldn't be updating files and libraries or patching things in your core operating system; you should be mucking around in your portable Docker container, while getting a "guaranteed to be accurate, stable, and up-to-date" Unix OS to launch your services on.

The benefits once you have a stable, guaranteed-to-be-accurate OS image are plenty: you can easily cluster machines because you know they'll all act the same.

The real power of the HashiStack

With this in mind, the power of the HashiStack starts to shine through. Having a rock-solid, guaranteed-to-be-accurate operating system that lets you scale with minimal effort allows Nomad, Consul, and even Vault to be used to their fullest.
Consul's service-mesh capabilities and K/V store make it easy for services hosted on these machines to communicate, while Nomad orchestrates what runs where and ensures all containers are running as expected.
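To make that concrete, here is a minimal sketch of what a Nomad job might look like in such a setup. The job name, image, and port are placeholders, not our real workload: Nomad schedules the container, and the service block registers it in Consul so other workloads can discover it through the service mesh.

job "positions-api" {
  datacenters = ["dc1"]

  group "api" {
    count = 2

    network {
      port "http" {
        to = 8080
      }
    }

    # Registers the service in Consul for discovery and service mesh
    service {
      name = "positions-api"
      port = "http"
    }

    task "server" {
      driver = "podman"

      config {
        image = "registry.example.com/positions-api:latest"
        ports = ["http"]
      }
    }
  }
}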

CoreOS paired with the HashiStack unlocks a highly predictable, easy-to-scale environment that can be replicated anywhere, be it bare-metal servers, a cloud provider such as AWS, Google Cloud or Microsoft Azure, or even an external VPS provider such as DigitalOcean or Linode.

Yet, its biggest strength comes when paired with another tool from the HashiStack. Terraform.

It's a bit of an understatement to say that Terraform works great for this. CoreOS makes Terraform feel like a superpower.

You can set up every part and corner of not only your cluster but also each machine's individual operating system: mount points, the services they run, and even the users themselves. The Infrastructure as Code capability that Terraform provides makes developing cloud-native solutions far easier than with traditional servers.

An example with traditional servers

The task is simple: set up a barebones server with a single user.

To give a small idea of how powerful Terraform paired with CoreOS is, let's take a look at a small example:

$ useradd -m user
$ mkdir -p /home/user/.ssh
$ echo "ssh-rsa ..." > /home/user/.ssh/authorized_keys
$ chown user:user /home/user/.ssh -R
$ chmod 0600 /home/user/.ssh/authorized_keys

$ rm -rf /etc/localtime
$ TIMEZONE=/usr/share/zoneinfo/America/New_York
$ ln -s "$TIMEZONE" /etc/localtime

$ echo "machine01" > /etc/hostname

In this example, we are setting up a server with a single user.

By itself, it might not seem like much, but imagine setting this up for multiple servers, say 10 of them, and having to redo all of this for every single one.

Not only would this be 10 times more time-consuming, it would also be prone to errors.
You could end up mistyping the authorized_keys file name, getting the permissions on the SSH key wrong, or removing the default timezone but forgetting to link the new one in place on one of the servers.

The CoreOS way

The entire setup process is simplified into a single configuration file

Now, let's take a look at how CoreOS does this by itself:

variant: fcos
version: 1.4.0
passwd:
  users:
    - name: user
      ssh_authorized_keys:
        - ssh-rsa ...
storage:
  links:
    - path: /etc/localtime
      target: /usr/share/zoneinfo/America/New_York
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: machine01

The way CoreOS configuration works is that you don't configure the machine manually; instead, you supply a configuration file during boot or setup.

This configuration file is generated from a Butane config, which is itself simply a YAML file.

To use it, the Butane config is compiled into an Ignition file that CoreOS can read.

During the first boot, CoreOS automatically provisions all the configurations required, so once it is online, it is configured just the way you'd expect.

Yet, it still suffers from a few of the same problems.

If we were to, for example, set up 10 different servers, we'd need to create 10 different configuration files and compile their Ignition files individually.

It still involves a lot of manual work that could end up leading to mistakes. It is faster, but still prone to the same kinds of errors.

The reason why Terraform feels like a superpower

And now, the reason why CoreOS paired with Terraform feels like a superpower. Here we have a really basic Terraform file:

data "ignition_user" "main_user" {
   name = "user"
   ssh_authorized_keys = ["ssh-rsa ..."]
}

data "ignition_link" "timezone" {
   path = "/etc/localtime"
   target = "/usr/share/zoneinfo/America/New_York"
}

data "ignition_file" "hostname" {
   path = "/etc/hostname"
   content { content = "machine01" }
}

Note how the code itself generates the Ignition file procedurally, allowing you to use things like functions, variables, and loops to change how Terraform generates this config.
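For instance, here is a hypothetical sketch of how the same idea scales to several machines at once; the variable name and hostname values are made up, and the user data source is the one defined above:

variable "hostnames" {
  type    = list(string)
  default = ["machine01", "machine02", "machine03"]
}

# One /etc/hostname file definition per server, generated from the list above
data "ignition_file" "hostname_per_server" {
  for_each = toset(var.hostnames)

  path = "/etc/hostname"
  content {
    content = each.value
  }
}

# One full Ignition config per server, reusing the user defined earlier
data "ignition_config" "per_host" {
  for_each = data.ignition_file.hostname_per_server

  users = [data.ignition_user.main_user.rendered]
  files = [each.value.rendered]
}

Going from 3 to 10 servers then becomes a matter of extending a list, instead of writing and compiling 10 Butane files by hand.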

And even better, you can connect your favorite provider such as AWS to automatically deploy your CoreOS server with the configuration you generated.

data "ignition_config" "final_config" {
 users = [data.ignition_user.main_user.rendered]
 links = [data.ignition_link.timezone.rendered]
 files = [data.ignition_file.hostname.rendered]
}

resource "aws_instance" "my_server" {
 ami         = "ami-0ce26a467370cadf3" # Fedora CoreOS Stable
 instance_type = "m5.large"
 user_data = data.ignition_config.final_config.rendered
 user_data_replace_on_change = true
}

This way, instead of generating config files that you then need to manually bootstrap onto the servers, you bootstrap the servers directly, reducing the amount of manual labor and making everything highly predictable.

Need to replicate production into a KVM host to debug some nasty bug that doesn't happen in staging? You can simply connect your KVM provider and point your CoreOS configuration at it.
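As a rough sketch of that idea, using the community dmacvicar/libvirt provider (the resource and attribute names here are from memory, and disk/network definitions are omitted), the same rendered Ignition config from above can be handed to a local KVM virtual machine:

resource "libvirt_ignition" "debug" {
  name    = "coreos-debug.ign"
  content = data.ignition_config.final_config.rendered
}

resource "libvirt_domain" "debug_vm" {
  name            = "coreos-debug"
  memory          = 4096 # MiB
  vcpu            = 2
  coreos_ignition = libvirt_ignition.debug.id

  # Disk and network definitions omitted for brevity
}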

With this in mind, pairing the HashiStack with CoreOS while using Terraform to bootstrap an entire cluster turned out to be an elegant solution.

A small running joke amongst the team at SigaMeCar is that the HashiStack and CoreOS are a match made in heaven.

The inner workings are a series of pieces working in tandem

The way it works is that Terraform bootstraps the infrastructure in the form of servers running CoreOS and, inside them, installs the HashiStack series of software.

Nomad deals with running all our software via Podman containers (the default container technology that CoreOS provides), Consul deals with networking, service mesh, and K/V storage for things like configuration, and Vault deals with identity management and secrets storage and provisioning.

All the HashiStack software is automatically set up by CoreOS during the first boot, so once the server comes online, the node is already ready for production.
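To give an idea of how that first-boot setup can be expressed, here is a sketch of a systemd unit delivered through Ignition so that Nomad comes up automatically. The unit is simplified and the binary path and flags are assumptions, not our exact production unit:

data "ignition_systemd_unit" "nomad" {
  name    = "nomad.service"
  enabled = true
  content = <<-EOT
    [Unit]
    Description=Nomad agent
    Wants=network-online.target
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/nomad agent -config=/etc/nomad.d
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
  EOT
}

# The unit is then added to the final Ignition config, e.g.:
# systemd = [data.ignition_systemd_unit.nomad.rendered]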

And if something needs to be changed? Simply edit the Terraform file and re-apply it!

A few gotchas with CoreOS

As with any solution, it does come with its own trade-offs

Yet, it does come with a few gotchas.
CoreOS is not perfect and needs a few considerations before being used in production.

  1. Single configuration only
  2. Persistent storage must be explicitly set up by you
  3. Changing the configuration isn't as straightforward after first boot
  4. Servers must reboot to apply updates

The first big issue with CoreOS is that, once the configuration is bootstrapped, it cannot be changed. Once you put a server up and need to make a change to it, you need to rewrite the configuration and replace the installation altogether.

Of course, nowadays cloud providers make leveraging this a breeze, and the natural load-balancing nature of the Consul service mesh makes it much less of a hassle, but it still makes this unsuitable for small clusters where a single server runs a specific piece of software that all the others depend on.

Yet, if this is not the case, and one server going down for a few minutes won't affect uptime, servers can be updated as a rolling release, with Terraform applying the change to servers one by one, waiting for each to come back online before updating the next.
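In Terraform terms, a minimal sketch of that replacement pattern looks like this, reusing the aws_instance from earlier; the lifecycle setting is an assumption about how you might want to roll nodes, not a requirement:

resource "aws_instance" "my_server" {
  ami                         = "ami-0ce26a467370cadf3" # Fedora CoreOS Stable
  instance_type               = "m5.large"
  user_data                   = data.ignition_config.final_config.rendered
  user_data_replace_on_change = true

  lifecycle {
    # Bring the replacement server up before destroying the old one,
    # so the cluster keeps capacity while nodes are rolled.
    create_before_destroy = true
  }
}

For larger fleets, the same effect is usually achieved with an autoscaling group and instance refresh, but the principle is the same: nodes are replaced, never mutated in place.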

Persistent storage isn’t as straightforward

Yet, CSI storage can solve the issue nicely

Another issue that might surface is persistent storage. CoreOS uses an immutable filesystem, which means data persistence must be explicitly set up by you.

This is where Nomad shines with its CSI support. You can set up a Container Storage Interface plugin to provide storage volumes to your containers, such as AWS Elastic Block Store volumes, Google Cloud persistent disks, Ceph, Portworx, vSphere, and others. The best part is that you are not limited by the amount of storage on the server the application happens to run on; instead, every node has access to all the available space in your CSI provider.

Those familiar with Kubernetes might feel at home with this.
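To give an idea of the job-side usage, here is a sketch; the volume name, image, and paths are placeholders, and the volume is assumed to already be registered with Nomad against a CSI plugin (EBS, Ceph, and so on):

job "database" {
  datacenters = ["dc1"]

  group "db" {
    # Claim a volume backed by the cluster's CSI plugin
    volume "data" {
      type            = "csi"
      source          = "db-volume"
      access_mode     = "single-node-writer"
      attachment_mode = "file-system"
    }

    task "postgres" {
      driver = "podman"

      # Mount the CSI volume into the container
      volume_mount {
        volume      = "data"
        destination = "/var/lib/postgresql/data"
      }

      config {
        image = "docker.io/library/postgres:15"
      }
    }
  }
}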

OSTree Atomic Updates to the rescue

Thanks to atomic in-place updates, updating Fedora CoreOS is hassle-free

Well, it has been mentioned, but what about updates? After all, CoreOS is an operating system just like any other; at some point, the base operating system will need to be updated.

The beauty of CoreOS is that these updates are handled automatically. Since releases are delivered via streams such as Stable, Testing, and Next, the system performs automatic in-place updates. And in case of a failure or worst-case scenario, it can be rolled back to the latest working deployment without having to reformat from scratch.

This is thanks to the fact that CoreOS provides atomic updates and rollbacks via OSTree deployments.
For those unfamiliar, OSTree is an upgrade system for Linux-based operating systems that performs atomic upgrades of complete filesystem trees. It is not a package system; rather, it is intended to complement them. A primary model is composing packages on a server and then replicating them to clients.

The underlying architecture might be summarized as “git for operating system binaries”. It operates in userspace and will work on top of any Linux filesystem. At its core is a git-like content-addressed object store with branches (or “refs”) to track meaningful filesystem trees within the store. Similarly, one can check out or commit to these branches.

One issue is that upon downloading and staging a new update, the server must be rebooted for the update to take effect. This can introduce downtime, yet Zincati, the agent that continually checks for OS updates and applies them, can be configured with a "rollout wariness" value to control how eager or risk-averse the node is to receive new updates, straight from the configuration.

It can also control when nodes are allowed to reboot to finalize an update, for example configuring servers to only reboot on certain days of the week and within specific time frames, reducing the risk of a server update disrupting service or causing downtime.
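As a rough example of what that tuning can look like (the values below are illustrative, not our production settings), the Zincati drop-in is just another file you can ship through Ignition:

data "ignition_file" "zincati_updates" {
  path = "/etc/zincati/config.d/55-updates-strategy.toml"
  content {
    content = <<-EOT
      [identity]
      # 0.0 = pick up new releases as early as possible, 1.0 = as late as possible
      rollout_wariness = 0.5

      [updates]
      strategy = "periodic"

      # Only allow the finalizing reboot inside this maintenance window (UTC)
      [[updates.periodic.window]]
      days           = ["Sat", "Sun"]
      start_time     = "03:00"
      length_minutes = 120
    EOT
  }
}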

Why not the alternatives?

With alternatives such as Ansible on normal servers available, why this combo?

A few questions might be lingering in everyone's heads, like "Why not Ansible, Chef, or Puppet with normal servers?", or even "Why not a Kubernetes cluster?". Well, a few key differences show why the HashiStack paired with CoreOS and Terraform might be a better alternative for you.

Different tools for different jobs

For starters, the first question: why not tools like Ansible or Puppet? Fair enough; they're really powerful tools that let you manage the configuration of multiple servers with ease. The catch is that these are configuration management tools, while Terraform is an orchestration and provisioning tool.

Tools like Ansible are mainly used for configuring servers with the right software and updating already-configured resources, while Terraform is the best fit for orchestrating cloud services and setting up cloud infrastructure from scratch. The difference between using Ansible with standard servers and using Terraform with CoreOS is that the latter doesn't require you to install the operating system before configuring everything: Terraform provisions the servers automatically, while CoreOS configures everything right at installation.

Configuration management tools solve issues locally rather than replacing the system entirely. Ansible helps configure each action and instrument and ensures smooth functioning without damage or error, yet it still requires you to manually bootstrap the servers and configure SSH access by hand. Without that, the tool is unable to do its work.

Meanwhile, orchestration tools ensure that an environment stays in its desired state continuously. Terraform is explicitly designed to store the state of the infrastructure: whenever the system drifts from that state, running Terraform apply computes the difference and restores the declared configuration. It is the best fit in situations where a constant, invariable state is needed.

Paired with the immutable nature of CoreOS, Terraform allows you to create an infrastructure that you can predict with ease and that will be the same every time.

Simplicity over complex power

And about the second question, "Why not Kubernetes?". Well, it boils down to complexity.
Kubernetes is really powerful, at the cost of complexity. It has a notoriously complex ecosystem with lots of moving parts, requires the configuration of numerous interoperating components to manage and deploy, and overall, setting up your own Kubernetes cluster can be difficult.

Nomad and the HashiStack in general are significantly simpler, requiring far less configuration to be up and running. This is because, unlike Kubernetes, which aims to provide everything out of the box, the HashiStack is specialized: Nomad only deals with scheduling and running containers, while Consul focuses on service mesh and service discovery.
With the HashiStack, you can simply skip a tool if you don't use it. Meanwhile, with Kubernetes, all the parts are running even if you don't use them.

Immutability and simplicity really shine

The power of HashiStack and the immutability of CoreOS

Well, in conclusion, the HashiStack is a really powerful infrastructure stack whose real power shines when paired with an immutable approach such as CoreOS for highly predictable environments.

Not only can it be used during development and staging to develop as close as possible to the production environment, but also in production to ensure everything works as intended, without hiccups or issues that may introduce downtime.

A small token of appreciation for FOSS

To that end, we at SigaMeCar have set up a gift for anyone who would like to start playing around with the HashiStack inside CoreOS with Terraform.

We present Harbor, a Terraform configuration that provisions a CoreOS cluster of VMs on a KVM server, automatically setting up the HashiStack software for ease of use.
It is licensed under the GPL version 3 (or later), so feel free to use and contribute to the project!


I hope this has been useful and sparks curiosity about alternative deployments for improving the lifecycle of servers with new and innovative paradigms in production.

A huge thank you to Kerim Satirli, Joe Rajewski, Kent Gruber, and the team at HashiCorp for the opportunity to participate in HashiTalks, and to the team at Red Hat and the Fedora Project for the inspiration and the amazing tool that CoreOS is.

I also remain available at any time on Mastodon (@akatsukilevi@mastodon.social) for any questions you might have. Feel free to reach out with questions, ideas, or to show what you achieved with this concept!

Until then, I'm Felipe Angelo, see you in the next post!
