Docker Swarm has matured enough that its adoption is starting to pick up. Docker Captains are working with real clients rolling it out every day. That's fine for clustered workloads, but what about locally?
Should you be using Swarm Mode even for local development? In my opinion, TL;DR - Yes.
Swarm Mode has many benefits that only make sense in production, multi-host scenarios, but used locally it can still add value and reduce the differences between Development and Production (#win). What does Swarm actually do?
Swarm elevates a random collection of Docker hosts (or just one) into a cluster, which orchestrates (fancy word for automatic management) starting and stopping containers, managing cluster-wide information and providing transparent connectivity between hosts.
With one host, transparent connectivity isn't a win, but the same interface for cluster information management (Secrets, Configs, Labels) and the advanced service management is. In the same way that Compose is an evolution over plain Docker, Swarm is a step forward over Compose. Here are my top 3 reasons why you should be thinking about using Swarm locally.
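As a quick illustration, turning a single local machine into a swarm is one command (standard Docker CLI):

docker swarm init   # turn the local Docker engine into a single-node swarm
docker node ls      # one node, acting as both manager and worker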
Reduce Surprises - Simplification and Consistency
Let's face one of the elephants in the room: Docker Swarm (as of 18.09) doesn't currently have feature parity with docker run. Excluding concepts that just plain don't make sense at the service level of abstraction (e.g. container names, restart policies), some of the more 'fringe' options are not supported (sysctl kernel tuning, host device mapping), although some are currently work-in-progress. There is an excellent tracker of gaps (and progress) here.
This means that a compose file that works when targeting docker-compose isn't guaranteed to work 100% with Docker Swarm without tweaks. Sorry, but nothing is perfect.
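To make the comparison concrete, here is a rough sketch of pointing the same compose file at both tools (the file and stack names are illustrative):

docker-compose -f docker-compose.yml up -d        # classic Compose
docker stack deploy -c docker-compose.yml myapp   # Swarm stack; options Swarm doesn't support are warned about and ignored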
If Swarm is still for you in production, then it makes sense to use Swarm locally, to avoid having to:
- effectively learn two product 'versions' at once,
- maintain two or more compose files in parallel, creating more potential for mistakes.
Secrets
Docker Secrets allow the secure publishing of sensitive details for an application to consume, taking the recommendations of the 12 Factor App to a new level by providing a much more granular approach than environment variables. They are great, but they do have limitations: nothing is entirely secure (the old adage: if you have host access, nothing is secure) and they need to be supported by the application (or an appropriate entrypoint script).
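As a hedged sketch of what 'supported by the application' can look like, here is an entrypoint fragment that reads a value from the path Swarm mounts secrets at; the secret and variable names are hypothetical:

#!/bin/sh
# Swarm mounts each secret as a file under /run/secrets/<secret_name>
if [ -f /run/secrets/application_secret ]; then
  # expose the value to the application as an environment variable (hypothetical name)
  APPLICATION_SECRET="$(cat /run/secrets/application_secret)"
  export APPLICATION_SECRET
fi
exec "$@"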
Accepting these limitations, they are only available in Docker Swarm, as their implementation is tied to Swarm's internal cluster Raft database. That means, if you're using plain docker or Compose you can't access them (see below for an exception). But as we've just agreed above, your application will likely need changes to take advantage of Secrets, so do you really want to start adding if statements to code to handle Dev vs. Prod? (answer is no).
Yes, you can use Docker Secrets with Compose, but they get namespaced by Compose (<stack>_ is prepended), which means you need to maintain two different references to secrets depending on the environment (again not good, as it causes differences between Dev and Prod). It is possible, but it is down to you, your development flow and how your code is organised.
This means, in reality, that if you want Secrets in your application, you should use Swarm locally. There are ways around it (simulating them, injecting environment variables, branching in code), but they all point to the same fact: you are doing extra work just to get to the same point.
Next hurdle: Secrets are immutable (they can't be viewed or amended once created). This is not development friendly; these values are likely to change as you iterate. Fortunately, there is an option to source secrets from a file. Keeping them in files means it is much quicker and easier to tear down a stack and its secrets, then recreate them if you rebuild your environment or need to change a secret.
For example, on one of my current projects (which has separate development and production compose files - multi-stage builds planned!), I can cleanly reference the same secret in my code, but have a simple workflow to change it in Development.
development.yml
version: '3.6'
services:
  appserver:
    image: appserver:latest
    secrets:
      - application_secret
secrets:
  application_secret:
    file: ./compose/local/secrets/application_secret
production.yml
version: '3.6'
services:
  appserver:
    image: appserver:latest
    secrets:
      - application_secret
secrets:
  application_secret:
    external: true
The only difference here is the 'source' of the secret.
Sourcing from a file kind of defeats the purpose of a secret, but it preserves the interface.
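With a file-backed secret, changing a value in development is just a quick tear-down and redeploy cycle, something like this (the stack name is illustrative):

docker stack rm myapp                                              # removes the services and their secrets
echo "new-dev-value" > ./compose/local/secrets/application_secret
docker stack deploy -c development.yml myapp                       # recreates the secret from the file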
Marking it as external in production requires manual intervention to create the secret; however, in larger organisations the person who holds the contents of the secret may be very different from the person deploying the code. When it comes to deployments where security matters, separation of responsibility keeps the clipboards at bay.
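In practice that manual step is a one-liner, run by whoever is trusted with the value before the stack is deployed (the value and stack name here are made up):

printf 'the-real-secret-value' | docker secret create application_secret -
docker stack deploy -c production.yml myapp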
Keeping as much consistency as possible between environments reduces the complexity of the code that handles our secrets and keeps things clean.
Test Scaling
Horizontal scaling of an application requires thought. Just because Docker can scale a service out to multiple replicas doesn't automatically mean your application will handle being run in this manner.
Run your code at scale locally (just because you have one host doesn't mean you can't run multiple instances of a container) and shake these issues out early. This will start to weed out problems with concurrent access to databases, message queues and files on shared storage (to name a few). Swarm will still load balance between multiple instances automatically, even on one host.
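A minimal sketch on a single-node swarm (the stack and service names are illustrative):

docker service scale myapp_appserver=3   # run three replicas of the app service
docker service ps myapp_appserver        # all three tasks land on the one local node

Requests to the service's published port are spread across the replicas by Swarm's routing mesh, so concurrency problems surface without any extra tooling.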
Coming from the background I do, predictability, simplicity and ease of handover from Development to Operations trump novelty. In my eyes, adopting Swarm locally helps reduce surprises and makes life easier throughout the whole Develop and Deploy lifecycle. That's all for now!