By Taurai Mutimutema
As more developers use Kubernetes, a variety of deployment tools are emerging to help them. Three interesting examples are Skaffold, Tilt, and DevSpace. While they all assist in building and deploying on Kubernetes clusters, their approaches are noticeably different.
For instance, if your team includes less experienced developers, you could benefit from Tilt's UI focus. The command-line interface (CLI) focus of Skaffold and DevSpace, on the other hand, may better suit veteran Kubernetes engineers.
In this article, we’ll examine the differences and use cases of each of these three tools, so you can determine which is best for your team.
Skaffold is an open-source command-line interface (CLI) tool that handles the build, push, and even deploy tasks of a Kubernetes workflow. Its Git repository is part of Google's Container Tools project. Skaffold's user experience fits better with seasoned Kubernetes engineers because all commands are issued through the CLI.
Each stage of a Skaffold-managed workflow handles artifacts, testing and integration, or deployment, treating them as building blocks for your pipeline. Developer teams can customize the tools used at each stage.
Skaffold allows you to use a wide range of external tools along your pipeline.
At the build stage, you can utilize Jib (via Maven or Gradle), Bazel, a Dockerfile, or Cloud Native Buildpacks. You can also create custom scripts developers can run to create predefined environments as forks from the main cluster.
As is customary with every CI/CD workflow, Skaffold facilitates tests that run as soon as the engine discovers new code in your source files. By default, every container image you build undergoes a structure test before it's pushed to your preferred cluster. You also have the option to run custom test scripts, but you must opt into the custom route in the skaffold.yaml file for the engine to look for your scripts along the continuous integration path.
The deploy phase also lets you use one of the following deployers: Kustomize, Helm, or kubectl. You specify which one in the YAML configuration file.
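As a rough sketch of how these choices look in configuration (the schema version, image name, and file paths here are hypothetical, not from a real project), a skaffold.yaml might declare its tests and deployer like this:

```yaml
# Hypothetical skaffold.yaml excerpt; names and paths are examples only.
apiVersion: skaffold/v2beta29
kind: Config
test:
  - image: my-app                    # image whose builds get tested
    structureTests:
      - ./test/structure-test.yaml   # default-style container structure test
    custom:
      - command: ./test/run-tests.sh # opt-in custom test script
deploy:
  kubectl:                           # could instead be helm: or kustomize:
    manifests:
      - k8s/*.yaml
```

Swapping kubectl for Helm or Kustomize is then a matter of replacing the deploy block rather than rewriting the pipeline.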
To grasp fully how Skaffold works, you need an understanding of its core features.
Skaffold's CLI provides commands to control your instance's workflow and the applications it contains.
Once you’ve installed Skaffold on a development node, you can use the following commands to move through your pipeline:
- skaffold init: Launches a wizard with breakpoints along the process of building a skaffold.yaml file. You set all configuration values during the wizard but can change them manually at any point in the future.
- skaffold dev: Triggers your instance to track changes you make to any application files using file sync. Any edits are tested and pushed to the deploy stage; on localhost, you see them take effect almost instantaneously.
- File sync: Automatically updates your local or cloud-hosted clusters each time you change your dev files. The sync also picks up dependencies introduced by new changes.
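File sync is configured per artifact in skaffold.yaml. As a hedged sketch (the image name and paths are assumptions), a manual sync rule copies matching files straight into the running container instead of triggering a rebuild:

```yaml
# Hypothetical skaffold.yaml excerpt showing a manual file sync rule.
build:
  artifacts:
    - image: my-app
      sync:
        manual:
          - src: "src/**/*.py"   # local files to watch
            dest: /app           # destination inside the container
```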
Finally, Skaffold comes with a skaffold.yaml file that looks like this:
```yaml
- image: github.com/googlecontainertools/skaffold/examples/custom
```
Think of this file as the toggle station for any CI/CD configurations and options. You can change the tools in use and how they execute during any of the building blocks along your workflow.
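For context, that image line comes from Skaffold's custom-builder example. A minimal file around it might look roughly like this (the schema version and build script name are assumptions):

```yaml
# Hypothetical minimal skaffold.yaml; version and script name are examples.
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: github.com/googlecontainertools/skaffold/examples/custom
      custom:
        buildCommand: ./build.sh   # assumed script that builds and tags the image
```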
A standard use case for Skaffold is remotely distributed teams contributing to a single project. The lightweight nature of each installation makes it perfect for on-the-fly changes that don’t hinge on local machines’ capabilities. Skaffold lacks a UI, though, which limits usage to technically adept users.
The newest of the three, Tilt offers a client-only user interface that's easy and intuitive for novice engineers. A key feature of Tilt is that changes made to your code take effect in real time on your local or remote target cluster.
Onboarding a team of developers onto Tilt may require the most steps of the three tools. First, you decide whether you’re targeting a local cluster or one that’s remote. Only then can you select tools that correspond to the Kubernetes experience level of your crew—beginner, intermediate, or advanced.
If you’re planning to use Tilt as a Kubernetes manager locally, then you can choose Docker Desktop, kind, or MicroK8s to run on top of it. These are best for teams with little to no Kubernetes experience.
Developers with mid-level experience will find K3s and Minikube a good match for their skills. Senior devs can opt for managed services like Amazon EKS, Google GKE, or Azure AKS. They can also build custom clusters, using Tilt as their manager service.
Tilt offers developers a faster development experience thanks to the live update feature and an interactive interface that controls the dev environment.
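Tilt is driven by a Tiltfile written in Starlark. As a sketch of how live updates are wired up (the image, paths, and resource names here are hypothetical), a docker_build call can list live_update steps that patch the running container instead of rebuilding the image:

```python
# Tiltfile (Starlark) sketch; image, paths, and resource names are hypothetical.
docker_build(
    'my-app',
    '.',
    live_update=[
        sync('./src', '/app/src'),  # copy changed files straight into the container
        run('pip install -r requirements.txt', trigger='./requirements.txt'),
    ],
)
k8s_yaml('k8s/deployment.yaml')            # manifests Tilt should manage
k8s_resource('my-app', port_forwards=8080)  # expose the service locally
```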
The user interface provisioned for your dev instance is also a log streaming window that follows and records every change you make against any actions taken by Tilt. This helps when debugging any code integrations pushed for deployment.
After you set up Tilt on your local machine, the management interface launches once you run the tilt up command.
This is the central viewport into manual resource control and environment enhancement through the open-source extensions for Tilt.
While the Tilt and Kubernetes instances we’ve combined in this demo are entirely local, a remote connection option is available by default. You can set that up in the Tilt dashboard.
Once it’s configured, you can collaborate on Tilt-managed clusters with your team in the cloud.
DevSpace is a Kubernetes dev tool for experienced engineers. All interaction with containers is handled through the CLI on the client side. Once you've made edits to your code, DevSpace can push them up to Kubernetes clusters across any of the popular cloud service providers: GCP, AWS, Azure, DigitalOcean, Rancher, or Alibaba Cloud.
The installation process for DevSpace seems by far the least strenuous of the three.
Setting up a project in DevSpace gives you four deployment tool options:
- Helm: Use the Component Helm Chart (quickstart)
- Helm: Use my own Helm chart (e.g., local via ./chart/ or any remote chart)
- kubectl: Use existing Kubernetes manifests (e.g., ./kube/deployment.yaml)
- Kustomize: Use an existing Kustomization (e.g., ./kube/kustomization/)
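These options map onto the deployments section of a devspace.yaml file. As a hedged sketch (the schema version, image, and names are assumptions), the Component Helm Chart quickstart choice might look like this:

```yaml
# Hypothetical devspace.yaml excerpt; version string and names are examples.
version: v1beta11
images:
  app:
    image: my-registry/my-app
deployments:
  - name: my-app
    helm:
      componentChart: true        # the Component Helm Chart quickstart option
      values:
        containers:
          - image: my-registry/my-app
```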
You may have noticed these as the next steps on the installation screenshot above. Once that's set, you can configure your tool of choice to push changes and images to the cloud automatically. Each image is tagged according to a set schema and can be traced through your pipeline if you need to debug it later.
Turning your dev and production environments into a persistent blue/green scenario with live version updates is made possible by the devspace dev command, which activates a file sync pathway between your local machine and the cluster online.
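That sync pathway is declared in devspace.yaml as well. As a rough sketch (the selector value and paths are assumptions, and the exact field names vary between DevSpace schema versions), a sync rule pairs a local directory with a path inside the container:

```yaml
# Hypothetical devspace.yaml excerpt; selector and paths are examples.
dev:
  sync:
    - imageSelector: my-registry/my-app  # container(s) to sync with
      localSubPath: ./src                # local directory to watch
      containerPath: /app/src            # destination inside the container
```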
You can run DevSpace with teams that have the chemistry to mold your workflow’s development, testing, and staging parts into a very short pipeline. These setups often use custom configurations at each node across the pipeline. A potential downside is that comparatively new developers might be less able to contribute to the architecture’s upkeep.
Using any of these tools on top of a Kubernetes implementation gives your instances some worthy enhancements, from instant updates to the production environment clusters to log streaming on handy UI dashboards in the browser.
To select an option, consider each tool’s ease of installation and iteration of dev instances, along with your team’s comfort level.
No matter which of the three you choose, you’ll be able to work with Loft, which provisions self-service namespaces and virtual clusters as development environments in shared clusters. There is a Loft plugin for DevSpace that provides even tighter integration for teams using both tools together.
Loft enhances team collaboration, which will become even more necessary as coding efforts move further from centralized cubicle setups.