rndmh3ro

Originally published at zufallsheld.de

DevOps workflows and reliable automation

The company I work for provides a broad range of IT services to our customers, and to offer the best quality of service, we rely heavily on automation, especially Red Hat Ansible. In this blog post, we'll look at what our workflow looks like when automating with Ansible and what tools we use along the way to ensure reliable automation.

Our DevOps process and why we choose Ansible

I am a Systems Architect in one of our operations departments that provides the infrastructure, management, and hosting of our customers’ applications. I try to build solutions to improve and automate our work for all our operations teams. Ansible is one of the tools we rely on heavily in our work.

Since most of the tools we use are open source, I try to publish all our tools as open source as well. They are intended for a wide range of customer configurations, so their code can get quite complex, and I need to rely on contributions from my colleagues and outside contributors. To make contributing easier, all the code is hosted on GitHub.

We rely heavily on standardization and automation to test new contributions, bug fixes, and support for new operating systems or tools.

Tools for maintaining code quality

Fortunately, there is a wide range of tools in the Ansible world that we can use to test our changes and make sure the code is of high quality.

Ansible Lint

The first one is Ansible Lint, a command-line tool for linting playbooks, roles, and collections. Its main goal is to promote proven practices, patterns, and behaviors while avoiding common pitfalls that can easily lead to bugs or make code harder to maintain. Ansible Lint can be run from the command line, but nowadays it can also be integrated into most IDEs via the Language Server Protocol (LSP). LSPs provide features like auto-completion and code hints directly in the editor; they are supported by IDEs like VSCode and IntelliJ, but also by Neovim. I'm a fan of Neovim! So when I open a file with Ansible code, the LSP automatically provides hints from ansible-lint about what could be improved. And when I write new Ansible code, the LSP helpfully auto-completes Ansible parameters and their values, so I don't have to guess which parameters a module supports.
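To make this concrete, here is a small, made-up task that trips two common ansible-lint rules (the FQCN check and risky-file-permissions); the playbook, paths, and names are invented for illustration:

```yaml
# sample_playbook.yml -- ansible-lint flags both of the issues commented below
- hosts: all
  tasks:
    - name: Ensure the application config directory exists
      file:                # fqcn rule: suggests ansible.builtin.file instead
        path: /etc/myapp
        state: directory   # risky-file-permissions: no explicit mode set
```

Running ansible-lint sample_playbook.yml on the command line reports the same findings that the LSP shows inline in the editor, which makes the tool easy to reuse in CI.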

Steampunk Spotter

I recently had the pleasure of testing Steampunk Spotter, a new Ansible Playbook scanning tool. In case you haven't heard of it, it scans, analyzes, and provides recommendations for your Ansible Playbooks to help you increase the reliability and security of your automation. That makes it a perfect companion to Ansible Lint, since it introduces special checks for more complex scenarios such as upgrading to newer Ansible versions.

Spotter may not have as many rules as ansible-lint, but one thing I already love about it is its handling of the dreaded Fully Qualified Collection Names (FQCN). In the past, you could write file as the module name; nowadays you should use the FQCN ansible.builtin.file. When working with older codebases where FQCNs aren't used, I could just ignore this rule. However, I prefer to fix things up, and that is where Spotter shines: it can automatically fix some of the problems, for example by replacing legacy module names with their FQCNs. It also saves me time with features such as generating a requirements.yml file or pointing me to the module documentation of a specific version. And it provides hints for best practices, for example, to set the mode when using the file module.
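As an illustration, this is roughly what that looks like on a made-up task: the FQCN swap is what the automatic rewrite does, and adding the mode follows Spotter's best-practice hint.

```yaml
# Before: legacy short module name, no mode set
- name: Deploy the application configuration
  file:
    path: /etc/myapp/config.yml
    state: touch

# After: FQCN from the automatic fix, plus the explicit mode Spotter hints at
- name: Deploy the application configuration
  ansible.builtin.file:
    path: /etc/myapp/config.yml
    state: touch
    mode: "0644"
```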

Molecule

When my code changes are complete or the contributed code looks good, I test them. Here I use Molecule, which provides support for testing with multiple instances, operating systems, distributions, virtualization providers, test frameworks, and testing scenarios.
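A minimal scenario definition for such a setup might look like this; I use the Docker driver, as described in the next paragraph, and the image names here are just examples, not our exact configuration:

```yaml
# molecule/default/molecule.yml -- a minimal sketch using the Docker driver
driver:
  name: docker
platforms:
  - name: instance-ubuntu
    image: ubuntu:22.04
    pre_build_image: true
  - name: instance-rocky
    image: rockylinux:9
    pre_build_image: true
provisioner:
  name: ansible
```

Running molecule test then creates both containers, applies the role, and tears the containers down again.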

As I mentioned, my playbooks and roles must work on multiple operating systems. To test this, I use Molecule in combination with Docker. Molecule automatically starts containers with the operating systems I specify and executes my Ansible code. I do this on my local machine for at least one operating system. If the test succeeds, I push the code to GitHub, where all other operating systems are tested automatically. I mainly use GitHub Actions for CI/CD pipelines, and in this pipeline Molecule is executed for every supported operating system, as the sketch below shows. This way, local testing and the build pipeline use the same tools.
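Here is a sketch of what such a pipeline job can look like; the distro list, the MOLECULE_DISTRO variable convention, and the package names are assumptions based on common Molecule setups, not our exact workflow:

```yaml
# .github/workflows/molecule.yml -- one Molecule run per supported distro
name: Molecule
on: [push, pull_request]
jobs:
  molecule:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        distro: [ubuntu2204, rockylinux9, debian12]
    steps:
      - uses: actions/checkout@v4
      - name: Install test dependencies
        run: pip install ansible molecule "molecule-plugins[docker]"
      - name: Run Molecule for one distro
        run: molecule test
        env:
          # molecule.yml can read this variable to pick the platform image
          MOLECULE_DISTRO: ${{ matrix.distro }}
```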

Chef InSpec

We do not rely on Ansible alone to verify that our code makes the correct changes; we use Chef InSpec to test the results. Chef InSpec is an open-source testing framework for infrastructure with a human- and machine-readable language for specifying compliance, security, and policy requirements.

We define the desired state of our automation in “baselines.” Baselines are test cases that check if the server or the application is configured the way we want it. These tests run after executing our Ansible code. Only when these tests pass will the changes be accepted and merged.
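In the pipeline, this boils down to two steps: converge the test container with Ansible, then run the baseline against it. A sketch of those steps follows; the baseline path, container name, and Docker target are assumptions for illustration:

```yaml
# Excerpt from a CI job -- verify the converged container with InSpec
- name: Apply the Ansible code to the test container
  run: molecule converge
- name: Run the InSpec baseline against the same container
  run: inspec exec ./test/baseline --target docker://instance-ubuntu
```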

Our release pipeline

Once the code is merged into the master branch, we need to release it. This process is also mostly automated using GitHub Actions and consists of several steps. First, it creates a changelog based on the pull requests and their labels. It then creates a new release draft containing the changes in the GitHub repository. This release draft is published manually - that way we can cut a release with meaningful changes and not just typo fixes. After publishing the release, another automation process is triggered, which deploys the code to Ansible Galaxy.
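As a rough sketch, the publishing half of this pipeline could look like the following workflow; the secret name and the use of the ansible-galaxy CLI here are illustrative assumptions, not our exact setup:

```yaml
# .github/workflows/release.yml -- runs after a release draft is published
name: Release
on:
  release:
    types: [published]
jobs:
  galaxy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Ansible
        run: pip install ansible
      - name: Import the role into Ansible Galaxy
        run: >-
          ansible-galaxy role import
          --api-key "${{ secrets.GALAXY_API_KEY }}"
          ${{ github.repository_owner }} ${{ github.event.repository.name }}
```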

And that’s it. Using a combination of the right tools helps us guarantee our playbooks are high-quality, secure, and reliable. This means we can optimize and speed up our automation while trusting it completely.
