Header image (C) Tai Kedzierski
What is DevOps, really? Straight up: no tool "is" a DevOps-enabling tool, in-and-of-itself.
Various buzzwords around "DevOps tooling" and "Continuous X" have sprung into being, and whilst they are useful as conceptual terminology, reaching too soon for their corresponding marketing productizations leap-frogs over the essential groundwork of asking: what effect does this have on our practices?
In its most abstract form, "DevOps" has been expressed as being "developers and operations working together."
The term comes broadly from the web-oriented/Web 2.0/SaaS space, where developers would make changes to the core product code, and operations would take these in whatever form and deploy them into production, following a basic workflow of produce-then-deploy. Observing the resulting bottlenecks, and applying the Agile principles of small, frequent iterations, the term DevOps eventually emerged to describe a practice of bringing both developers and operations into the activity of refining release cycles. In the years since the coining of the term, various interpretations have arisen.
A question to keep in mind whilst considering the options and notes below is to ask "why?" - or: "what are we setting out to achieve by implementing a DevOps approach?"
- faster delivery of product
- shorter development cycles
- easier maintenance of release branches
- more maintainable release procedures
- better auditability of builds (easy to identify what changes went where, and "where to get the latest X")
- more automation, less manual operation
- better synchronisation and coordination of deliverables (builds, documentation, release notes, …)
- ... all of the above, and more ...?
One issue has been a focus on defining DevOps as a set of tools rather than as a way of working, and different organisations approach the domain differently depending on which of these two ways of thinking they favour.
For the "set of tools" way of thinking, see the myriad Solutions produced to "do DevOps better."
For a "way of working," this is up to the organisation to define, but it ultimately boils down to:
- first collectively (all members on both sides) agreeing on the architecture and flow and goal, to get from "source code" to "deliverable"
- then deciding what tooling meets that design
As with many things in software engineering, "do one thing and do it well" is a good starting point. It is not an inviolable rule (the primary doctrine is to avoid doctrine), but there should be good reason before inherent complexity is introduced into a given system.
Often, when organisations start on a DevOps route, tools are invoked, installed, and set up, and things are strung together ad hoc. Whilst this can work, it usually causes a lot of 🍝 tight coupling where none should exist, and weaves multiple self-referential 🪢 knots into the codebase, allowing very difficult technical debt to cement itself (mixed metaphors be pardoned).
DevOps has gained more widespread adoption in non-web development organisations - the end deliverable being not "a deployed website," but some built binary which can then be shipped to another team or process, or indeed to a customer. "To publish" can mean any of:
- to push the files to a web server (original scenario)
- to put the final tarball in an artifact server
- to hand a ZIP file download URL to the manufacturing division for installation on devices.
- to produce a DEB or RPM ready for clients to pull
- to place a build output on a filesystem (ready for another process to consume)
Essentially, determine: where do our pipes end? Deliver to there.
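The filesystem case above is the simplest "end of pipe" to sketch. The following is a minimal illustration, with entirely hypothetical paths, of what such a publish step might look like - including a stand-in for the build output so it is self-contained:

```shell
#!/usr/bin/env sh
# Sketch of a filesystem "end of pipe": deliver the build output to an
# agreed location. All paths here are hypothetical placeholders.
set -eu

ARTIFACT="${ARTIFACT:-/tmp/build/myapp.tar.gz}"
DELIVERY_DIR="${DELIVERY_DIR:-/tmp/deliverables/myapp}"

# Stand-in for a real build output, so the sketch is self-contained.
mkdir -p "$(dirname "$ARTIFACT")"
printf 'fake build output\n' > "$ARTIFACT"

# Deliver a timestamped copy so previous releases remain identifiable.
mkdir -p "$DELIVERY_DIR"
cp "$ARTIFACT" "$DELIVERY_DIR/myapp-$(date +%Y%m%d%H%M%S).tar.gz"
ls "$DELIVERY_DIR"
```

Whatever the target - web server, package repo, or plain directory - the point is the same: the pipeline has a defined, agreed end.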
In my view, the main aim of thinking about the problem up-front is to ensure good interfacing between teams' responsibilities, and to segregate the responsibilities of individual scripts/pipelines. Each script should do exactly one of the following:
- define an environment (a setup script)
  - the deliverable is an environment itself
- execute a single build (a build script)
  - consume source code and/or dependencies, in the context of an environment
  - the deliverable is a build binary
- orchestrate builds (a pipeline)
  - define the order of builds
  - inform later steps of the outputs of earlier steps
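The three-way split can be sketched in a few lines of shell. The function and path names here are purely illustrative - the point is only the segregation of responsibilities:

```shell
#!/usr/bin/env sh
# Sketch of the setup/build/pipeline split. Names and paths are
# illustrative only.
set -eu

# Setup script: defines the environment - the environment is the deliverable.
setup_env() {
    mkdir -p /tmp/devops-demo
    echo "toolchain: make, gcc" > /tmp/devops-demo/env-manifest.txt
}

# Build script: consumes sources in that environment, emits one binary.
build_one() {
    echo "binary contents" > /tmp/devops-demo/app.bin
}

# Pipeline: orchestrates - orders the builds, passes outputs forward.
pipeline() {
    setup_env
    build_one
    echo "built /tmp/devops-demo/app.bin under: $(cat /tmp/devops-demo/env-manifest.txt)"
}

pipeline
```

Because each piece does one thing, the setup script can be reused by other builds, and the build script can be run by hand outside any pipeline - which is exactly what keeps the knots out.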
This might sound like Dev and Ops are not working together - but quite the contrary: to achieve this, they must work in close coordination. If there is not one single team, then there are at least two (usually more), with distinct capabilities and specialisations. "Working together" (in the interpretation I am musing on here) means the team members can reach out to each other directly, across teams, to achieve the agreed goal. The earlier statement of "first collectively agreeing" what the goal is, and what the scopes of responsibility are, is a prerequisite to this.
Each team should be able to operate on its own responsibilities independently for routine maintenance without affecting the other, but both should come together to agree on a number of points:
- Dev needs to declare 📜 "this is how to invoke the build script in a minimal number of commands" - for example, `make clean && make all`
- Dev must define 💻 "these are the environment requirements" ("needs to be a Rocky Linux 9.1 server, with the `lxml-dev` libraries installed, …")
- Ops needs to supply the 🌉 infrastructure and pipelines on which to execute the build and delivery mechanisms
- Ops needs to provide any 📚 other outputs (like build logs, test reports, etc) as corollary deliverables, through desired channels - package repos, etc - and advise how to access these
- Ops may also be directed to integrate 🔀 other operations into the pipeline - running tests written by automation QA, or producing corresponding user manual PDF/HTML/InDesign files
  - a further inclusion of another 👋 Dev/Author team!
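To make the Dev/Ops contract concrete, here is a sketch of an Ops-side wrapper that runs whatever build command Dev declared and always delivers the log as a corollary deliverable. `BUILD_CMD` defaults to a stand-in so the sketch is runnable; in a real pipeline it would be the Dev-declared entry point, e.g. `make clean && make all`:

```shell
#!/usr/bin/env sh
# Ops-side wrapper (sketch): run the Dev-declared build command, and keep
# the log as a corollary deliverable. Paths and defaults are hypothetical.
set -eu

BUILD_CMD="${BUILD_CMD:-echo building...}"
LOG_DIR="${LOG_DIR:-/tmp/build-logs}"
mkdir -p "$LOG_DIR"
LOG="$LOG_DIR/build-$(date +%Y%m%d%H%M%S).log"

# Capture stdout+stderr; surface the log location whether or not the
# build succeeds, so the corollary deliverable is always available.
if sh -c "$BUILD_CMD" > "$LOG" 2>&1; then
    echo "build OK - log delivered to $LOG"
else
    echo "build FAILED - log delivered to $LOG" >&2
    exit 1
fi
```

Note that Ops needs to know nothing about the build internals - only the declared invocation and the declared environment. That is the interface.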
To achieve this, regular close collaboration is needed. None of what is "supplied" by either team is set in stone - in particular, each team will need adjustments from the other to achieve the goal.
A way of organising the flow of source-code-to-deliverable is:
🌱 Identify base component parts - build them and version them as deliverables themselves
🪴 Identify complex parts - consume base parts as one would consume a third-party library, and produce a new versioned deliverable
🌳 Identify a given product - consume base and complex parts as you would consume third-party solutions, and assemble these to produce a versioned deliverable: the software release.
Each of these comprises the following:
📂 A source of dependencies
- base component: a code repo
- complex components: a code repo, and a binaries repo for pre-built dependencies
- Product: usually pre-built dependencies, and some manifest files from a repo
🛠️ A pipeline
- each level has its own, stand-alone pipeline
📦 A delivery target (file format, and repository location)
- base and complex components: usually a package repository, but an artifact server/filesystem is equally suitable
- product: usually a package repository, image repo (VMs and OCI/Docker images), cold-storage vault.
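At the product level, the essential move is assembling pre-built, versioned components according to a pinned manifest. A minimal sketch, with entirely illustrative component names, versions, and repo paths (and a seeded stand-in for the binaries repo):

```shell
#!/usr/bin/env sh
# Sketch of product-level assembly: consume versioned, pre-built
# components per a pinned manifest. All names/versions are illustrative.
set -eu

REPO="/tmp/binaries-repo"               # stand-in for an artifact server
RELEASE_DIR="/tmp/release/product-2.0.0"

# Seed the stand-in repo with "pre-built" base and complex components.
mkdir -p "$REPO"
echo base    > "$REPO/libbase-1.4.2.tar.gz"
echo complex > "$REPO/middleware-0.9.1.tar.gz"

# The manifest pins exact versions - this is what makes it easy to answer
# "what changes went where" for any given release.
MANIFEST="libbase-1.4.2.tar.gz
middleware-0.9.1.tar.gz"

mkdir -p "$RELEASE_DIR"
echo "$MANIFEST" | while read -r component; do
    cp "$REPO/$component" "$RELEASE_DIR/"
done

ls "$RELEASE_DIR"
```

Because each level only ever consumes versioned outputs of the level below, the auditability goal from earlier - "what changes went where" - falls out of the structure for free.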
There are certainly more considerations than the above, but I wanted to highlight just how much of this is more than just tooling - you could probably put together a full, maintainable solution using just `nginx` and a few hundred lines of scripting (and no, I am not setting a challenge to do this! but do feel free...).
It is about defining scope collectively, cultivating a shared understanding of the goal, and keeping components simple.