Jonas Brømsø

Can Continuous Deployment Be Considered Harmful?

With improved tooling and toolchains we have, over the years, seen a rise in the popularity of Continuous Deployment (CD). The term is most often used in conjunction with Continuous Integration (CI): automated testing integrated into your toolchain, so all changes are tested and evaluated for validity and correctness. The natural next step is to have the tested code deployed automatically using continuous deployment, hence the term CI/CD.

These are both great things and, in combination, a very powerful enhancement to the toolchain and the Software Development Life Cycle (SDLC).

And I am all for shipping code. In my opinion, unshipped code, meaning:

  • Code that only resides in a feature branch
  • Code that only resides in a version control system

These two categories of code do not bring any value, and over time they become a problem. Without being evaluated in the context of real users and real usage, code cannot prove its worth; shipping and deployment are essential for this evaluation to take place.

BUT do we have a challenge with regard to the other aspects and activities related to shipping and deploying software? Activities like:

  • Release/deployment scheduling
  • Release Announcements
  • Documentation updates
  • Migration plans
  • On-boarding plans

Many of these activities often happen outside the development team and have to take human and non-technical aspects into consideration.

When we release software we normally have to meet the following criteria:

  • The code must be finished and deliverable
  • The code must pass quality requirements in order to be delivered
  • The documentation has to describe the delivered software

A brief note: by documentation I mean all material that describes and depicts the software. That can be whitepapers, videos, written content, diagrams, tutorials etc.; whatever is required to explain how the software works and how it is expected to be used.

All of the above activities can be an integral part of the CI/CD pipeline if the documentation is kept close to the source code and is part of the same release process. Often, however, documentation is not kept close to the source and the deliverable; it is created in parallel, outside of the software development process.

Now let's say we are shipping a new feature, a feature introducing a new capability for our piece of software.

The software team can, using CI and good development practices, make sure that our software lives up to the requirements mentioned above. Perhaps the developers even deliver technical documentation describing the technical facets of the solution; depending on the audience of the software, this might not be easily consumable by the users.

Often product people bridge that gap, translating the features to the users, just as they often formulate the original requirements, outlining them to the developers; in this case, the feature that we just built.

This places some requirements on the software development process, since the documentation targeting the non-technical end users has to go hand in hand with the actual software deliverable.

It gives us two options, either the documentation is written:

  • in parallel with the production of the software
  • after the software has been produced

The latter is by far the easiest, since you only have to describe what is delivered; examples and screenshots can be produced from the actual product rather than from the sketches and mock-ups supporting the design and requirements phase.

The first option works better with continuous deployment, since the documentation can be developed together with the software and hence handled just like the software, adjusting to changes and discoveries as the software is developed iteratively.

As you might have spotted, the second option actually conflicts with continuous deployment: it does not take into consideration that the documentation has to be finished before we can deploy the new feature.

Perhaps a new feature could survive this; if we do not generate any awareness until we are ready, it is not really a problem. But not all features are innocent in this regard: some replace others, or disrupt an existing, well-known use case that our users rely on.

But this is actually solvable by a set of different measures.

  • We could deploy to staging servers and the software is available to users with access to this environment
  • We could deploy to production with feature toggles; this would enable us to monitor for possible regressions, and users could be moved to the new feature gradually, in sync with the introduction of the documentation

The first option is just make-believe and not true to continuous deployment as a concept, since we are not actually shipping to real users, so the validation we are looking for is not provided.

The second option is far more in line with the promise of continuous deployment. We can actually solve the challenge of having the documentation developed in semi-parallel with a controlled continuous deployment setup. With this flexibility and these controls to steer the process, we are in a very good place to use continuous deployment.
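To make the feature-toggle option concrete, here is a minimal sketch in Python. The toggle store, feature name, and user names are all hypothetical; a real setup would typically use a toggle service or library rather than an in-memory dict:

```python
# Minimal feature-toggle sketch. The toggle store is an in-memory dict
# here purely for illustration; production setups usually back this with
# a config service so toggles can change without a redeploy.

ENABLED_FOR = {"new_dashboard": {"alice", "bob"}}  # toggle -> users it is on for

def is_enabled(feature: str, user: str) -> bool:
    """Return True if the given feature is toggled on for this user."""
    return user in ENABLED_FOR.get(feature, set())

def render_dashboard(user: str) -> str:
    # The new code is deployed for everyone, but only visible to the
    # users the toggle selects; everyone else keeps the documented behavior.
    if is_enabled("new_dashboard", user):
        return "new dashboard"
    return "old dashboard"
```

The point is that deployment and release are decoupled: the code ships continuously, while exposure to users follows the documentation and communication schedule.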

Now, if we challenge the premise and look at some of the listed activities:

  • Release/deployment scheduling
  • Release Announcements
  • Documentation updates
  • Migration plans
  • On-boarding plans

For most features we can handle these via the proposed setup, even using the same means for most of the points.

More complex and disruptive changes, meaning features that involve data migrations, complex on-boarding schemes, or moving users from decommissioned features to new ones, might challenge the proposed setup. I do, however, believe that most of these can again be categorized as technical challenges rather than non-technical ones.

If a platform is extended with:

  • Dynamically triggered migration mechanisms, where users migrate themselves
  • Gradual roll-out of new features
  • Gradual roll-back of decommissioned features

We are in a better place. There will be plenty of challenges, and it is not an easy problem area, but it can be, and has been, solved, so look around for inspiration.
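One common building block for gradual roll-out is a stable percentage bucket per user. This is a sketch of that idea; the hashing scheme and function names are my own assumptions, not taken from any specific product:

```python
# Percentage-based gradual roll-out sketch: hash the user id together with
# the feature name so every user lands in a stable bucket from 0 to 99,
# then compare the bucket against the current roll-out percentage.
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically decide whether user_id is in the roll-out."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Raising percent from 0 towards 100 moves more users over, in sync with
# announcements and documentation, without redeploying anything.
```

The same mechanism can run in reverse for decommissioned features, lowering the percentage to roll users back gradually.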

Just as for documentation, scheduling has to be designed into the deployment process, and communication between development and product people becomes very important. The product people can, as with the documentation, prepare announcements and inform users in advance, prior to deployment.

By scheduling I do not just mean "do not deploy on Fridays". When you ship new features, you have to do this in dialogue with the users, since they might have other work to do. Your scheduling might disrupt their plans, road maps etc. if you do not provide sufficient leeway.

For some changes this is hard, and calendar scheduling in an agile setup is not easy, since it gives the impression of a waterfall-like methodology rather than an agile one. This of course depends very much on the:

  • product
  • users
  • environment

You might be in a situation with a class of product where you do not have to take this into consideration at all, and you can do as you please, on your own time.

The opposite is when you have tight coupling with your customers: a setup where the customers depend on you and your product, and where on-boarding of new features requires activities on the user side, for example for APIs. Then scheduling becomes very important. As stated, it can be minimized, but perhaps not eliminated entirely.

All of this sounds like immense amounts of work, and even the smallest changes drown in technical control mechanisms, communication plans and scheduling. Where did the continuous part go? "If the tests pass, deploy".

There is one more thing we have to take into consideration: what is it that we want to deploy?

If we categorize our changes, we are actually able to find some loopholes.

I have a very simple model for the elements I work with.

  • Features are specified and described changes to the product, meaning code that qualifies to go through the process outlined in this post
  • Bugs are deviations from what is specified, and bug fixes are corrections/changes that align the product with the specification

Now we have two categories. Let's start with bugs. Bugs are interesting: if we have followed our process and the expected behavior is not observed, we have a bug. Since we have already specified what we expect, documented it, and made it available to the users, the feature is there; it just does not behave as described. This means that we do not need to do the complete process again. All we need to do is communicate that a change has been deployed that corrects an observed misbehavior. Depending on the impact of the bug, this communication can be a simple release note, direct communication to impacted users, or a global announcement to all users about the bug fix.

This sounds very easy, and that part is quite easy. The hard part about bugs is locating and fixing them, and depending on their size and impact, the fix can be small or large. If overly large, it might make sense to treat the bug fix as a feature and make sure that documentation, scheduling, and communication are in place, just as for a feature.

The other category: features. Just as bug fixes, features can vary in size. As with the proposed handling of bugs, a feature can be insignificant in size, so that just an announcement of its introduction is necessary; if it is larger, documentation and communication might be necessary, and if it is disruptive, even scheduling might be necessary.

By categorizing bugs and features, and doing so conservatively and honestly, a loophole for doing continuous deployment opens up. This requires a setup where small features and bug fixes can be deployed continuously, preferably with a set of release notes. Supported by feature toggles, it will even be possible to control the process and collect feedback more easily.
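The loophole can be sketched as a small routing function. The category names come from the model above, while the process steps and the "disruptive" flag are hypothetical simplifications:

```python
# Route a change to a release process based on its category and impact.
# Small, non-disruptive changes take the continuous path; disruptive ones
# get the full documentation/communication/scheduling treatment.

def release_process(category: str, disruptive: bool) -> list[str]:
    if category == "bugfix" and not disruptive:
        return ["deploy", "release note"]
    if category == "feature" and not disruptive:
        return ["deploy behind toggle", "release note"]
    # Disruptive changes, or anything large, go through the whole process.
    return ["documentation", "announcement", "scheduling", "deploy behind toggle"]
```

Categorizing conservatively and honestly is what keeps such routing trustworthy; misfiling a disruptive feature as a small one defeats the whole setup.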

I am by no means opposed to continuous deployment, but the challenges I have observed in regard to communication and scheduling, and a tendency to treat all code changes equally, have made me wary of continuous deployment. The process must facilitate communication between developers, product people and, to the extent possible, the users. You might even be in a setup where operations is a fourth party playing an active role in the deployment, which makes it even more complex.

  • Communication is key
  • Understanding your users (and their needs) is key

If your deployment process does not take into consideration that there is somebody on the receiving end, and you only focus on features, how much you can automate, how many new features you can ship, and how many times a day you can deploy, you put your relationship with your users at high risk: you continuously disrupt their work and planning without giving them space to adopt and consume your changes.

If you consider continuous deployment, think the solution through from end to end; it is not only a technical challenge. Do consider the impact of the process, especially in regard to communication and reception.

The problem described applies not only to continuous deployment but to release processes in general, so even if you are not using continuous deployment, I hope there is something in this post that makes you reflect on your process and your relationship with your users.
