DEV Community

Alexandru-Dan Pop

Releasing features with confidence as a software developer

As a software developer, there's this awesome rush when you see your code bring new features to life. It's like watching your digital creations come alive!

it's alive meme

However, this excitement is often accompanied by the challenge of ensuring a smooth release that doesn't disrupt the user experience or introduce new bugs.

In this post, we'll dive into the art of releasing features with confidence, exploring not only the basics but also when to use feature flags and A/B tests.

Automated tests and CI/CD pipelines

Unit Tests

Before we embark on our journey of releasing features, it's essential to emphasize the importance of unit tests.

Preferably written using Test-Driven Development (TDD), unit tests serve as the bedrock of your codebase, ensuring that individual components of your application function as intended.

With TDD (writing tests before the actual code), you're compelled to think deeply about the code you write and potential edge cases, resulting in more robust and reliable code.
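As a tiny illustration of the test-first flow, here is a sketch using Python's standard `unittest` module. The `slugify` function is hypothetical; in TDD you would write the test class first, watch it fail, and then implement the function until it passes:

```python
import unittest

# Hypothetical function under test. In TDD, the tests below are written
# first, then slugify() is implemented until they pass.
def slugify(title: str) -> str:
    """Convert a page title into a URL-friendly slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Releasing Features"), "releasing-features")

    def test_collapses_repeated_whitespace(self):
        # An edge case thought of up front, thanks to writing the test first.
        self.assertEqual(slugify("A  B"), "a-b")

if __name__ == "__main__":
    unittest.main()
```

The edge-case test is the point: writing it before the code forces you to decide how repeated whitespace should behave instead of discovering it in production.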

CI/CD Pipelines

Continuous Integration and Continuous Deployment (CI/CD) pipelines automate the process of building, testing, and deploying your code.

With a few twists of my own, the ideal CI/CD pipeline for critical applications should include:
1) Source code integration step with

  • installing dependencies and building/compiling the code
  • running quality checks (for example, SonarQube, linting, etc.)
  • running unit tests (the quality of the unit tests themselves can be checked with mutation testing)
  • checking and enforcing code coverage thresholds

2) Deploy to test environment

  • running integration tests

3) Deploy to production

  • observability
  • alerts

For more critical systems or advanced use cases, a canary deployment strategy can be implemented: the new version is rolled out incrementally while traffic is gradually shifted from the old deployment to the new one.
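The traffic-shifting part of a canary roll-out can be sketched in a few lines. This is a minimal illustration, not a real load balancer; the function names, the step-based schedule, and the 100-bucket split are all my own assumptions:

```python
import hashlib

def canary_fraction(step: int, total_steps: int) -> float:
    """Fraction of traffic routed to the new deployment at a roll-out step."""
    return min(1.0, step / total_steps)

def route_to_canary(user_id: str, fraction: float) -> bool:
    """Deterministically route a stable subset of users to the canary.

    Hashing the user id keeps each user on the same deployment as the
    fraction grows, instead of flickering between old and new versions.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < fraction * 100

# Step 1 of 4: roughly 25% of users hit the new deployment.
routed = route_to_canary("user-42", canary_fraction(1, 4))
```

If observability and alerts stay quiet at 25%, you advance to the next step; if they fire, you shift the fraction back to zero, which is effectively an instant rollback.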

Feature Flags and A/B Tests

Feature Flags

Feature flags (FF), also known as feature toggles or feature switches, are a technique that allows you to enable or disable certain features within your application without deploying new code.

There are many types of flags, but the most common would be to just turn a toggle on or off in a certain environment.

Example of a FF: Google_Login (dev - enabled, stage - enabled, prod - disabled). The power here is that we can turn the feature flag off if we discover a bug with the Google Login, so we don't need to do a rollback or, even worse, a hotfix. We just disable the flag in production, push a fix, then re-enable it.
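In code, the check can be as simple as a lookup keyed by flag name and environment. This is a minimal in-memory sketch; real systems typically use a flag service (LaunchDarkly, Unleash, or something built in-house), and the flag and environment names here are illustrative:

```python
# Minimal in-memory feature flag store, keyed by flag name and environment.
FLAGS = {
    "google_login": {"dev": True, "stage": True, "prod": False},
}

def is_enabled(flag: str, env: str) -> bool:
    """Return whether a flag is on in the given environment (default: off)."""
    return FLAGS.get(flag, {}).get(env, False)

def render_login(env: str) -> str:
    # The same build runs everywhere; only the flag state differs per environment.
    if is_enabled("google_login", env):
        return "login with Google"
    return "login with email"
```

Note the default-off behavior: an unknown flag or environment falls back to the existing code path, which is the safe direction to fail in.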

Managing Feature Flags
It's important to have a system (external or internally built) to easily manage feature flags. While FFs provide flexibility, it's essential to strike a balance.

Keeping the number of active flags to a minimum reduces complexity and avoids potential confusion among developers. Just as you regularly refactor your codebase, it's important to remove old feature flags both from your code and your flag management system.

A/B Tests

A/B testing, or split testing, involves comparing two versions of a feature or webpage to determine which one performs better. By dividing your users into two groups (A and B) and exposing them to different versions of a feature, you can gather empirical data to make informed decisions.

How A/B Testing Works: A/B testing allows you to release a new feature or design change to a subset of your users while keeping the original version for another group. By comparing user behavior and engagement metrics, you can objectively measure the impact of the change.
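The split itself is usually done deterministically, so each user always sees the same variant across visits. A common approach is to hash the user id together with the experiment name; this sketch is my own illustration, not a specific A/B platform's API:

```python
import hashlib

def ab_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Assign a user to 'control' or 'variation' deterministically.

    Hashing user id + experiment name gives each experiment an
    independent, stable split without storing assignments anywhere.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # roughly uniform in [0, 1)
    return "control" if bucket < split else "variation"

# The same user always lands in the same group for a given experiment.
group = ab_variant("user-42", "landing_page_redesign")
```

Including the experiment name in the hash matters: it decorrelates experiments, so a user in the variation of one test isn't systematically in the variation of every other test.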

Primary & Secondary Metrics: When setting up an A/B test, it's essential to define primary and secondary metrics. Primary metrics directly measure the success of the feature, while secondary metrics provide additional insights. Analyzing both types of metrics helps you make well-rounded decisions.

A/B test split

Example: We observe that only 2% of the traffic on our website results in a user signing up for our newsletter. We do some UX research, come up with an updated design of the landing page, and try it out in an A/B test. Our current page (the control group) and the variation are both served to our customers. Over a two-week period we observe that the new page results in 4% of the traffic subscribing to the newsletter. Success!
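Before declaring success, it's worth checking that the lift isn't just noise. A standard way to do this is a two-proportion z-test; here is a minimal sketch, with sample sizes made up for illustration:

```python
from math import sqrt

def two_proportion_z(conversions_a: int, n_a: int,
                     conversions_b: int, n_b: int) -> float:
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 2% of 10,000 control visitors vs 4% of 10,000 variation visitors.
# A |z| above ~1.96 suggests significance at the 95% confidence level.
z = two_proportion_z(200, 10_000, 400, 10_000)
```

With samples this large the doubling from 2% to 4% is clearly significant, but the same lift on a few hundred visitors might not be, which is why the test duration and traffic volume matter.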

Clean-Up After A/B Tests: Just as with FFs, A/B test clean-up is crucial. Once you've collected enough data and made your decision, be sure to remove the code related to the alternative version, and also archive the test in the A/B test management system.


Releasing features with confidence requires a delicate balance of foundational practices and advanced strategies. Automated tests instill thoughtfulness in your code, while CI/CD pipelines ensure consistent and reliable deployments. However, the real magic happens when you embrace feature flags and A/B tests.

It will also make you look good: imagine you've just discussed with the Product Manager whether to release the feature you worked on for the last two weeks, and they tell you to ship it. You flip the feature toggle on, and one minute later it's live.

Magic Meme

So while feature flags empower you to release and iterate on features with flexibility and control, A/B tests provide the empirical data needed to make informed decisions about the usefulness of the changes you are making in the software.
