Nice post, thanks for sharing! I'm curious about your testing approach. Assuming there are automated tests in place and a number of features being developed in parallel, do you test every permutation of feature flags to ensure there aren't any unexpected side effects, or is it a policy not to develop enough at once to necessitate that kind of approach?
Cool, glad you like it!
I view feature flags as having a short life cycle, ideally. Because of that, I make sure to have tests that cover both the enabled and disabled states of a flag, but I try to keep flags cleaned out of the code after they've been deployed and released, to reduce the permutation scenario that you're describing.
In practice, I find that this kind of feature flag hygiene keeps the number of interacting flags pretty low, so there don't have to be too many overlapping tests.
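To make "tests that cover both states" concrete, here's a minimal sketch. The flag store and `checkout_total` function are hypothetical stand-ins, not code from the article; the point is just that each flag-gated path gets its own explicit test.

```python
# Hypothetical in-memory flag store (a stand-in for a real flag service)
FLAGS = {"new_checkout": False}

def checkout_total(price, flags=FLAGS):
    """Pricing logic with a new code path gated behind the "new_checkout" flag."""
    if flags.get("new_checkout"):
        return round(price * 1.05, 2)  # new behavior: adds a 5% service fee
    return price  # old behavior, unchanged

def test_checkout_flag_off():
    # Old path must keep working while the flag is rolled out
    assert checkout_total(100.0, {"new_checkout": False}) == 100.0

def test_checkout_flag_on():
    # New path is verified before the flag is enabled in production
    assert checkout_total(100.0, {"new_checkout": True}) == 105.0
```

Once the flag is released and removed, the `False` branch and its test get deleted together, which is what keeps the permutation count from growing.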
I am a product engineer and have helped build software at small startups and on systems manipulating hundreds of millions of data points. I write APIs and build tools that make developers' lives easier.
I would guess that you can handle the flags the same way in the tests. The master branch would always test against the main code. If a flag wasn't made right, that should break production and a test will fail. As it should.
Presumably your tests at least know some basics about toggling feature flags, maybe using parameterization. After all, the tests for the new features will get merged into the main test suite when they're done. So, just like the site, when the flags go away the tests will keep the code for the new features.
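One way to sketch that parameterization idea with just the standard library is `unittest`'s `subTest`, which runs the same assertion once per flag state. The `render_banner` function and `show_banner` flag here are hypothetical examples, not from the article.

```python
import unittest

def render_banner(flags):
    # Hypothetical feature gated on the "show_banner" flag
    return "new banner" if flags.get("show_banner") else "old banner"

class FlagToggleTests(unittest.TestCase):
    def test_both_flag_states(self):
        # Parameterize over both flag states so neither path goes untested
        cases = [(True, "new banner"), (False, "old banner")]
        for active, expected in cases:
            with self.subTest(show_banner=active):
                self.assertEqual(render_banner({"show_banner": active}), expected)
```

When the flag is cleaned out, the `False` case and the old path are deleted in the same change, so the suite shrinks along with the code.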
Yeah, good point. I didn't mention it in the article, but Django Waffle has some nice utilities for decorating test methods so it's quick to write tests with flags in on (and off!) states. I use these kinds of utilities at work all the time.