You just spent the last few weeks writing the latest and greatest feature for your users. You did your due diligence and wrote an entire suite of unit tests and integration tests. Your colleagues reviewed your code and gave it a thumbs up. You press commit, your CI and CD pipelines are green, and your feature officially releases. You go home eagerly anticipating the praise and accolades in the following days.
Then it happens.
The complaints start pouring in: tweets about how your new feature doesn't really work, and even worse, that it broke other features in your app. Things don't look good, and you spend the next few days to a week fighting fires. Customer satisfaction is starting to slip.
How could we have handled this better? I will describe how committing your feature is just the beginning of the journey to releasing your software.
Hopefully, before you begin writing your feature, you know what success looks like and which metrics you expect to move. We don't build things in a vacuum, and new features don't automatically mean happy users. Are you looking for an increase in new users, higher engagement from existing users, better performance? Make sure you know what you want and how you will measure it.
Some people might refer to this as an experimental build, sandbox build, alpha build, etc. It is a version of your software for internal use only. The internal build allows developers to safely commit code for internal testing and experimentation. This is the build other engineers, PMs, and designers will go to for an early feel of what the feature is going to do.
Once you have an internal build deployment process, the next phase for ensuring quality is dogfooding sessions. As the developer of the feature, set up some time with your team to test it and do your best to break it. Report any bugs, fix them, and repeat the dogfooding process a few times. This is also a good time to evaluate design decisions, user interactions, performance, and so on.
If your team is fortunate enough to have a QA team, this is the time to officially submit a build to them along with a write-up on how to use it. Dedicated QA professionals will help immensely in finding edge cases. The QA team should also have additional resources, like multiple devices and configurations to test on: many different screen sizes, all variations of mobile devices, different deployment environments, and so on.
One of the best ways to save yourself in an emergency is to be able to turn your feature off after deployment. Building feature flags into your code before release will give you that safety net.
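A feature flag can be as simple as a guard around the new code path whose value is read at runtime rather than compile time. Here is a minimal sketch in Python; the flag file name, flag name, and `render_checkout` function are all hypothetical, and a real system would likely use a flag service or config store instead of a local JSON file:

```python
import json
from pathlib import Path

# Hypothetical flag store: a JSON file re-read on every check, so a flag
# can be flipped after deployment without shipping new code.
FLAGS_FILE = Path("feature_flags.json")  # e.g. {"new_checkout": false}

def is_enabled(flag_name: str, default: bool = False) -> bool:
    """Return the current state of a feature flag, falling back to a default."""
    try:
        flags = json.loads(FLAGS_FILE.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        # If the flag store is unreachable or corrupt, fail safe to the default.
        return default
    return bool(flags.get(flag_name, default))

def render_checkout(user_id: int) -> str:
    """Call site: the risky new path is guarded by the flag."""
    if is_enabled("new_checkout"):
        return f"new checkout for user {user_id}"
    return f"legacy checkout for user {user_id}"
```

Because the flag defaults to off when the store is missing, an emergency rollback is just flipping one value, not redeploying.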
Internal dogfooding and QA testing can still miss edge cases; sometimes the best feedback comes from real users. It is best to release your software incrementally. Start with 5% of your users and evaluate any feedback that comes back. Once things are stable, deploy to 10%, 25%, 50%, 75%, and so on until you finally reach 100% of your users. Find a cadence that fits your business. This helps ensure you don't upset too many users if something does go horribly wrong.
Circling back to the first point, this is the time to measure whether your feature is providing the value you intended. It is also the time to use data to confirm there are no performance issues or regressions. That means you need dashboards with metrics to monitor the health of your software. Don't take anecdotes or customer feedback as your only source of truth; sometimes the data will tell you a different story.
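The dashboards are only as good as the events behind them. Here is a minimal in-process sketch of cohort-tagged counters; a real system would ship these counts to a backend such as StatsD or Prometheus, and the metric names (`checkout_started`, `checkout_completed`) and cohort labels are invented for illustration:

```python
from collections import Counter

class Metrics:
    """Minimal metrics sketch: count events per rollout cohort so the
    new feature can be compared against the control group on a dashboard."""

    def __init__(self) -> None:
        self.counts: Counter = Counter()

    def incr(self, name: str, cohort: str) -> None:
        # e.g. incr("checkout_completed", "new_feature")
        self.counts[(name, cohort)] += 1

    def rate(self, numerator: str, denominator: str, cohort: str) -> float:
        """Ratio of two counters within a cohort, e.g. a conversion rate."""
        total = self.counts[(denominator, cohort)]
        return self.counts[(numerator, cohort)] / total if total else 0.0
```

Comparing `rate("checkout_completed", "checkout_started", ...)` between the new-feature and control cohorts tells you whether the feature actually moved the metric you set out to move.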
Following these processes will help you release your software safely. But all the processes in the world won't guarantee bug-free software or satisfied users. Keep an eye on your feature and continue to address feedback and issues as they appear. The success of your software really depends on the care you give it throughout its lifetime after committing your code.