Do you ever wonder how anyone knows whether what they are building is right, important, or adds value for their customers? What about companies that are just starting out: how do they know whether what they will become will be a success? I don’t think anyone truly knows, at least not at the beginning of the journey, and even less so when you start doing something brand new. After all, if everything were that simple we would not know what “pivot” even means in the technology world.
So what can help you understand your idea better and guide your way to success, or at least validate some of the assumptions and hypotheses you may have? In its purest form it’s quite simple:
“Build something - Measure it - Learn from it”
This feedback loop is the basic principle of how you can focus on the right things and course-correct as you move along. A lot of what we call agile boils down to it. You have to test your ideas out and get some quantified feedback that you can use to put forward a better idea or dismiss some of the other ones you may have had.
The problem is that this is a rather high-level feedback loop that does not provide much practical advice, and, let’s be honest, it’s not even groundbreaking.
However, if you start digging deeper into modern ways of working, you will find many more types of feedback loops. More importantly, if you examine the purpose each one serves, you may be able to advance them to serve you better.
So let’s have a look at the different types of feedback loops and how we can try and improve them, starting with the lowest level one.
Purpose: Code behaves as expected
Different types of tests give you different levels of feedback.
- Unit - a function behaves as expected
- Integration - a number of moving parts interact and behave as expected
- End-to-end - the system as a whole behaves as expected
As long as you are doing some sort of testing you are probably on the right track. Regardless of the level of tests you are writing, remember to always test behaviour, not implementation. Testing the implementation may stop you from refactoring effectively. How do you know if you are testing implementation? Do you have an assertion that a function called another function with specific parameters? Try and avoid tests like this if possible.
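To make the distinction concrete, here is a minimal sketch in Python; the function and test names are illustrative, not from any particular codebase:

```python
from unittest import mock

# Hypothetical function under test.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Behaviour test: asserts only on the observable result.
# Refactoring the internals will not break this test.
def test_apply_discount_behaviour():
    assert apply_discount(100.0, 10) == 90.0

# Implementation-style test (avoid): asserting that one function
# called another with specific arguments couples the test to the
# current internals and gets in the way of refactoring.
def test_apply_discount_implementation_style():
    with mock.patch("builtins.round") as mocked_round:
        apply_discount(100.0, 10)
        mocked_round.assert_called_once_with(90.0, 2)  # brittle!
```

The second test passes today, but it would fail the moment you rewrote the body without `round`, even though the behaviour stayed exactly the same.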
As for test-driven development at the unit level: it is not a testing strategy, it is a tool that helps you stay laser-focused and navigate the problem in bite-sized chunks.
Speaking of higher-level tests, especially end-to-end tests: it is universally understood that they are slow to run, but with modern tooling you can run them in parallel, which can significantly reduce the run time and shorten the feedback loop for you.
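As a rough sketch of the idea (the scenario names are made up, and `time.sleep` stands in for real end-to-end tests), Python's standard library is enough to fan the work out:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_scenario(name: str) -> tuple:
    """Stand-in for a real end-to-end scenario; sleeps to simulate work."""
    time.sleep(0.1)
    return name, True

scenarios = ["signup", "checkout", "password-reset", "search"]

# Run all scenarios concurrently instead of one after another;
# wall time becomes roughly the slowest scenario, not the sum of all.
with ThreadPoolExecutor(max_workers=len(scenarios)) as pool:
    results = dict(pool.map(run_scenario, scenarios))
```

With four 0.1-second scenarios, a sequential run would take roughly 0.4 s and the parallel run roughly 0.1 s; the same arithmetic applies when each scenario takes minutes.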
Finally, always run your top-level tests against a deployed environment, ideally a copy of production or as close to it as possible. If you skip parts of your infrastructure (an API gateway, a proxy, and so on), you may let bugs slip through that could have been caught.
Your main aim is to have confidence that everything will work fine if your tests complete with no failures.
Purpose: How I and everyone else on the team are getting on with our current work
Daily progress feedback, usually in the form of an early morning meeting, to make sure that things are progressing well without any issues or blockers.
Did you notice how this usually turns into a list of justifications for your existence on the team? If not, I am truly happy for you.
Talking about blockers and progress only once a day is inefficient, and I am pretty sure that most people should, and would, raise those issues as soon as they occur. You can notify people of changes and progress during the day too, without having to wait a whole night for it.
Purpose: Is the next set of stories in a good state and ready to be worked on
Refinement usually happens on a scheduled basis: the team, or the three amigos, get together to refine a set of stories so they meet the definition of ready. This should allow the team to pick up the tickets and work on the implementation.
Instead of scheduled refinement, get into the habit of doing it when needed with a few relevant people. Do the basics well: make sure each story meets the definition of ready and that the required designs are linked, then let the team work out the details when they work on the feature. If someone has done a spike around some uncertainty, help them create those stories, and do not bring in people who do not need to be there.
And remember: A story is a promise of a conversation, it should capture the problem, not the solution.
Purpose: Look what we have done and learned
Depending on how often you have your demos, they are a good opportunity to show off completed work and talk about the challenges and what you have learned. This creates group feedback and allows the sharing of experience that may help other teams learn along the way.
However often you do demos and however you approach them, there is always room for improvement. You can always demo smaller pieces, even just to your product person or team, as soon as you have finished the feature. There is no need to wait a whole week: they may notice some flaws and provide immediate feedback.
Purpose: New code behaves and works with the rest of the system, code works in a deployed environment
Hopefully, the codebase is constantly changing, so you need to make sure the feature you have just completed does not break another feature or take the whole system down. If something does not work, or the code has diverged, the pipeline feeds that back to the developer: things break and require their attention.
This makes sure the code behaves as expected in a deployed environment and is ready to be shipped to customers. If you are doing Continuous Delivery it may also provide feedback that the latest features are out for customers to enjoy.
A solid CI/CD pipeline requires a solid testing strategy for whatever you are building. Following on from the advice in the first section on tests, always think about how you can run the tests so that the most important checkpoints come first.
The order could be: static analysis, unit tests, then, once the app or container is deployed, hit a health-check endpoint, then end-to-end tests, and so on. If the health check fails there is no point in going further, as something is clearly wrong.
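Sketched in Python (the stage functions are stand-ins; in a real pipeline each would be a CI job or command), a fail-fast ordering looks like this:

```python
# Illustrative stand-ins for real pipeline stages. The health check
# is hard-coded to fail here, to show the fail-fast behaviour.
def static_analysis() -> bool: return True
def unit_tests() -> bool: return True
def health_check() -> bool: return False
def end_to_end_tests() -> bool: return True

stages = [
    ("static analysis", static_analysis),
    ("unit tests", unit_tests),
    ("health check", health_check),
    ("end-to-end tests", end_to_end_tests),
]

def run_pipeline(stages):
    """Run stages in order; stop at the first failure."""
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, name  # stages that passed, and the failure
        completed.append(name)
    return completed, None

passed, failed_at = run_pipeline(stages)
# With the broken health check above, the end-to-end tests never run,
# so the slowest stage is skipped when it could not succeed anyway.
```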
The more complex your system becomes, the more tests you will have, and to have an efficient pipeline you need to be able to run all sorts of tests fast. Keep track over time of how long it takes to build, test and deploy. If the time is going up, review what is going on and see if you can drop some tests that are no longer as valuable, or run more tests in parallel.
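A minimal sketch of the tracking side, assuming you record one timing per stage run and push it to whatever metrics store you already have:

```python
import time

def timed(name, stage):
    """Run a pipeline stage and record how long it took."""
    start = time.perf_counter()
    ok = stage()
    return {"stage": name, "seconds": time.perf_counter() - start, "ok": ok}

# Stand-in for a real stage; in CI this would be your test command.
record = timed("unit tests", lambda: True)
# Append `record` to a log or dashboard on every run, then investigate
# when the trend of "seconds" creeps upward.
```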
Finally, give continuous deployment a go. Until you have tried it, you will not be able to fully appreciate how wonderful and liberating it is. Most importantly, continuous deployment forces everyone to care about quality, because to deploy on every merge your quality checks and automation have to be solid.
Purpose: How well the team works together
This is one of the more important feedback loops: it highlights the team dynamics and shows how well everyone is working together.
This is similar to demos in the sense that, although the team should have scheduled time for collective reflection, sometimes you should just raise issues as they occur. You may be able to resolve them before the retro even happens. If you wait weeks for a retrospective, you may simply forget something that is truly important.
Purpose: Are your customers happy with the service and experience they receive?
Customer satisfaction is a crucial feedback loop. When customers are not happy, you need to address it and attempt to resolve the situation. I cannot stress this enough: you need to be able to speak to your customers and have regular access to their feedback.
You may be lucky enough to have a good UX department that does a great job and presents findings on a weekly or bi-weekly schedule. However, ask yourself: how accessible is all the feedback the company gathers from users to everyone else within the company?
Is there a filter, like a research team? Do you need permission to access the raw research data? What happens if they make wrong assumptions? Try to make it easier for people to live and breathe customer feedback: it should be accessible to everyone. People should be able to easily show interest and attend user interviews, review research outcomes and see direct user feedback. The latter can be as simple as a public Slack channel to which all product feedback is automatically posted.
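As a sketch of that last idea: Slack's incoming webhooks accept a JSON body with a `text` field, so forwarding feedback can be very small. The webhook URL below is a placeholder and `post_feedback` is a hypothetical helper, not part of any real product:

```python
import json
import urllib.request

# Placeholder: each Slack channel gets its own incoming-webhook URL.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def post_feedback(user: str, message: str) -> urllib.request.Request:
    """Build the webhook request that posts one piece of feedback."""
    payload = {"text": f"New feedback from {user}: {message}"}
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(req)  # uncomment to actually send
    return req
```

Wire this up to wherever feedback lands (a form handler, an app-store review poller) and the whole company sees it in real time.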
Purpose: How fast the team is going
This is feedback on how efficient your team is. It can also provide feedback on the team’s health: if velocity drops, there must be a reason, such as impediments, people leaving or general low morale.
Every Scrum master’s dream is to quadruple the velocity of the team. Joking aside, I think this metric is not useful. All it tells you is how much work a team can do in an arbitrary amount of time.
What it does not tell you is what sort of value the team creates. Instead, define and constantly measure metrics that matter for each team. An example: the number of quality leads generated, which probably matters more than a sheer count of stories completed.
The list of feedback loops could go on; you could include things like analytics, monitoring tools, log aggregators and more. A lot of these are patterns I have noticed while working with some of the most performant teams in my career so far.
Remember that feedback loops are all around us, and all too often we follow a recipe without questioning it. If you take a good look around and keep an eye open for opportunities, you may find some interesting ways of improving your team’s flow too.
Best of luck in your improvement and discovery endeavours and please do share your observations with me along the way.