Once, after saying that I had a lot of things to test, I was asked how many developers per QA there were at my old job, since I always referred to it as having “ideal teams”.
I replied that there were 4 to 6 devs. It was usually 2 Android and 2 iOS developers, and some teams had 2 more for the backend. I got the comment: “hey, that's the same number we have here. Why do you complain about too much work here when there it was ideal?”. At the time I was asked this, we had 5 devs.
The difference was that in the reference teams, all members were working on the same feature. For example, a login. In planning, the designer presented the idea for a screen. If it was too complex, we would adapt it right there so it could be delivered within a Sprint.
Frontend developers usually split the work, with one person building the layouts and the other implementing the logic. Unit/instrumented tests were split as well. During planning, the API contract was also defined, and mocks were delivered before the actual services so that the frontend devs could start development. Each dev was responsible for testing their own code. If unit tests covered something, QA didn't have to worry about those tests.
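To make the contract-plus-mocks idea concrete, here is a minimal sketch. The field names, the `fake_login_service` stub, and the response shape are all hypothetical, invented for illustration; the point is that once the contract is agreed in planning, frontend code and its unit tests can be written against a stub long before the real backend ships.

```python
# Hypothetical login contract agreed on during planning.
# Field names and values below are illustrative, not from any real API.
MOCK_LOGIN_RESPONSE = {
    "token": "abc123",
    "expires_in": 3600,
    "user": {"id": 1, "name": "Alice"},
}

def fake_login_service(username: str, password: str) -> dict:
    """Stands in for the real backend until it is delivered."""
    return MOCK_LOGIN_RESPONSE

def parse_login_response(response: dict) -> str:
    """Frontend-side parsing, written against the agreed contract."""
    return response["token"]

# Frontend devs can develop and unit-test against the mock right away;
# when the real service arrives, only the stub is swapped out.
token = parse_login_response(fake_login_service("alice", "secret"))
print(token)  # abc123
```

When the real endpoint is delivered, the same unit tests double-check that the backend actually honors the contract.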
In the “current” team, each dev takes an item from the backlog. Taking a bank app as an example, with 4 devs, login, transfer, payment, and deposit will all be developed in the same Sprint. On top of that, the backend is developed by another team.
To make things worse, NO unit testing was done. The phrase “it's ready, test it” or “it's ready, it just needs testing” came up all the time. In other words, all the responsibility fell on QA. It was common to receive a “done” where the main action button didn't work, or to see something that was working break after the merge of another feature. It was practically impossible to do a decent regression, and bugs always made it into production. The same went for the APIs.
So, in addition to testing the frontend, it was necessary to test the backend. Almost always, the frontend devs had to apply workarounds to fit the backend's responses, which usually caused errors whenever the API responded differently than expected.
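A sketch of what those workarounds tend to look like, with made-up response shapes and a hypothetical `amount` field: the brittle version is hard-coded to one observed response and crashes the moment the API answers differently, while a defensive version validates the shape and fails gracefully.

```python
from typing import Optional

def brittle_amount(response: dict) -> float:
    # Workaround hard-coded to one observed response shape:
    # raises KeyError as soon as the API answers differently.
    return response["data"]["amount"]

def defensive_amount(response: dict) -> Optional[float]:
    # Validates the shape and returns None instead of crashing the UI.
    data = response.get("data")
    if isinstance(data, dict) and isinstance(data.get("amount"), (int, float)):
        return float(data["amount"])
    return None

print(defensive_amount({"data": {"amount": 10.5}}))  # 10.5
print(defensive_amount({"error": "timeout"}))        # None
```

Defensive parsing doesn't remove the need to test the backend, but it turns a crash into a handled error state that QA can at least reason about.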
In this context, I heard things like “you haven't tested this, why?” or “I'm sorry, but your tests are shallow”. My analogy was: “it's impossible to dive deep into 4 or 5 pools at the same time. You can only skim the leaves off the surface.”
Correct metric: features / QA
With that in mind, the metric I consider correct is features per QA (or tasks per QA). That gives us a measure of how “overloaded” a QA is. The ideal number will vary with each QA, with the size and complexity of the feature, and with each team.
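The metric itself is simple arithmetic; a toy sketch of tracking it per Sprint, with hypothetical names and numbers, might look like this:

```python
def features_per_qa(features: int, qas: int) -> float:
    """Sprint load per QA: features (or tasks) divided by QA head count."""
    return features / qas

# The bank-app team described above: 4 features (login, transfer,
# pay, deposit) on the plate of a single QA.
print(features_per_qa(4, 1))  # 4.0
# A team with 6 features and 3 QAs carries a much lighter load each.
print(features_per_qa(6, 3))  # 2.0
```

The raw number is only a starting point; weighting features by complexity, as discussed next, is what makes it comparable across Sprints.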
I would follow “two is company, three is a crowd” for a two-week Sprint. But first, I want to make it clear that this rule applies to “full” QAs, that is, those who care about end-to-end quality across all layers, not those who “only run tests”.
Two is company when we're talking about completely new features, where the backend and frontend are being developed from scratch. In that case, even though it's just one feature, you're looking at two different systems. And I'm not including automation; if we include automation, perhaps one feature is enough.
The team also has a big influence on the metric. If everyone cares about quality, QAs can play their role much better. On the other hand, task-based teams, like the one described above, have a high chance of producing bugs, and of not resolving them.
Find your number
Over time, in sprint planning, you will be able to gauge the size and complexity of each task. Align priorities with the PO and choose the two most critical features for a “perfect delivery”: features that you will dive deep into and “clean up” as much as possible.
I recommend taking a look at the PRISMA (Product RISk MAnagement) technique, by Erik van Veenendaal. And always try to include automation within the Sprint.
Finally, just because you chose two main features doesn't mean you'll ignore everything else. Unforeseen events happen, production bugs arise, new requirements are discovered. Focus on the main features, but also keep an eye on the application as a whole.
And you, do you know leaders who use this devs / QA metric? Tell me how it's working out!