I had the pleasure of working with Cathy Axais when I was a team lead at Seedbox. She had the unenviable task of being the sole Quality Assurance resource at the company for a team of 30 developers. There was no way she could keep up with the QA load placed on her if she tried to follow the traditional Dev -> QA -> Production cycle; she needed a way to scale.
What did she do? She embedded herself with each of the development teams in turn and taught us how to test. Her requirements were fairly simple:
- PRs need to be tested before they go to production
- The developer who wrote the code can't be the developer who tests it
She spent about 3 weeks with my team, showing us what she looked for when testing a system, how to spot things that were likely to break, and how to get into the "testing mindset." This wasn't the easiest transition; we developers can be grumpy when presented with change, but from my point of view it paid off in spades. After she finished with us, she went to the next team, and so on until she'd gotten all the developers on board. I don't have the numbers, but I know the defect rate dropped substantially once we stopped testing our own code. It's an optimization I take for granted now, but one I hadn't really considered before.
I haven't found a better way to scale a supporting function to the development process. Need your application to be secure? Don't pass it over to a security team; instead, get the security team to teach the developers how to spot insecure code, and show them how to fix it. Want performance? Don't call in a bunch of performance experts to fix all your performance problems. Call in one, and have her instill the "performance mindset" in your developers.
This is what full-stack means to me. You get subject matter experts who can help your teams improve the functions they might not be optimized for. Every handoff of the code before it reaches production increases the time it takes to get features out the door, especially if there is a path that sends the code back to the developer, where the process has to start again. How many of us have worked in environments where getting code to production requires QA sign-off, security review, architecture review, a release manager, a release date, a roll-back plan, and 4 hours of your day (if everything goes well)?
Each of those steps presents the possibility that your code might go all the way back to the starting line, possibly days after you last touched it.
If you want to move fast, it's important to keep your feedback cycles as short as possible. Often that means getting developers to do everything: ops, security, performance, QA, deployment, site reliability. None of these should be outside the ability of the team.
I'm not saying you should get rid of your QA team, or your security team, or your SREs, just that you should think of them as subject-matter experts who can educate your developers on best practices in their area of expertise. Think of them as mentors and tool builders. For example, Ops can build out infrastructure as code so that development teams can define their own infrastructure needs independently of the ops team. We'll always need the ops team for when we hit the limits of our understanding, or for architecture planning assistance, but we should be able to come to them with 80% of the work already done.
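To make the "tool builders" idea concrete, here's a minimal sketch of what an ops team might publish so developers can declare their own infrastructure needs as code. Everything here is hypothetical (the `make_service` helper, the field names, the defaults); real teams would reach for a tool like Terraform or Pulumi, but the shape of the idea is the same: developers state *what* they need, and ops-built tooling validates it and handles the *how*.

```python
import json

def make_service(name, cpu="250m", memory="512Mi", replicas=2):
    """Hypothetical helper an ops team might publish: a developer
    declares a service's needs, and ops tooling downstream validates
    the spec and provisions the real resources."""
    if replicas < 1:
        raise ValueError("a service needs at least one replica")
    return {
        "service": name,
        "resources": {"cpu": cpu, "memory": memory},
        "replicas": replicas,
    }

# A development team defines its own infrastructure needs in code,
# without filing a ticket with ops:
spec = make_service("checkout-api", replicas=3)
print(json.dumps(spec, indent=2))
```

The point isn't the specific schema; it's that the ops team's expertise is baked into the defaults and validation, so 80% of requests never need their direct involvement.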
Passing your code off to another group leads to bottlenecks, and there will always be defects that get through. I think it's better to find the defects quickly and push fixes just as quickly. Keep the feedback cycle as short as possible and educate your developers in the areas where they are weak.
How do you scale your teams?