Recently on Twitter, I ran across a tweet where someone asked, "As a tester, when do you give the approval for release?" Few questions make me react in a less-than-positive way on first read, but this one did precisely that. The topic agitated me because it digs up unpleasant memories from previous workplaces.
The most frustrating projects I've worked on as a developer were those with this kind of "QA gatekeeping", where QA decided whether to approve a release or hold it back. Whenever I've been on teams that placed the responsibility for a project's deployment to production solely on the testing team, disputes inevitably happened. Sometimes they erupted as the development cycle ended; other times, they were a silent ticking time bomb that blew up weeks or months down the road.
Online, I've seen many testers talk about their teams handling the responsibility of project releases, and every time it causes me endless frustration. I don't understand why requiring QA or testers to approve a release is still something organizations do these days. I'm not diminishing the importance of QA in the release cycle, nor blaming testers for creating friction between team members. When placed in these situations, everyone does their job the best they can and with good intentions.
However, making testers the group responsible for whether a release occurs puts unfair pressure on them. It also creates silos between product, development, and testing - whether intentional or not. In the end, you have constant finger-pointing, erosion of trust, and someone becoming the scapegoat when things go wrong. These issues serve no one in the organization.
Anti-patterns of QA gatekeeping
In my experience, two typical scenarios emerge in these kinds of projects, and both cause long-term issues for the teams and the organization as a whole.
Scenario #1: QA holds up releases due to stringent bug classification
Some testing teams are more lenient than others when it comes to classifying defects during the development cycle. The definition of quality seems to vary by team - what's a small issue for one QA group is sometimes a blocker for another. It's a tricky issue to balance since there's no "one size fits all" approach to determining the severity of a bug.
The issue comes when a QA team uses strict bug classification to hold up a release, no questions asked. If QA logged bugs with a specific label, the development team was obligated to fix every single one of them before the code could be deployed. I understand the reasoning. After all, we shouldn't deploy code with a visible defect that hurts the product. But this system breaks down when bugs are classified incorrectly.
I have one frustrating example that comes to mind immediately. I once had a pull request blocked from being merged and deployed because the QA team found what they labeled a high-priority defect in the feature. The defect? The spacing between an image and the text underneath it was off by one pixel - and only in Internet Explorer 11. It held up the release of the project until the issue got resolved.
Of course, the QA team should log the bug as something they found during their testing. But to this day, I don't understand why they classified this issue as a blocker. The problem was so subtle that most people on the team could barely notice it even when it was pointed out. The spacing wasn't breaking any functionality. And according to our analytics, less than 1% of the people using our product were on Internet Explorer 11.
This tiny issue caused a lot of back and forth between myself, the tester, and other stakeholders in the project. In the end, the bug was marked as "won't fix". Since the team was distributed across the world and working in different time zones, the discussion dragged on and delayed the release. Admittedly, it also took plenty of time to repair the trust between everyone involved.
This example shows that a system where QA can hold up releases based solely on their bug classification can harm the overall project. Bugs should be labeled accordingly, but there needs to be room to re-evaluate those classifications, since not everyone sees things the same way.
Scenario #2: Bugs still slip to production, and QA gets chewed out for letting it happen
Another scenario I've seen in organizations that require QA to give the green light before deploying is the inevitable bug slipping through the cracks. In a typical release cycle, development spends some portion of their time fixing bugs and clearing out the QA backlog. When QA doesn't find any blockers, they give a thumbs up, and the project goes out to customers.
Still, nobody's perfect. Bugs will slip by no matter how much testing is done before a release or how much time developers spend in bug-fixing mode. In organizations with QA gatekeeping, guess who gets the blame when the product team receives a bug report from a user. It falls mainly on the testers, every single time.
These issues happen the most when there's a massive time crunch to develop and release a product. Often, the team is overworked, scope creep gets out of control, and no one has a firm grasp on building quality work. When there's little time to develop a product, quality is almost always the first thing to fly out the window.
I have been part of teams with the mindset of building stuff fast and not worrying about writing tests because "QA will handle that". I've never believed in that philosophy. As a developer, I've been scolded by team leads because I refused to push a feature until I finished writing some automated tests for it. They wanted me to submit my untested code and let QA test it for me.
On those types of teams, there's a cycle of pushing a ton of work to QA for testing. Lacking the time or resources to fully dedicate to the project, the testers do the best job they can to keep deadlines from slipping. The risk of bugs getting deployed to production increases. And since the project passes through the testing team last, they'll always be the scapegoat, while the development team gets by relatively unscathed. It inevitably kills the team's morale.
How to stop the QA gatekeeping now
If you're on a team with one (or both) of these issues, you should try to help your organization as soon as you can, before your project sinks further and further into a hole.
The project teams I've seen with the most success at delivering quality products on time had a few common traits.
They made testing a whole-team effort
The most productive teams I've been a part of - the ones running projects with the fewest bugs or defects - made everyone responsible for testing. The primary responsibility for testing still rested with QA, because that's their expertise. But the rest of the project's stakeholders did their fair share in their own way:
- Developers took the time to write automated tests for their code. At the very least, they covered their work with unit tests (a minimal sketch follows this list), but I've also seen a few development teams create end-to-end tests for new features.
- Product managers and designers frequently checked on staging environments to do acceptance testing for new features and make sure the product looked and functioned as expected.
- The people responsible for DevOps set up proper monitoring and alerting, along with continuous integration systems that ran various types of tests whenever new code was committed.
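To make the first point concrete, here's a minimal sketch of the kind of unit test a developer might write alongside a feature before handing it to QA. The `formatPrice` helper and its expected behavior are hypothetical, invented for this example; the tests use Jest-style syntax:

```typescript
// Hypothetical helper under test, defined inline to keep the sketch self-contained.
function formatPrice(cents: number, currency: string = "USD"): string {
  if (!Number.isFinite(cents)) {
    throw new Error("formatPrice expects a finite number of cents");
  }
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency,
  }).format(cents / 100);
}

// Unit tests covering the happy path and a couple of edge cases.
describe("formatPrice", () => {
  it("formats whole-dollar amounts with cents", () => {
    expect(formatPrice(1999)).toBe("$19.99");
  });

  it("handles zero", () => {
    expect(formatPrice(0)).toBe("$0.00");
  });

  it("rejects non-finite input", () => {
    expect(() => formatPrice(NaN)).toThrow();
  });
});
```

Even small tests like these catch regressions long before a tester sees the feature, which keeps the QA backlog focused on issues that genuinely need human judgment.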
These teams didn't rely solely on QA to determine whether the project was fit for deployment or still needed more work. If any group thought something needed addressing before releasing to production, it was discussed between all of them. For instance, if QA found an issue, they would talk with product and development to decide whether it was something to fix now or defer to a later time.
This approach worked exceptionally well because it fostered discussion across different disciplines. Instead of everyone working in their bubble, they would come together to discuss potential problems before they created further delays. It provided additional context to the issues at hand and cleared up any ambiguity about the severity of a defect (like my off-by-one-pixel issue).
If your organization feels like a set of separate silos, it's best to raise the issue and foster more unity across job functions. Testing is a cornerstone of excellent products, and everyone needs to work together to get there.
They tested early and often
Besides having everyone involved in the project do their part with testing, these productive teams also tested as much as they could, as early as possible. They didn't wait until a pre-determined spot in the project's schedule to begin testing - it was part of their regular routine.
Making testing a part of everyone's work is easier said than done, but these teams made it dead simple to bake in quality from the start by wiring testing into the workflow. For example:
- DevOps and the developers set up systems that automatically generated testing environments when new code was committed or a pull request was opened. This workflow allowed them to create staging servers for new features and helped non-technical folks quickly perform acceptance testing.
- Different tests ran automatically at various points throughout the day (see the sketch after this list). When developers committed new code to a branch, it kicked off a process that ran a few quick tests. When code was merged into the main branch, more thorough tests ran. At night, a full battery of tests ran and generated reports on the project's current state.
- The QA teams had free rein to run manual and exploratory testing alongside their other responsibilities. This type of testing wasn't a single explicit activity tacked onto the timeline. The entire team understood that QA was working alongside everyone else at all times.
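Here's one way that tiered testing schedule might look in practice, as a small helper script a CI system could invoke. The trigger names, the `CI_TRIGGER` environment variable, and the npm script names are all assumptions made for this sketch - real teams would encode the same idea in their CI product's own configuration:

```typescript
import { execSync } from "node:child_process";

// Hypothetical mapping from CI trigger to test suite. The commands assume
// npm scripts like "test:unit" and "test:e2e" exist in the project.
const suites: Record<string, string> = {
  branch_push: "npm run test:unit",                       // quick feedback on every commit
  merge_to_main: "npm run test:unit && npm run test:e2e", // more thorough checks on merge
  nightly: "npm run test:all && npm run test:report",     // full battery plus status report
};

const trigger = process.env.CI_TRIGGER ?? "branch_push";
const command = suites[trigger];

if (!command) {
  console.error(`Unknown CI trigger: ${trigger}`);
  process.exit(1);
}

console.log(`Running suite for "${trigger}": ${command}`);
execSync(command, { stdio: "inherit" });
```

The point isn't the specific tooling - it's that the right amount of testing happens automatically at each stage, so fast feedback on commits doesn't come at the cost of deep coverage before a release.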
With systems in place to run automated tests and create environments for new features before deploying, testing never had to wait. Nobody had to overthink quality during their workday. Whenever they did have to dig in and test something, they had what they needed to get started quickly. It helped a lot with minimizing risk, especially at the end of a cycle or sprint.
Summary
Some organizations still practice "QA gatekeeping", where the QA team is the group responsible for the release of a project. This practice is harmful in many ways: it puts unfair pressure on testers and promotes a culture of dissent between teams working on the same project.
I've seen two anti-patterns occur when organizations have this practice in place. One is that projects get delayed because QA blocks releases over small and sometimes insignificant issues. The other is that when bugs inevitably slip through to production, QA becomes the scapegoat because they're the ones who gave the go-ahead.
The most productive teams I've observed share a few traits that have helped them avoid these pitfalls. They have everyone working on the project do their part with testing instead of placing the responsibility solely on QA. They also test frequently and as early as possible, putting systems in place that make it easy for everyone to do their part in ensuring a quality product.
Quality is something that works best when it's baked into the team's workflow, involving everyone who's a part of the project. If your organization practices some of the anti-patterns mentioned in this article, bring it to the attention of those who can help you make a change. QA needs to stop being the gatekeeper for your projects to thrive.
Are you involved in a testing team that serves as a gatekeeper? Has it helped your organization, or has it created problems? Share your story by leaving a comment!
Top comments (2)
QA gatekeeping is an interesting problem to think through. I've been the lead or senior quality assurance engineer for a few companies and I've encountered the scenarios you have written about.
Quality as a culture is an important step in getting whole-team buy-in to assess and change the development and release process so there's no QA bottleneck. It is a smart choice and one I always advocate for. For some teams, the maturity level of the QA team, the siloed nature of teams outside of QA, and timelines can be problems if you don't structure the QA team according to the company's needs rather than the software developers' needs.
QA should be working with product, business development units, and developers to assess testing needs per code cycle. They should be involved in sprint planning and be a major voice in the discussions. Scope creep is the product of not enough questions being asked during planning, and of not enough developers and QA engineers assessing the risk and timeline needed to do their jobs appropriately.
For me, an average planning-to-release cycle (I also serve as the release manager) looks like this. Attend the planning sessions and bring up any concerns about the bug backlog that could slow development down. If it isn't already raised, bring up technical debt that would need to be worked around. Speak to the amount of time testing will take for a happy path, assuming no bugs are found. Discuss with stakeholders whether the features are customer-facing, used for business demos of the site, or silent deploys that upgrade microservices or backend work. From there, the tolerance for bugs found versus bugs fixed and prioritized can be established.
Lastly, understand how many unit tests exist and in which systems, whether the crunch for time to release code leaves systems vulnerable from a lack of tests, and seek to create integration or e2e tests in tandem with the development process to mitigate risk. When all of that is done and the testing charters, test cases, automation, UAT, load, and exploratory testing are finished, the result is a risk profile for the code being released.
Based on the agreed-upon risk that product and engineering have assumed, we can use bug densities and other quality metrics to track systems that may need post-production testing to make up for the slack of a quick release. Reading your post and looking at my own experiences, I have acted as a QA gateway, but that gateway was constantly being updated to be useful - to solve problems for product and engineering and to speed up the feedback cycle between code and test - so that those risk factors were looked at early and often enough to make an impact on the code being released.
Super nice reply, thanks for this!