Thomas Rooney

Introduction to Software QA Tooling in 2022

What is Software QA

We want to build bug-free, high-quality software that satisfies a customer need. Unfortunately, it's not easy!

Software QA is a set of processes and practices that help ensure the software we produce is of high quality, meets our requirements, and is free of defects.

There are no silver bullets: no tool or strategy achieves good QA in every organization. However, QA inefficiencies are often significant, and with founders observing this, considerable development effort has gone into building new tools to streamline QA. These tools can drastically reduce the cost of high-quality software development, and are therefore well worth evaluating.

In this article I'll summarize three QA patterns we've seen inside organizations, how they can be made to work, and how they often fall short. I'll then discuss how to evaluate tooling, and give a brief evaluation of the tool I've been building at reflow.io. Finally, I'll list other emerging tools in this space and some of the ways they differentiate from each other.

Popular Strategies for Software QA

Strategy 1: Only developers do QA

The development team is the gatekeeper of code quality. They are responsible for all code changes and will naturally build automated tests as part of development. The hope is that developers feel responsible for what they build, and that this sense of ownership converts into product quality when they manage QA exclusively.

When it works

Organizations that can pull this off have a few things in common. They have:

  • A culture that values high quality work output
  • Hiring practices that embed the skills to build automated tests into development teams
  • Tooling that allows the easy creation of automated tests

When it doesn't work

This is a great idea in theory, but it's rarely a practical reality.

  1. Developer prioritization is often tightly controlled, and QA is (often rightly) prioritized behind feature development. It's hard to get automated testing to keep up with the speed of development.
  2. Developers often have a different view of what high quality means than the end users of the software do. They have a natural bias towards their own concept of the happy path, and lack the perspective of a potential customer.
  3. Developer compensation and career growth are often tightly linked to feature development and rarely to QA output. Exclusively developer-managed QA leads to a natural tendency to want to "ship the feature" and move on to the next thing.

Strategy 2: A siloed QA team does QA

A team of QA specialists is brought in to test the software, often working in an isolated silo away from the development team. They have full control of QA priorities, and are charged with creating automated tests as well as doing manual testing as required.

When it works

  • Should the QA team establish itself as a trusted advisor to the development team, this model can work well.
  • QA teams that can both build that relationship and engage deeply with the product from a customer-centric perspective offer value not just in building product quality, but in aligning the product with the customer's needs.

When it doesn't work

  1. When the QA team lacks the experience to engage deeply with the product, it is hard for them to perform valuable QA work.
  2. When the QA team lacks the tools to automate testing cost-effectively, they can quickly become overwhelmed and turn into a liability instead of an asset.
  3. There is a natural tendency for QA to become a bottleneck to release, and a scapegoat if things go wrong.

Strategy 3: QA personnel embedded within the development team

In this model, QA personnel are responsible for QA, but they sit inside the development team and are integrated into its process.

This is a good compromise between the developer-only model and the siloed QA model. When QA personnel are part of the development team, they are more invested in the quality of the product and are integrated into the development process, which allows them to provide input at an earlier stage.

When it doesn't work

  1. When QA teams are embedded in development teams, they are often pulled into the development team's process, and it can be hard for them to maintain their QA focus.
  2. QA personnel integrated with the development team are naturally influenced by the same pressures as the developers, and inherit the development team's biases, especially if they also work on features.
  3. When QA personnel aren't deeply experienced in automated testing, they can be a drag on development team productivity.
  4. Since QA personnel are often individually allocated to a project, knowledge is easily lost when they leave. This strategy can introduce significant vulnerability to churn.
  5. As the development organization scales, most teams specialize and silo. This pattern often falls apart naturally as priorities shift and teams are split.

Tooling

Whichever strategy is taken, tooling choices are key to successful software QA. Without tooling, QA is manual, and manual QA does not scale anywhere near as well as software.

All QA tooling has a fundamental aim: "reducing the severity of bugs" and/or "reducing the likelihood of bugs". Both commercial and non-commercial options should be evaluated against these goals in your specific context.

Fast deployment / monitoring: Reduce severity of bugs

If the product can be deployed fast, is heavily monitored, and has a resilient data architecture, then pre-release QA is fundamentally less important. For example, if a bug can be fixed minutes after it's found, and its severity is low enough that there's no long-lasting impact, then resources spent on pre-release testing are less well spent.

Hence, especially for early-stage products, investments in Monitoring and Continuous Deployment tooling are of higher value than investments in test automation.

This will usually change over time as a product becomes more complex and developers churn. When this happens, investments in Test Automation tooling to enable a team to release faster can be incredibly cost-effective.

Efficiency of Test Automation: Reduce likelihood of bugs

Test automation is the only method we have for catching bugs before a release that scales to arbitrary product complexity. As such, for any product where the severity of bugs is high and development is continuous, it is the only way of achieving a high-quality product in the long term.
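For illustration, here is a minimal sketch of what such an automated end-to-end test looks like when written against Playwright's test runner (TypeScript). The URL, selector, and expected heading are hypothetical placeholders, not taken from any real product.

```typescript
// A minimal Playwright end-to-end test (TypeScript).
// The URL, selector, and expected heading below are hypothetical placeholders.
import { test, expect } from '@playwright/test';

test('visitor can reach the pricing page', async ({ page }) => {
  // Open the (hypothetical) landing page.
  await page.goto('https://example.com');

  // Follow the navigation link to the pricing page.
  await page.click('text=Pricing');

  // Assert that the pricing page rendered its main heading.
  await expect(page.locator('h1')).toContainText('Pricing');
});
```

The value of a test like this comes less from any single assertion than from the whole suite running automatically on every change.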

There is a large (and growing) number of tools that try to improve a product team's test automation effectiveness, each focusing on different pain points and user stories. There is a natural efficiency gain in using a commercial tool: by embedding machine-learned application configuration into a database, a commercial tool can provide features that are very difficult to replicate with an open-source tool alone.

Because product teams develop software in so many different ways, be wary of picking the wrong tool, and evaluate carefully based on your unique needs. All SaaS test automation tools cause some degree of lock-in; when your product team churns, or your QA needs outgrow the tool, you might find that your testing effort stagnates.

About Reflow

Reflow is a tool that allows non-technical QA personnel to implement end-to-end tests using a no-code recording UI, supported by engineers importing snippets of their end-to-end test suite.

It is designed to be highly flexible: developers can add their own browser interactions into the no-code UI using Playwright, and the record/replay process is available both in a web UI at app.reflow.io and on a local machine via a CLI tool, including inside a Continuous Integration server.
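For example, a developer-authored snippet might be little more than a reusable Playwright function that a recorded flow can invoke as a single step. The sketch below is purely illustrative: the helper name, URL, and selectors are hypothetical, and Reflow's actual snippet format may differ.

```typescript
// Hypothetical developer-authored "snippet": a reusable Playwright interaction
// that a no-code recording could invoke as a single step.
// The helper name, URL, and selectors are illustrative only.
import type { Page } from 'playwright';

export async function logInAsTestUser(page: Page, email: string, password: string): Promise<void> {
  await page.goto('https://staging.example.com/login'); // hypothetical staging URL
  await page.fill('input[name="email"]', email);
  await page.fill('input[name="password"]', password);
  await page.click('button[type="submit"]');

  // Wait until the app confirms the session before handing control back to the recording.
  await page.waitForSelector('[data-testid="dashboard"]');
}
```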

  • "Reducing the severity of bugs": Reflow can be used as a low-code tool for visual monitoring of a software product, to quickly recognize when a site has regressed. Target your production application to use Reflow for Synthetic Monitoring.
  • "Reducing the likelihood of bugs": Reflow can be used to enhance the efficiency of automated testing processes, by enabling non-technical QA personnel to design, develop, and implement test automation strategies; allowing their work to be run automatically by a CI server or on a developer's machine locally.

When It Works

Reflow is best used by teams that have a mix of technical and non-technical staff testing an application. Its workflows focus on helping them work together: the development team can enhance QA workflows with snippets from their end-to-end test suite, and the QA team can enhance a developer's workflow by providing business-specific sequences and tests that run as part of Continuous Integration tooling.

Reflow works best when engineering teams integrate the CLI tool directly into their codebase, executing QA-managed test sequences against locally running software.

When It Doesn't Work

Reflow provides little value when a product is tested entirely by its development team. A development team can almost always manage a versioned end-to-end testing framework more easily than a separate tool whose tests they record in a web UI.

The value that Reflow can provide to an entirely developer-owned QA cycle is observability of end-to-end test runs, audit records, and screenshot capture/comparison. If these are not important to your product, it's worth looking at other tools.

Reflow is also currently only available via AWS Marketplace under their Standard Terms and Conditions for Subscription Products, meaning paying users must have an AWS account.

Request for feedback

Reflow is a bootstrapped product, built by one engineer, with one engineer's insights into QA and a small amount of external feedback from its growing user base. We'd love more feedback to ensure we build the best product we can.

Tooling Category: Record/Replay Test Automation Tools

These tools are aimed at product teams that want code-free automated testing, which enables non-developers to contribute to the automated testing of a product. They often have features like auto-healing, visual regression testing, data-driven testing, and cross-browser/responsiveness testing.

Differentiating these tools: Which one is best for your needs?

Whilst these tools have many commonalities, each has a set of capabilities that aligns it with different products and product teams.

  • Local Testing: Some tools work locally, some do not. Some partially support it through the use of a network tunnel.
    • Reflow supports local testing via an npm package, allowing the entire record/replay process to be executed on your Windows, Mac, or Linux system.
  • Mobile Testing: Some tools support mobile device emulation, some use real mobile devices, some do not support this at all.
    • Reflow supports mobile device emulation, but does not run tests on real mobile devices.
  • Cross Browser Testing: Most tools support execution of tests in more than one browser engine, but some only support one browser.
    • Reflow wraps Playwright, and thus allows for test execution in Edge, Chrome, Firefox, and Safari. However, tests can only be recorded in Chromium-based browsers.
  • Selector Stabilisation: Some tools rely entirely on visual mechanisms to stabilize element selectors; some rely entirely on CSS selectors; some do both.
    • Reflow stabilizes/auto-heals selectors with both CSS and visual data. However, due to this it only supports web applications.
  • Customization: Most tools allow for the execution of JavaScript in the browser, but not all allow for deeper configuration of the test execution environment.
  • Data Export: Using a code-free tool usually means your test data is held inside the tool, not in your codebase. Tools differ in how easy it is to export that data if you decide to leave.
  • Integrations: Some tools have vendor-specific integrations. Some are self-contained, and can be executed via a CLI on any server.
    • Reflow can be executed in a CLI runner after just an npm install reflowio command. This means it can be added directly into any existing CI/CD pipeline.
    • Reflow does not have vendor-specific integrations for tools like JIRA, Slack, or TestCafe.
  • Cost: Some tools have a trial period, some have a free tier. Many do not publish pricing details, and may provide company-specific quotes after they understand what your business can afford.
  • API Testing: Some tools provide specific workflows for API testing, allowing it to be done code-free.
    • Reflow requires API testing be done via snippets, and lacks a code-free workflow for API testing.
  • Desktop Application Testing: Some tools support more than just Web UIs, also executing tests against desktop applications.
    • Reflow only supports Web UIs.
  • Isolation: Some tools are experimenting with test isolation, i.e. isolating test execution from different parts of the application. For example, some have first-class network mocking/replay capabilities that enable fully deterministic UI replay.
    • Reflow is experimenting with unit test generation for React components using runtime execution analysis via source maps. However, this isn't live yet, and we're not yet sure of its utility for test isolation.
    • Reflow snippets can enable network mocking/replay functionality via Playwright APIs (see the sketch after this list).
  • Visual Regression Testing: Almost all tools have different approaches to visual regression testing. These approaches will work better for some products than for others.
    • Reflow's visual regression testing is implemented with an SSIM-weighted pixel diff. This minimizes false positives from minor rendering differences between browsers, while still highlighting significant rendering differences, even when they are close together.
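As mentioned under Isolation above, Playwright's network interception APIs are one way to make a UI replay deterministic. The sketch below is a generic illustration, not Reflow-specific: the route pattern, URL, and stubbed payload are hypothetical placeholders.

```typescript
// Generic sketch of network mocking with Playwright's page.route API.
// The route pattern, URL, and stubbed payload are hypothetical placeholders.
import { chromium } from 'playwright';

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Intercept calls to a (hypothetical) pricing API and return a fixed payload,
  // so the UI renders identical data on every replay.
  await page.route('**/api/pricing', (route) =>
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({ plans: [{ name: 'Starter', price: 0 }] }),
    })
  );

  await page.goto('https://staging.example.com/pricing'); // hypothetical URL
  // ...interactions and assertions against the stubbed data would go here...

  await browser.close();
})();
```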
