
Ruslan Akhmetzianov

Originally published at qameta.io

Testing 2.0

Testing is diverse. Even though the main goals are the same everywhere, a lot differs from project to project: stacks, technologies, processes, and approaches. Analyzing the processes of many teams, Qameta Software keeps running into one problem: inefficiency. Today, testing is one of the most under-optimized processes in the development life cycle.

Today, we will try to understand how it happened and what we can do about it.

Where did it all begin?

Usually, at the beginning of a project there is no tester: the developer and the manager are responsible for the quality of the product and the bugs found. Everything is fine in this world, because if the manager finds a bug, he sends it to the developer for correction; the developer fixes the bug, and everyone is happy. However, as the project moves forward, the number of developers grows. That means the amount of code grows, and more code means more bugs! At this point the manager starts running out of capacity, but he tries to cope: who else should test the system if not the person who knows the customer’s needs? He knows how everything should work, checks that everything is done as it should, and files bugs along the way.

Excellent! The problem arises when the manager keeps working like that and eventually misses a critical error. At this point, the project members decide that the manager (as the bearer of sacred knowledge about the product) should test new functionality and not waste time on boring regression tests. So we got our first manual tester. Why manual? Because development doesn’t have any expertise in building proper regression automation yet, and it is much easier and calmer for the manager to trust a person than code. Now we have a team, and there are no difficulties: the manager tests new functionality and features, and the tester runs the regression. Simple and clear. But the moment a critical bug sneaks into production, confusion starts. The team will go to the tester and ask:

— Why do we have a bug in production? How did it happen? How did you miss it?
To which our specialist will honestly answer:
— But it wasn’t there. I definitely checked this place! Most likely, the developers meddled with the code after my testing!

Obviously, having analyzed the situation at the retrospective, the manager decides to formalize testing and introduce checklists. Whether the checklists are simple or complex, kept in specialized software or in Excel, they will clearly show what is being tested, when, and with what result. Using this tool, the team will be able to tell precisely whether the tester missed a bug or careless developers rolled out a feature without testing after the smoke run. And that’s how we live.

The product keeps growing, acquiring features; the number of checklists keeps growing; our tester is gaining experience and authority while his time for testing is running out.

It’s time to bring in another tester to back him up! And so a team of testers appears in our development cycle: now there are two people running the regression.

The first Test Management System (hereinafter referred to as TMS) appears in the team, because now, in addition to the standard fields “step, status, time”, we need to document a bunch of data from the tester’s head: links, logins, passwords, extra steps, and workarounds for well-known quirks. So we have test cases that allow us to:

  • easily recruit new people
  • not worry too much about onboarding and transferring product knowledge
  • distribute tests among team members
  • monitor the performance and quality of work of testers

However, there are also disadvantages: unlike checklists, test cases require constant updating and proper organization. And if updating is a matter of discipline and process, the organization of test cases is a harder problem. It surfaces the moment a test server or staging appears as the testing infrastructure naturally grows. Up to this point, the test documentation includes every detail needed for the run: links, logins and passwords, caches and cookies, etc. But with the introduction of a pre-production test environment, all these details have to be extracted and parameterized, because they differ between production and the test server (a minimal sketch of such parameterization is below). And then the testing team has to invest twice as much effort in maintaining all of this.
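
To make the parameterization concrete, here is a minimal sketch in Python of pulling environment-specific details out of the test cases themselves. The variable names and the staging URL are illustrative assumptions, not part of any particular tool.

```python
# A minimal sketch of parameterizing environment details instead of hard-coding
# them into every test case. The variable names and the default staging URL are
# assumptions made up for illustration.
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class EnvConfig:
    base_url: str
    username: str
    password: str


def load_environment() -> EnvConfig:
    """Pick up the target environment from variables set by CI or by the tester."""
    return EnvConfig(
        base_url=os.environ.get("APP_BASE_URL", "https://staging.example.com"),
        username=os.environ.get("APP_TEST_USER", "qa-user"),
        password=os.environ.get("APP_TEST_PASSWORD", "qa-password"),
    )
```

With something like this, the same run can target staging or a production-like server by changing three variables, instead of maintaining two copies of every test case.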

The product keeps growing and getting more features; the number of test cases keeps growing; our tester becomes a lead.

The team is also growing, and more testers are joining in. But productivity does not increase linearly; it behaves more like a logarithmic function: each new engineer adds a smaller increase in team productivity. There comes a moment when missed deadlines pile up like an avalanche, and the manager realizes that the team needs…

Automation!

The robot comes in

As the first automation engineer joins the team, they will likely use frameworks and approaches in which the team has no expertise. This means the team is forced to rely entirely on the experience and knowledge of the new employee, and if a mistake was made at the hiring stage, it will be difficult to detect, let alone correct.

One man’s meat is a robot’s poison! An automated test is not a human. Automated tests are dumb: they cannot think through the steps or step outside the scenario. That is why the classic approach of “translating” test cases into a framework language is an anti-pattern that leads to false failures or missed errors.

Imagine that 100 tests share a piece where you need to log in, generate a document and download it as a PDF. If anything changes slightly, a human will move on, noting that it would be good to update the documentation. If it is a robot, you will get 100 red tests.

What do testers do when they see 100 red tests? That’s right, they think that the tests are broken and wrong!

Nobody wants to sort out a hundred identical fails, so let’s make an automation engineer deal with them. At this point, the team comes to an evolutionary dead end: there is an automation engineer who writes automated tests himself, and then sorts out the reports himself. He becomes a thing in itself, detached from the team.
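
One way to at least contain this failure mode is to keep a shared flow like “log in, generate, download” in a single place, so a change means one fix instead of a hundred identical red tests. Below is a minimal sketch using pytest; AppClient is a stub standing in for the real application client, purely so the example runs.

```python
# A minimal sketch (pytest): the "log in, generate, download" flow lives in one
# shared fixture instead of being copy-pasted into a hundred tests. AppClient is
# a stub standing in for the real application client, only so the example runs.
import pytest


class AppClient:
    def login(self, user, password):
        self.user = user

    def generate_document(self, template):
        return {"template": template, "owner": self.user}

    def download(self, doc, fmt):
        return b"%PDF-1.7 stub for " + doc["template"].encode()


@pytest.fixture
def exported_pdf():
    """The only place that knows how to log in, generate a document and export it."""
    client = AppClient()
    client.login("qa-user", "qa-password")
    doc = client.generate_document(template="invoice")
    return client.download(doc, fmt="pdf")


def test_export_produces_a_pdf(exported_pdf):
    assert exported_pdf.startswith(b"%PDF")


def test_export_is_not_empty(exported_pdf):
    # If the export dialog changes, only the fixture above needs an update,
    # not every test that happens to start from an exported document.
    assert len(exported_pdf) > 0
```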

Documentation won’t write itself

To somehow control testing automation, the QA lead decides to combine automated and manual tests into a single process. And here are three options on how to do so:

  • Integrate test automation with the TMS. Almost all test management systems have an API, so teams can push test results into the system and get aggregated reports. The problem here is simple: automation results usually arrive without metadata and details, and all the testing team can learn from the report is that N autotests failed. Why? You have to dig into the guts of the test run and figure it out yourself.
  • A separate autotest run followed by manual regression. This option, despite being “old school”, is more efficient at catching bugs. The difficulty is that the testing team has to work in two interfaces: automation reports in one tool, manual runs in another. Say the testers create a run and execute it from their interface; after that the automation engineers run their tests and hand an Allure report to the manual testers. If there are errors in the reports, the team has to dig through Allure, then transfer the detected errors to the manual TMS, and then to Jira. That’s a lot of effort (a sketch of an Allure-annotated test that at least makes the report readable follows after this list).
  • Automation becomes a completely independent service. Once a test is automated, it is removed from the manual testing registry. Why link manual testing with automation if you can split the automation engineers into a separate department and get a “green pipeline certificate” from them? This is perhaps the most dangerous path, since it makes one unit completely dependent on another, and such connections often end in conflicts and finger-pointing in retrospectives when something goes wrong at the handoff points.
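
For the second option, here is a minimal sketch (pytest plus the allure-pytest package) of attaching metadata to an automated test so the report tells manual testers more than “N autotests failed”. The allure decorators and the attach call are real allure-pytest API; the TMS URL and BillingApiStub are assumptions made up for the example.

```python
# A minimal sketch (pytest + allure-pytest) of attaching metadata to a test so
# the report says more than "N autotests failed". The TMS URL and the
# BillingApiStub class are assumptions made up for illustration.
import allure


class BillingApiStub:
    """Stand-in for the real API client, kept here so the example runs."""

    def export_latest_invoice(self, fmt):
        return b"%PDF-1.7 stub"


@allure.feature("Billing")
@allure.story("Invoice export")
@allure.testcase("https://tms.example.com/testcase/TC-1042", "Export invoice as PDF")
@allure.severity(allure.severity_level.CRITICAL)
def test_invoice_pdf_export():
    api = BillingApiStub()
    with allure.step("Export the latest invoice as PDF"):
        pdf = api.export_latest_invoice(fmt="pdf")
    allure.attach(pdf, name="exported invoice", attachment_type=allure.attachment_type.PDF)
    assert pdf.startswith(b"%PDF")
```

Run it with pytest --alluredir=allure-results and open the report with allure serve allure-results: the same failure now shows up under a feature, a story, a linked test case, and named steps, which is far easier to hand over to the manual side.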

Despite all the difficulties, after a while the automation engineer starts working more efficiently: running automated tests does not require the time and attention of an engineer. The more tests are written and run, the more time we save compared to the manual process.

The robot dies

When there are ten automated tests, the effect is hard to see; when there are a hundred, it is noticeable. And every day, engineers will expand coverage and make the autotests more convenient to work with: more atomic, more stable, more informative. Isn’t that what we expected? At this moment, one may think that automation has arrived and it will only get better. As if!

Growth problems

Once automated testing starts to grow, a bunch of new problems pop up, and they are much closer to development than to the existing testing process. And if your developers and testers have not established close cooperation and knowledge sharing, solving each of these complexities will take time and effort. So what are we in for?

Test server stability

A test environment will definitely appear. It will end up in the pipeline of the operations team (Ops), and testing will most likely not have administrative access to it.

Typically, a not-very-powerful machine is used as the test server, and when the number of tests or runs grows, it starts choking. This often leads to delays in delivering run results and to test flakiness (here meaning tests that fail due to broken parallelization or timeouts), adding fuel to the fire of the already weak faith in automation on the part of both manual testing and development/operations.
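
Timeouts on an overloaded test server are a common source of that flakiness. A minimal sketch in Python of replacing a fixed sleep with a bounded polling wait is below; wait_until and SlowReportStub are illustrative helpers, not part of any specific framework.

```python
# A minimal sketch of replacing fixed sleeps with a bounded polling wait, so a
# slow test server makes runs slower rather than red. SlowReportStub is a
# made-up stand-in that keeps the example runnable.
import time


def wait_until(condition, timeout=30.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")


class SlowReportStub:
    """Pretends to be a report service that becomes ready after a few polls."""

    def __init__(self, ready_after=3):
        self._polls_left = ready_after

    def status(self):
        self._polls_left -= 1
        return "done" if self._polls_left <= 0 else "pending"


def test_report_is_generated():
    report = SlowReportStub()
    # A hard-coded time.sleep(1) fails whenever the server is overloaded;
    # a bounded poll tolerates the slowdown without inflating every green run.
    assert wait_until(lambda: report.status() == "done", timeout=10, interval=0.1)
```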

Automation drops out of processes

As a result of such progress, even the most competent automation engineer withdraws into his own tasks:

  • Manual testers actively communicate with the team, talking about test results and bugs. Due to the difficulties mentioned above, automated testing remains an internal artifact of the QA team, which does not give the automation engineer a chance to get a breath of fresh air from the product team.
  • It is usually hard to get the development team to enrich automated tests with metadata (IDs, for example): automation engineers are considered less qualified coders than the average developer, so the proposal gets a firm dismissal.
  • A gap in competencies also stands in the way of going to the Ops team; as a result, the admins dismiss the offered help with “we don’t have time to teach you, we’ll set everything up ourselves, and you just use it.” Or “you want a Selenoid cluster? Slow down, we don’t even have Docker in our pipelines!”

As a result, we get a situation where engineers have to break down a lot of walls and silos to “legalize” automation, and without the support of a good manager, such efforts fail. In the end, the automation engineers give up: they just write autotests in their test environment and show beautiful, useless reports. There are more open questions about automated testing than answers:

  1. We don’t know how many autotests are written and running.
  2. It is not clear which functionality is covered by automated testing and which is not.
  3. There is no understanding of where the tests are relevant and where they are not.
  4. There is no clarity on how these tests can be run.
  5. It is necessary to find out where and how to store test results.

Manual domination!

In the end, we have manual testing, which deals with regression and does not scale, and an automation team that was meant to solve the scaling problem but is disempowered and, left without the support of colleagues, keeps accumulating infrastructure and process complexity. Testing becomes ineffective.

What is to be done? Testing 2.0

If you have a large project and set the goal of figuring out how much it costs you to manually test each release, you will realize that automation is indispensable. That is why any large IT business builds testing around automation.

Some say “testing is not necessary: the developers themselves will write the tests, and canary deployment will check everything else”, but this is not entirely true. Here’s the difference:

  • Developers write optimistic tests 90% of the time: tests that check the happy path. When you enter the right data, you get the correct result; the database stores and updates data accurately; data structures and algorithms behave correctly.
  • Testers, on the contrary, work in a destructive paradigm. Their task is to find vulnerabilities in both basic and unexpected scenarios, and to find them even though everything usually works fine (the sketch after this list contrasts the two mindsets).
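
A minimal sketch of the contrast, using pytest; create_user is a toy implementation written inline only so the example runs.

```python
# A minimal sketch contrasting the two mindsets. create_user is a toy function
# defined here purely to keep the example self-contained and runnable.
import re

import pytest


def create_user(name: str, email: str) -> dict:
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        raise ValueError(f"malformed email: {email!r}")
    return {"name": name, "email": email}


def test_create_user_happy_path():
    # The kind of test developers usually write: valid input, expected output.
    user = create_user("Alice", "alice@example.com")
    assert user["email"] == "alice@example.com"


@pytest.mark.parametrize("bad_email", ["", "no-at-sign", "a@b", " alice@example.com"])
def test_create_user_rejects_malformed_email(bad_email):
    # The destructive mindset: hostile, boundary and junk input must fail loudly.
    with pytest.raises(ValueError):
        create_user("Alice", bad_email)
```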

Let’s rewind time a bit to understand where we took the wrong turn. Was it the moment the automation engineer joined the team… No, that decision was correct! However, let’s think again about where he should have been placed so that we don’t end up with automation off the grid. The following may read either as a story from Captain Obvious or as utter nonsense. In either case, reach out to us on Twitter! One article can’t say everything, so let’s complement it with a lively discussion.

Automation in charge!

Let’s flip the paradigm! These days, the trust in development teams is phenomenal in almost any business (glory to DevOps!):

  • Development has competencies for evaluating the code and the way it is tested.
  • Development has huge trust.
  • Development has a close relationship with the management and operations team.

If you look carefully at this list, you will notice that each item addresses one of the sources of test automation problems described above. Let’s imagine what happens if, the moment a QA automation engineer arrives, he becomes not a cog in the manual testing department but the one leading the whole team’s movement towards automation. This slightly changes his entire vector of responsibility and tasks from the very beginning of the journey. We have seen at least three big stories on our radar where such a “change of power” happened in a revolutionary way, thanks to a strong team and a firm managerial decision on the primacy of automation.

The second option suits teams that have no highly qualified testing team, where the developers themselves take care of testing. The obvious decision is to put the automation engineer into the dev team. Such developers have a culture of knowledge sharing and a testing culture high enough that our engineer will have something to learn and someone to show the code to for review.

  • You don’t have to worry about competence. These developers will be happy to help the automation engineer figure out a bunch of technologies that were out of reach in the previous scenario: Docker, git, CI/CD pipelines.
  • Developers don’t like writing automated tests. For them, writing tests for a feature is a chore needed only for the pull request to pass code review. But it is a great task for the automation engineer: he can both dig into the feature and write more interesting tests. Just imagine how great it would be to free up 20-25% of a developer’s time for developing new functionality.
  • The development team also most often communicates better with the Ops team. If all of a sudden the tests start to fail like crazy, the admins will help you figure it out or allocate more CPU/RAM. Why? Because if testing in development slows down releases, this is a problem for everyone in the pipeline.

Of course, in the long run, this will entail major organizational changes. We put automation at the forefront of testing, rather than making it a manual testing tool. Manual testing will become a tool for automated testing!

Have you ever seen anything like that? But think: do we really need manual testers to run regression manually? They should create new scenarios and hand them over to automation, while owning exploratory testing and deep fundamental-testing expertise to ensure maximum quality and coverage. FTW!

Please let me know if the post was insightful or, at least, nice to read! If so, I will keep bringing the best posts from the Qameta Software testing automation blog to the amazing DEV.to community!

In the next article, we will try to define the specific stages that testing goes through, think through the problems of each stage, and offer solutions and tools that help overcome them.
