Dennis Martinez

Originally published at dev-tester.com

Three Types of Tests to Automate - and Three Types to Skip

When taking on a new project for test automation, the possibilities seem endless. If you had an unlimited amount of resources, you could spend weeks or months writing automated tests for every nook and cranny in the application that you could think of testing.

Most of us don't have the luxury of limitless resources when planning test automation. We usually have time and budget constraints to stick to for the project to move ahead. Some of us are lucky if we have enough resources to pull off what's asked within the specified timeline.

In real-world environments, we have to pick and choose where to focus so we can reach our goals on time and on budget. But with so many paths to take, where do we begin? Which tests should we spend time on, and which ones can wait?

While everyone's needs differ, a few useful guidelines can help you get the most out of your time when diving into test automation. This article begins with three kinds of automated tests you don't need to deal with, at least at the start of your planning. Then we'll focus on the types of tests where you should spend most of your effort.


DO NOT attempt to automate everything

When testers learn about test automation and begin a new project, there's an incredible urge to start writing automated tests for everything. Testers realize how much potential there is in automation and want to unleash that power on every route in the application. To paraphrase a well-known expression: when you have a hammer, everything looks like a nail.

Attempting to automate everything in an application is, quite frankly, a waste of time. It leads you into an endless cycle of change and complexity. Most applications continuously evolve, receiving updates and bug fixes throughout their lifespans. Test suites also change frequently to keep up with application development. The more tests you have, the more you'll find yourself wrestling to keep them working correctly.

For a small application with limited uses that won't change, it's possible to automate everything the app does. But even then, if the app is limited in scope and usage, the effort spent on automation may not be worth it, since the risk of the application breaking is minimal or non-existent.

Avoid the temptation to use your test automation skills to check everything. Just because you can doesn't always mean you should.

DO NOT automate for the sake of boosting vanity metrics

If you work with a team, you'll have to provide a summary of the work you're performing. Usually, this comes in the form of metrics. Whether it's for your boss, your teammates, or your own self-improvement, metrics are key to showing progress. They're the portal into your world, giving others a bird's-eye view of the value of your work.

However, metrics can also be a detriment if you let them drive your automation work. Sometimes testers focus on vanity metrics such as the total number of tests in the suite, the number of bugs the tests find, or the application's coverage percentage. These metrics are useless for determining whether your test automation work provides value to the team.

If these kinds of metrics drive your automation work, there's a good chance that you will automate the wrong things to make your metrics look nice. More tests, more coverage, and more captured defects don't mean a better test suite. Adding more tests and coverage can lead to slow and unstable tests, sabotaging your effort.

Measure your work, but don't let metrics cloud your judgment. Focus on making your work and the work of those around you better.

DO NOT run tests that interact with live production systems

A useful form of testing is end-to-end testing, where you can simulate real-world usage in an automated fashion. These tests offer a lot of value by providing more comprehensive coverage for each test in the suite, ensuring that different components interact well with each other. You can use end-to-end tests to cover entire flows in one shot.

The main advantage of end-to-end tests is the ability to run on the application as if a user were using the app in a production environment. However, one mistake I've seen teams commit is automating end-to-end tests using live production systems instead of a separate test environment.

Executing automated tests on the same systems your users interact with might surface issues your users can come across. But these tests can create problems by performing sensitive actions, like payment processing or data manipulation. A fellow developer once told me he ran a set of tests against a production database without knowing the test suite dropped the entire database during initialization.

There are other ways to test in your production environments, like canary testing or setting up feature flags. Keep your testing environments separate, and run your automation only on those systems.
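With a framework like Playwright, for instance, you can pin the whole suite to a dedicated environment in one place. This is a minimal sketch; the staging domain and the TEST_BASE_URL variable are assumptions for illustration, not from any particular project:

```typescript
// playwright.config.ts - a minimal sketch. The staging domain and the
// TEST_BASE_URL variable are hypothetical; the point is that nothing
// in the suite ever defaults to production.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // Every test navigates relative to this base URL, so switching
    // environments is a one-variable change rather than a find-and-replace.
    baseURL: process.env.TEST_BASE_URL ?? 'https://staging.example.com',
  },
});
```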


DO automate business-critical functionality first

When coming up with a test plan, the very first action to take is figuring out which parts of the application are the most critical to test. It might sound like a no-brainer, but I've seen testing teams jump in and create a plan without much thought. They start planning by going through what's in front of them. But just because certain functionality is more evident doesn't mean it's what absolutely needs to work.

One question I like to ask is, "What areas of this application would cause the company to lose money if they broke?" The answers to that question should become the primary focus of the test plan and the tests to automate first. For example, an e-commerce site needs its shopping experience working all the time, so anything related to someone making a purchase - adding items to a cart, checking out, and so on - should receive top billing in the plan. Secondary functionality, like updating a profile picture or adding an item to a wishlist, can wait.
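As an illustration, a first automated check for that e-commerce flow might look something like this in Playwright. It's only a sketch: the routes, product name, and button labels are all hypothetical.

```typescript
import { test, expect } from '@playwright/test';

// Smoke test for the revenue-critical purchase flow. All routes and
// selectors below are invented for illustration.
test('shopper can add an item to the cart and reach checkout', async ({ page }) => {
  await page.goto('/products/sample-item');
  await page.getByRole('button', { name: 'Add to cart' }).click();
  await page.goto('/cart');
  await expect(page.getByText('Sample Item')).toBeVisible();
  await page.getByRole('button', { name: 'Check out' }).click();
  await expect(page).toHaveURL(/checkout/);
});
```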

Another recommended tactic is to ask other departments what areas should get tested first. Other roles, like software engineers and product managers, have different perspectives on what's most important. Using this information can help guide the testing team's plan with new insight. It's still up to the testers to come up with a testing plan, but a whole team effort makes the plan better.

Focus on what's most important for the application first before taking on less-essential tests.

DO spend time automating long, tedious, annoying flows

It seems like every application I've worked on has one section that's difficult to test because it involves a lot of moving parts. These sections are often essential bits of functionality. Ironically, due to their complexity, these sections usually don't get the automated test love they require and deserve. Instead, these flows get shoved into the manual testing process. The problem is that since these tests are so tedious, testers will eventually minimize testing these areas or avoid them altogether.

These kinds of flows are excellent targets for automation, for the exact reasons no one wants to deal with them in the first place. These are high-value, critical flows that need to work. And since these sections are dull and take lots of time to go through, wouldn't it be nice if a machine did the work for you? Two critical areas that often have little automated coverage are testing essential email messages and checking your website's design responsiveness.
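Responsiveness checks are a good example of turning a dull chore into a loop. Here's a rough sketch, again assuming Playwright; the viewport sizes and the navigation selector are placeholders:

```typescript
import { test, expect } from '@playwright/test';

// Repeating the same layout check across screen sizes is exactly the kind
// of tedium a machine handles better. Sizes and selector are placeholders.
const viewports = [
  { width: 375, height: 667 },  // phone
  { width: 768, height: 1024 }, // tablet
  { width: 1440, height: 900 }, // desktop
];

for (const { width, height } of viewports) {
  test(`main navigation is visible at ${width}x${height}`, async ({ page }) => {
    await page.setViewportSize({ width, height });
    await page.goto('/');
    await expect(page.getByRole('navigation')).toBeVisible();
  });
}
```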

A nice side effect I've noticed when automating these tedious, long tests is that they expose an area of the application that could use some improvement. The pain testers feel when handling these sections is a sign that something's not right. Use the opportunity to surface the issue and even come up with solutions to make those flows less complicated.

It might take some time to write a reliable test for specific flows, but in the end, the time you'll save will make the entire process worth the effort.

DO automate areas that you don't want breaking again

We've all seen bugs get reported and squashed, only for the same bug to rear its ugly head again at the most inopportune time. Establishing a practice of creating an automated test for each patched bug and building a robust regression test suite is a lifesaver in these scenarios.

That doesn't mean you should write tests for every bug that the team reports. Not all bugs are created equal. Sometimes there's a minor issue, like a small UI bug that doesn't affect the functionality of the application. These bugs need to get fixed, but you shouldn't write a test case to make sure that it remains fixed. As mentioned earlier, you don't need to automate everything.

However, if it's a bug you never want to see resurface again, it's essential to have an automated test covering that area as soon as possible. If an issue affected your customers in any way, such as accidentally exposed private data or someone getting double-charged on a subscription payment, write tests to ensure it doesn't happen again.
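Using the double-charge example, a pinned regression test might look something like this. It's a sketch under heavy assumptions: the payment route, the button label, and the test-only endpoint for inspecting charges are all invented.

```typescript
import { test, expect } from '@playwright/test';

// Regression test pinned to a fixed bug: double-submitting the payment form
// must never create two charges. The route, label, and the test-only
// charges endpoint are hypothetical.
test('double-clicking Pay creates exactly one charge', async ({ page, request }) => {
  await page.goto('/subscription/payment');
  // Replay the rapid double-submit that triggered the original bug.
  await page.getByRole('button', { name: 'Pay' }).click({ clickCount: 2 });
  const response = await request.get('/api/test/charges?user=regression-user');
  expect(await response.json()).toHaveLength(1);
});
```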

Build trust in your application, and don't let your customers lose confidence in your organization. Spend enough time and effort to ensure big problems don't happen again.

Summary

Jumping into test automation for a new project can get overwhelming quickly. There are many ways to go about it, and with time and budget constraints, it's tough to know the best way to begin.

Everyone will have different needs for their work, so there's no single right way of coming up with a test plan. However, following a few guidelines from the beginning can help make the process easier for you and your team.

It begins with knowing what not to do. You don't need to come up with a plan that covers every corner of the application. Don't let metrics, especially non-essential vanity metrics, drive your strategy. And don't rely on your tests running on live production systems.

Knowing what to avoid clears the path toward the areas where your team needs to focus. Give top priority to business-critical functionality. Take time to automate the boring stuff, so no one has to deal with it in the future. Establish a practice of writing necessary regression tests to make sure ugly bugs don't pop up again.

Taking on a new project doesn't have to get overly complicated. Clearing the clutter and focusing on what matters most from the beginning makes executing your test plan smoother down the road.

What kind of guidelines does your team follow when coming up with a test plan? Share yours in the comments below!
