
Dennis Martinez

Posted on • Originally published at dev-tester.com

How Can You Tell If Your Automated Tests Are Any Good?

While browsing the Ministry of Testing forums the other day, I stumbled upon a thread that caught my attention. The thread's title was, "How do you tell how good an Automation implementation is?" As someone curious about how others handle their test automation, I wanted to see what they had to say on the topic.

The thread did not disappoint in terms of the quantity and quality of replies. Lots of experienced testers from all over chimed in and gave their thoughts about this question. Many talked about their personal experiences with other organizations when working on automation. Some offered hands-on practical advice, while others provided a more theoretical point of view. The thread contained a nice mix of useful feedback, and we can learn a few guiding principles for our implementations.

Most of the responses were great, and I encourage everyone to read through the thread. It'll likely get you thinking. As I read through every answer, I began noticing common themes among the replies from the different testers who took the time to share their thoughts. There seems to be a shared agreement in the test automation world about what makes an automation implementation effective.

The forum thread reminded me a lot of my own experiences, and I saw many of my own thoughts scattered throughout the words written by other testers. Here are the main takeaways I got out of the discussion, and where many of the ideas and feelings overlapped.

It's hard to tell what a good implementation is, but it's easy to spot a bad one

As evidenced by the question from the forum thread's original poster, it's tough to know when your automated test implementation is any good. You might have some thoughts about what makes any testing "good", but it's not something you can quantify with hard evidence.

Although it's difficult to recognize a good test implementation, it's super-easy to spot a bad one. You can see it coming from a mile away. We all know the signs of a bad setup. It might be that the tests are so flaky you never know when they'll pass or fail, or the tests leave considerable gaps in coverage. Slow tests that take forever to execute are also a sign.

You can come up with plenty of reasons why a test suite isn't all that great. However, many people use these reasons to gauge their implementation incorrectly. You might think that if your automation efforts don't have any "bad" signs, then it must be good. Not so fast.

Even so-called "good" signs of test automation are unreliable indicators of the health of your implementation. For instance, a stable test suite that never fails doesn't mean you're testing the right things. You might have tests that don't fail, but they're not catching regressions either. Another example is having tons of coverage for the application under test. Lots of coverage might mean that the team cared more about metrics than about writing an efficient and effective test suite.
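To make that point concrete, here's a minimal pytest-style sketch. The `create_order` function and both tests are hypothetical, purely for illustration: the first test stays green no matter what the code does, while the second would actually fail if the totaling logic regressed.

```python
# Hypothetical production code, included only to make the example self-contained.
def create_order(items):
    return {"items": items, "total": sum(price for _, price in items)}


def test_order_is_created():
    # "Stable" test: it passes as long as create_order returns anything at all,
    # so it will never catch a regression in the total calculation.
    order = create_order([("book", 10), ("pen", 2)])
    assert order is not None


def test_order_total_is_calculated():
    # Meaningful test: it pins down the behavior we care about,
    # so a bug in the totaling logic makes it fail.
    order = create_order([("book", 10), ("pen", 2)])
    assert order["total"] == 12
```

Both tests pass today, but only the second one tells you anything when the code changes.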

It's simple to fix the issues that slow down your progress; it's just as simple to get misled by what looks good on the surface. Handle the apparent signs of a bad automation implementation, but don't take the good-looking parts at face value either.

Code matters just as much as what you're testing

The vast majority of testing teams care only about the end result of their work. Typically, the attention lands on whether the tests pass and whether they help the rest of the team catch or prevent issues that break the application.

Of course, we need to show that our efforts pay off for everyone involved in the product. If your tests don't help with the product's quality, they're practically useless. Unfortunately, paying attention only to what's visible can lead you to neglect what's driving those results: your actual test code.

No matter how diligent you are in caring for your codebase, every team eventually reaches a point where things don't perform as well as they used to. Everyone has to tend to their past work, refactoring or even deleting code that has outlived its usefulness.

Having a fast and stable test suite is excellent, but you also need to ensure that you can keep those tests running in optimal condition for the long haul. The time you spend maintaining tests is an essential factor for a solid test automation implementation. If you have to spend hours or days wrestling with your codebase to modify anything, your implementation will stagnate and eventually stop being useful.
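As a rough sketch of what that maintainability can look like in practice, here's one common approach, assuming Selenium with pytest and a `driver` fixture (the `LoginPage` class and its selectors are hypothetical): the selectors live in one helper class, so when the markup changes, you edit one place instead of every test.

```python
from selenium.webdriver.common.by import By


class LoginPage:
    # All login-screen selectors live here; a UI change is a one-line fix.
    EMAIL_FIELD = (By.ID, "email")
    PASSWORD_FIELD = (By.ID, "password")
    SUBMIT_BUTTON = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, email, password):
        self.driver.find_element(*self.EMAIL_FIELD).send_keys(email)
        self.driver.find_element(*self.PASSWORD_FIELD).send_keys(password)
        self.driver.find_element(*self.SUBMIT_BUTTON).click()


def test_user_can_log_in(driver):
    # The test describes behavior, not HTML structure.
    LoginPage(driver).log_in("user@example.com", "secret")
    assert "Dashboard" in driver.title
```

However you structure it, the goal is the same: changes to the application shouldn't send you digging through dozens of test files.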

Every organization and team has different time and budget constraints for what they can do with their testing efforts. However, making sure your codebase allows the team to build and grow the test suite rapidly and with few issues will pay off tenfold in increased quality.

"Good" is a team effort

The thread on the Ministry of Testing forum has plenty of excellent comments and suggestions about distinguishing a good test automation implementation from a bad one. It has lots of different strategies and points of view, which are great to learn from and use for your work.

After reading through the entire thread, I noticed a common thread uniting every response. Although most of the answers offered something specific to the person writing them, my main takeaway from the discussion is that everyone has their own version of what a good test automation implementation is.

Every person and every team will have their unique definition of what's considered good and what isn't, and it will vary greatly depending on who you ask. If you put this question to ten different testers, you'll get at least eight different responses. It wouldn't surprise me if you got ten entirely different answers.

Everyone's circumstances are unique, so it's not uncommon to have different priorities based on the information we have in our hands at any given time. What I consider part of a good test suite implementation (like clean and maintainable code) might register as a low-priority item for you and your team. It might not even be on the "good list" for my own team or for a particular project we're working on.

If you're spending too much time pondering whether your test automation is any good, you shouldn't make this decision on your own. Bring the question up with the rest of your team and see what the discussion brings to the table. One of the posters in the forum thread put it best:

"The only real indication that I have is if the team is satisfied with the automation, then it's probably good. It's at least good enough."

Summary

If you work on implementing test automation for your company, chances are you're wondering if what you're doing is right. As shown by the question posed in the Ministry of Testing forum, you're not alone. It's not always a negative thing, either; it's great to think of ways to improve your work.

Everyone has their thoughts and opinions about this question, but you can pull out a few themes from the responses to help guide your decisions.

One of the first things to realize is that it's difficult to pin down what a good test automation implementation looks like. It's easy to see a lousy test suite: slow tests that often fail, tests that don't cover anything useful, and so on. But don't let that fool you: what looks good on the surface might conceal issues that don't serve the team.

Something you can check to determine the quality of your implementation is whether you built the underlying codebase for the long haul. Code maintainability and simplicity go a long way in a good test suite. They can be the difference between a long-lasting, stable test suite and one that disappears because no one wants to touch the code.

Finally, remember that figuring this out isn't an individual exercise. Everyone has their own definition of "good", and it can differ by team, person, or project. It's best to combine the opinions of those around you with your current circumstances and mix them into your definition of "good" for where you are at any given time.

It's okay to check what other testers are doing, and use their experiences to mold yours. But in the end, it all lies with you and your team. If it's good enough for you, that's all that matters.

What ways do you and your team use to determine if your test automation is good? Let me know in the comments section below!
