
Kevin Lamping

# Not All Tests Are Equal

Not all tests are equal. Some hold much more value than others.

While every test either gives you a green checkmark or doesn't, the value of that green checkmark can vary drastically.

It's nice to see a green checkmark on the "About Us" page test letting you know that the page title is correct, but that test won't save the business thousands of dollars.

However, seeing a red X on a test for the "Get a Quote" page, the one that brings in a significant percentage of the business's customers and is always forgotten about in testing? Now that's a valuable test.

You can't put a specific number on the value of any given test, but, thinking about it mathematically, I would measure it with the following formula:

`Value of functionality to the business x (chance that functionality will break + chance a bug won't be caught before release)`

That's a bit of a formula, so maybe some examples will help demonstrate how it plays out.

First, let's think about login functionality. The login flow has a very high value to the business, but it also isn't likely that it will break. This is because it's not the most complex functionality and coding patterns are well established for login flows. To add on to that, if the login page is broken, there's a huge chance that it's caught outside of any automated tests, as normal usage will quickly reveal the bug.

So while the value of the functionality is high, the value of a test for that functionality is low. The left side of the equation is a big number, but the right side is minuscule.

Here's a very rough estimate of what the values for my equation would be:

• Chance it will break: 1%
• Chance it won't be caught: 1%

Plugging that into the formula, you get: `100 x (.01 + .01) = 2`

So you see, despite the large value to the business, the value of the test is much, much smaller. This doesn't mean login tests aren't helpful, just that, compared to other tests, they aren't as valuable.

## Registration Woes

Now let's look at the registration flow. It's similar to the login flow, and is also of high value to the business (it's how you get new users). But, like the login page, registration flows are pretty unlikely to break due to their limited complexity, along with the fact that registration functionality isn't anything new.

However, unlike the login page, the registration flow isn't as "high-touch". It's not normal for developers and product owners to be manually using this flow, as they already have their accounts created. So, here's what my numbers would be:

• Chance it will break: 1%
• Chance it won't be caught: 20%

Again, plugging that into the formula, you get: `80 x (.01 + .2) = 16.8`

So while the business value is lower than that of the login flow, the value of automating the test is much higher, because the flow gets far less manual use.

## What Else Won't be Caught?

Hopefully by now I've demonstrated how my equation works. It's not a perfect equation, and I'm making up these numbers, so take it all with a grain of salt.

That said, here are some examples where that "chance it won't be caught" part really plays a big role:

• Time of day issues (Trying to place a food order after the restaurant is closed)
• Edge cases (Users with specific data outside normal usage)
• Calendar issues (Does functionality change between calendar years?)
• Network issues (How does the offline experience work?)
• Special Offers (Coupons, promotions, referrals)
• A/B testing (Do specific users get a different experience from the majority?)

There's plenty more to list, so let me know in the comments what you think should go there!
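As a quick illustration of the first bullet, here's a sketch of a "time of day" test. The `can_place_order` helper and the opening hours are entirely hypothetical, just to show the shape of the test:

```python
from datetime import time

# Made-up restaurant hours for this example.
OPEN, CLOSE = time(11, 0), time(22, 0)

def can_place_order(now: time) -> bool:
    """Hypothetical rule: orders are only accepted while open."""
    return OPEN <= now <= CLOSE

def test_order_rejected_after_closing():
    # This is exactly the kind of bug manual testing misses:
    # nobody clicks through the order flow at 11pm.
    assert can_place_order(time(12, 30))      # lunchtime order goes through
    assert not can_place_order(time(23, 15))  # late-night order is rejected
```

An automated test can pass in an after-hours timestamp every run, which a developer clicking around during business hours never will.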

## What Else is Likely to Break?

For this part, I'm going to point you to the post "The Five Most Common Bugs you Should be Writing Tests for", as it covers this subject well. But again, mention in the comments anything you find particularly prone to problems.