Dina Levitan

Posted on • Originally published at yourbase.io

How to write good software tests

We’ve all experienced the effects of chaos and entropy. You put your neatly-wired earbuds in your pocket only to pull them out in the most tangled mess imaginable. How a single cable managed to twist itself into 25 knots in three minutes is beyond explanation, right?

Well, the same thing happens in software development. You update a few simple lines of code, run the test suite, and some random test throws a meaningless error. (“PC Load Letter? What the f--- does that mean?!”) You investigate only to find the error is coming from a critical piece of code no one has dared to touch for 10 years.

It’s hard to understand the impact of our code changes via test suites, especially when pushing to complex, highly-integrated repositories. You could suggest refactoring the test suite, but that’s a lot of technical debt your team can’t afford to take on right now. You can already hear the project manager saying, “I’ll put it on the backlog with all the other stuff.”

Organizations across the globe suffer from these types of problems. Why? Because it’s all too easy to build an incredibly complex system without realizing it. And by that time, it’s too late.

The only way out of this scenario is with better testing.

Why software testing matters

Effective software testing lets developers focus on the parts of a system that are genuinely complicated. The greater the complexity, the more challenging the behavior is to model. That’s why it’s critical to write decoupled, repeatable tests that provide important, meaningful signals. There’s a lot packed into that statement, so we’re going to break it down in this article.

Let’s start with the simple stuff. There are two main benefits to software testing:

  1. Increase confidence while moving code toward production
  2. Increase the velocity of that movement

The closer code gets to production, the more it needs to be exercised. The more exercise code gets, the more confidence one can have in its production viability. Confidence comes from well-written tests that provide information on where problems exist.

But confidence alone is not enough; speed to production is equally important. Modern businesses cannot tolerate painstakingly detailed testing of every change. There’s a middle ground between being too laser-focused on small things and writing tests that cover the universe in one go. That’s where the most valuable testing takes place.

So how should a developer determine what testing is valuable?

What is a good software test?

Good software tests have well-defined characteristics and goals:

  • Prove your software works
  • Prove your software does what it’s expected to do
  • Prove your assumptions about the software hold true
  • Prove something relatively complicated works under a variety of scenarios and works with other components as expected
  • Give a strong, meaningful signal that is informative and unambiguous
  • Run quickly
  • Are repeatable
  • Are free of side effects

Several of these are obvious, but a few are not. We like to think of a failed test as a “Check Engine” light on a car. The Check Engine light doesn’t offer any valuable information on its own, but a mechanic has equipment to pull more meaningful data from the vehicle’s diagnostics system. If the light is on and the diagnostic reads “Engine valve #2 is loose,” that’s a meaningful signal, and the mechanic can get right to the repair. Otherwise, the mechanic has to run an extensive set of tests just to identify the problem.

Software test results need to provide diagnostic information that’s valuable to a developer or quality engineer. It’s not enough to say, “Database error.” A good test should say, “Failed to connect to database” and surface the exact error message coming from the database connector itself. Clear error messages like these are informative, unambiguous, and can lead a developer to the exact lines of code that need correction.
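
As a rough sketch of what that looks like in practice, here’s a minimal pytest example. The connect_to_db function and DBConnectionError below are hypothetical stand-ins for whatever database client your project actually uses; the point is that the failure report surfaces the connector’s own message instead of a generic “Database error”:

    import pytest

    # Hypothetical application code: stand-ins for a real database client.
    class DBConnectionError(Exception):
        pass

    def connect_to_db(url: str) -> None:
        if not url.startswith("postgres://"):
            raise DBConnectionError(f"unsupported scheme in {url!r}")

    def test_connect_surfaces_connector_error():
        # pytest.raises captures the exception, so the report can show the
        # exact message from the connector, not just "database error".
        with pytest.raises(DBConnectionError) as excinfo:
            connect_to_db("mysql://prod-db:3306")
        assert "unsupported scheme" in str(excinfo.value)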

Good tests are repeatable and can run in any order without depending on one another. In other words, they’re idempotent: running them once, twice, or in a different sequence produces the same result. And good tests model a wide variety of use cases. For example, a test shouldn’t actually send email, but it should model the email system so that every way delivery can fail is exercised and handled.
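
As a hedged illustration, here’s one way to model that in a test. The signup function and EmailError are invented for this sketch; the mock stands in for the real mail server, so the test sends nothing, yet the failure path still gets exercised:

    from unittest import mock

    # Hypothetical application code: signup() sends a welcome email through
    # an injected sender, so a test can substitute a fake one.
    class EmailError(Exception):
        pass

    def signup(user_email: str, send_email) -> bool:
        try:
            send_email(to=user_email, subject="Welcome!")
            return True
        except EmailError:
            return False  # signup survives a mail outage

    def test_signup_survives_email_outage():
        # No real email is sent: the test has no side effects and passes
        # identically no matter when or in what order it runs.
        failing_sender = mock.Mock(side_effect=EmailError("SMTP timeout"))
        assert signup("a@example.com", send_email=failing_sender) is False
        failing_sender.assert_called_once()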

What is a good test suite?

A test suite is a collection of good tests that model a wide variety of use cases and run as quickly as possible. Whereas good tests provide confidence about isolated instances of change, a good test suite provides confidence that under normal circumstances, the entire system will operate successfully. Good test suites make dependency changes clear, help developers move fast, increase confidence when submitting and approving pull requests, and reveal bad tests.

On the other hand, a bad test suite is executed less and less over time because it does not create or increase confidence in the overall system. People stop running bad test suites because they:

  • Are unmanageable
  • Are slow
  • Serve only a few people
  • Leave developers shrugging
  • Are vulnerable to dependency changes
  • Are overly comprehensive
  • Fake or mock out too much of the environment/context (see the sketch below)
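
To make that last point concrete, here’s a sketch of the anti-pattern (every name below is invented for illustration). Because the database, the payment gateway, and everything in between are mocks, the test only checks that the mocks agree with themselves; it would keep passing even if the real checkout code were completely broken:

    from unittest import mock

    def test_checkout_overmocked():
        # Anti-pattern: every collaborator is a mock, so no real code runs.
        db = mock.Mock()
        gateway = mock.Mock()
        gateway.charge.return_value = {"status": "ok"}

        result = gateway.charge(amount=100)  # "exercises" only the mock
        db.save_order(result)

        assert result["status"] == "ok"     # asserts on the stub's own value
        db.save_order.assert_called_once()  # asserts on the test's own wiring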

You have probably heard the saying, “If a tree falls in the woods and no one is there to hear it, does it make a sound?” Our corollary: “If you write tests and no one runs them until the last minute, do they prove your software is ready for production?”


An overly comprehensive test suite tests too much breadth. If you’re testing 2+2=4 and it hasn’t failed in the last 25 years, then it doesn’t tell you anything when it passes. If you have a single test that tests everything in the system, you’ve basically got a Check Engine light. And if a test takes 25 minutes to execute and fails at the end, that’s probably 25 minutes wasted. (We think even 10 minutes is too long to wait for a test suite to finish.)

Writing tests that provide value, confidence, and increased velocity

It can be difficult to identify the “middle ground” between being too laser-focused on small things and writing tests that cover the universe in one go. Significant progress can be made by eliminating tests that take too long, provide meaningless information, never fail, serve only a few people, are vulnerable to dependency changes, and lack idempotency.

Developers are best served by tests that focus on code that’s complicated. Confidence grows with tests that cover a wide variety of failure scenarios and provide strong, meaningful signals about those failures. Such tests should run quickly and be free of side effects.
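
One lightweight way to get that coverage in pytest is a parametrized test: one test body, many failure scenarios, each reporting its own id when it fails. The parse_port function here is a hypothetical example of the small-but-tricky code worth testing this way:

    import pytest

    # Hypothetical code under test: validation logic with several failure modes.
    def parse_port(raw: str) -> int:
        port = int(raw)  # raises ValueError for non-numeric input
        if not 0 < port < 65536:
            raise ValueError(f"port out of range: {port}")
        return port

    @pytest.mark.parametrize(
        "raw",
        ["", "abc", "-1", "0", "70000"],
        ids=["empty", "non-numeric", "negative", "zero", "too-large"],
    )
    def test_parse_port_rejects_bad_input(raw):
        # Each case runs quickly, touches nothing outside the function, and
        # names itself clearly in the failure report.
        with pytest.raises(ValueError):
            parse_port(raw)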

The closer code gets to production, the more it gets exercised. Test suites that are chock-full of quick, meaningful tests will increase confidence. When that code is deployed, there should be very few questions about its production viability.


To learn more about how YourBase Test Acceleration can speed up your test suite run times, check out yourbase.io!

Top comments (4)

Ryan Latta

I hope more people see this article.

You hit on some characteristics of tests that too many people miss and wind up with problems in their tests AND code.

Test independence is critical.

Dina Levitan

Thanks for the feedback, Ryan!

squidbe

“If you’re testing 2+2=4 and it hasn’t failed in the last 25 years, then it doesn’t tell you anything when it passes.”

While that's obviously a rhetorical example, your point is valid. I've seen way too many tests that are testing obvious things that wouldn't fail or things that would be caught simply by linting. The problem is that some managers/teams focus on code coverage and simply think "higher is better". Engineers should consider whether a test is testing something that can break under certain conditions (e.g., unexpected input) or if certain conditions aren't met (e.g., a data contract changes).

Kinjal

This is well written and to the point - this can serve as a motivation to write good test cases.