
Rishi Baldawa

Originally published at rishi.baldawa.com

Understanding Systems Through Tests

I'm assuming the reader is familiar with the different types of tests. If not, consider skimming through this blog post or Wikipedia.

When I'm in a strange codebase¹ and short on time, I take a shortcut: I go through all the tests first. If the tests are well organized, plentiful, and easy to comprehend, I can swiftly understand the verified rules and restrictions of the system (or set of systems) and its neighbours. If they're sparse, messy, or in dire need of help, I know my week just got way more interesting.

Unit Tests

Reading unit tests should help you realize what the team thinks of as a single unit. The input and output values help you guess what kind of data flows around. Mocks / stubs should aid in figuring out how other components behave. The test names should help you figure out the use-cases / edge-cases around the component. Tests for edge-cases along specific dimensions (like dates or countries) tell you that the system, and any changes you make within it, need to respect them.
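For illustration, here's a minimal sketch of such a unit test in Python. The `OrderService`, its rate provider, and the country-specific tax are all hypothetical names I made up; the point is how much the mock, the values, and the test name reveal:

```python
from unittest.mock import Mock

class OrderService:
    """Hypothetical component under test."""
    def __init__(self, rate_provider):
        self.rate_provider = rate_provider

    def total(self, amount, country):
        # The tax rate depends on the country, a dimension the tests respect.
        return amount * (1 + self.rate_provider.tax_rate(country))

def test_total_applies_country_specific_tax():
    # The mock stands in for a neighboring component and documents
    # what the team expects it to return.
    rates = Mock()
    rates.tax_rate.return_value = 0.25
    service = OrderService(rates)

    assert service.total(100.0, "GB") == 125.0
    rates.tax_rate.assert_called_once_with("GB")
```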

A lack of tests, or obvious gaps in them, implies the codebase has not been fully verified. You're in the wild wild west now, boy!

Integration Tests

Integration tests help you figure out the boundaries of the system and the services it interacts with. The input / output values help you understand the kind of data being shared between systems. Compared to unit tests, this should help you identify which data stays within the system and what gets passed around.
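As a hedged sketch, an integration test might exercise a real HTTP boundary instead of a mock. The base URL, the `/orders` endpoint, and the payload below are all invented for illustration:

```python
import requests

# Assumed local test deployment of the system under test.
BASE_URL = "http://localhost:8080"

def test_create_order_round_trips_through_the_billing_service():
    # Whatever crosses the wire tells you which data leaves the system.
    payload = {"sku": "ABC-123", "quantity": 2}
    response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)

    assert response.status_code == 201
    body = response.json()
    # The neighboring billing service is expected to have enriched the order.
    assert body["invoice_id"] is not None
```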

If there are too many integration tests (especially relative to unit tests), expect the system to not care as much about latency (maybe even performance in general). It is possible that the system cares a lot more about availability or accurate interactions. If there are too many tests against a particular neighboring system, I'd bet that service is flaky and that, in the past, developers have had bad experiences working with it. I would watch out for it as well.

If there are too few tests, either the system doesn't have many interactions or integrating with neighboring systems hasn't been painful yet. It is generally a matter of time: either the software gets deprecated or someone adds the tests after a frustrating outage.

Acceptance Tests

Acceptance tests, when available, are a good way to understand the core functionality that the software supports. There's still a bit of code-to-business-logic translation involved, but it's always a useful exercise. I don't focus too much on the inputs and outputs, as these tests tend to have a bias towards the "happy path" use-cases. Any odd edge-case handling is probably present to counter non-deterministic behaviour.
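A rough happy-path sketch of what I mean; the checkout flow is invented, and the stand-in definitions exist only to keep the example self-contained:

```python
from dataclasses import dataclass, field

# Stand-ins for the real application code, defined here only so the
# sketch runs on its own.
@dataclass
class Receipt:
    status: str
    items: list

@dataclass
class Cart:
    items: list = field(default_factory=list)

    def add(self, sku, quantity=1):
        self.items.extend([sku] * quantity)

def checkout(cart, payment_method):
    return Receipt(status="paid", items=list(cart.items))

def test_customer_can_check_out_a_single_item():
    cart = Cart()
    cart.add("ABC-123", quantity=1)

    receipt = checkout(cart, payment_method="card")

    # Only the core promise is verified; edge-cases live elsewhere.
    assert receipt.status == "paid"
    assert receipt.items == ["ABC-123"]
```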

Feature Tests

Generally what the non-programmers wished the code would do. If there are too many of these tests, a QA or developer went to town with BDD. Either way, they make an honest attempt to deliver the same value as acceptance tests, but in more humane language.
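Here's a sketch of that more humane language, assuming pytest-bdd as the tooling (my assumption, not a given). The scenario and steps are invented:

```python
# The Gherkin would normally live in its own checkout.feature file:
#
#   Feature: Checkout
#     Scenario: Customer pays for a single item
#       Given a cart with one item
#       When the customer checks out
#       Then the receipt is marked as paid

from pytest_bdd import scenario, given, when, then

@scenario("checkout.feature", "Customer pays for a single item")
def test_checkout():
    pass

@given("a cart with one item", target_fixture="cart")
def cart_with_one_item():
    return ["ABC-123"]

@when("the customer checks out", target_fixture="receipt")
def customer_checks_out(cart):
    return {"status": "paid", "items": cart}  # stand-in for the real flow

@then("the receipt is marked as paid")
def receipt_is_paid(receipt):
    assert receipt["status"] == "paid"
```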

From experience, there are very few accurate feature tests. Most express partial functionality or worse. The code executing the test spec could divulge more information, but I've been burned a few times in the past.

Comparison Tests

This tells me the code Extracts, Transforms and/or Loads critical datasets and that the system needs to focus on accuracy. Also, since the cost of comparing the data is smaller than actually translating it into POJOs, you may be dealing with either a complicated structure or a very simple one, but nothing that sits in the middle. There may be compliance / regulations involved as well. One big takeaway: if you are adding new sources or changing the output, ensuring that the tests are still accurate will be painful the first time. Probably a manual effort.
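A minimal golden-file sketch of such a comparison; the transform and the file paths are hypothetical:

```python
import json
from pathlib import Path

def transform(rows):
    # Stand-in ETL step: normalize country codes to upper case.
    return [{**row, "country": row["country"].upper()} for row in rows]

def test_transform_matches_golden_output():
    source = json.loads(Path("testdata/input.json").read_text())
    expected = json.loads(Path("testdata/golden_output.json").read_text())

    # Comparing whole datasets is cheaper than asserting field by field,
    # but re-blessing the golden file after a change is a manual chore.
    assert transform(source) == expected
```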

Generative Tests

These are so rare. When they are present, they help me get a quick grasp of the core assumptions and expectations of the system. The variants help me realize the terminologies / technologies the system deals with. If these are hard to understand, you're dealing with a complex system with many side-effects².
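For example, a small property-based test written with Hypothesis (one common way to do generative testing; the discount function is invented). Instead of fixed examples, the framework generates inputs and the test states an invariant that must hold for all of them:

```python
from hypothesis import given, strategies as st

def apply_discount(price, percent):
    # Hypothetical function under test.
    return price * (1 - percent / 100)

@given(price=st.floats(min_value=0, max_value=1_000_000, allow_nan=False),
       percent=st.integers(min_value=0, max_value=100))
def test_discount_never_increases_the_price(price, percent):
    # A core assumption the system encodes: discounts only reduce prices.
    assert apply_discount(price, percent) <= price
```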

Fuzz Tests

These tell me that the system is expected to take care of all error handling and should not crash. All APIs need to be conscious of random inputs³, and insecure behavior cannot be tolerated.
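A bare-bones harness sketch using Atheris (an assumed tool choice; the parser is hypothetical). Note there are almost no assertions: the contract is simply that no input, however malformed, crashes the code:

```python
import sys
import atheris

def parse_header(data: bytes):
    # Stand-in for the real parser: it must reject garbage gracefully.
    if len(data) < 4 or not data.startswith(b"HDR"):
        raise ValueError("not a header")
    return data[3:]

def TestOneInput(data: bytes):
    try:
        parse_header(data)
    except ValueError:
        pass  # an expected, well-behaved failure
    # Any other exception (or a crash) is a finding.

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```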

Side Notes

  • If the tests are not part of the deployment chain or aren't run frequently, you can bet they are out-of-date. Don't bother with them and just move on. At the very least, tread with care.
  • There are way more types of tests. Even the ones I described have sub-categories. I picked the ones I come across frequently enough to form opinions about.
  • This was partially written during a particularly busy period. I ended up taking many breaks over many days to complete it. I apologize in advance for any inaccuracies.


  1. I was leading a ranger team for more than 2 years, so this happened a lot.

  2. Probably justifies the need for Invariant testing.

  3. Property-based testing helps me know the reduced subset of valid input values that can be passed in. 
