ChunTing Wu

How to Make Example-based Testing Better

I have introduced property-based testing before and mentioned that it is designed to cover the shortcomings of example-based testing. Since it is difficult to hand-pick examples that hit every edge case and boundary condition, we instead generate a large number of examples automatically to make the coverage more complete.

Nevertheless, property-based testing comes with a big challenge: how do we write the verification condition?

Since we can't predict what the input will be, we essentially have to restate the test target in a different way in order to verify it. For example, if we want to verify the function add with a property-based test, our verification condition looks like this.

a + b == add(a, b)
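
To see what this looks like in practice, here is a minimal sketch using the Hypothesis library; the add implementation shown is a stand-in for the real test target:

from hypothesis import given
from hypothesis import strategies as st

def add(a, b):  # a stand-in implementation under test
    return a + b

@given(st.integers(), st.integers())
def test_add(a, b):
    # the verification condition: restate the target in another way
    assert add(a, b) == a + b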

When the test target is complicated enough, writing such a condition is extremely hard. Not to mention, if I already have a way to express the verification condition, why didn't I write the test target that way in the first place?

So example-based tests still have their value.

Then how do we write a good example-based test? Perhaps you have encountered the problem of a unit test containing so many examples that it becomes almost impossible to modify.

When we are not the author of a unit test, it is hard to reconstruct the author's thinking behind a complex test, and hard to understand a complex test case completely.

Therefore, this article introduces a few ways to make example-based testing easier.

Table-based Testing

First of all, before simplifying a unit test, we need to understand its structure, which I call the 3A pattern (sketched in code right after the list).

  • Arrange: First, prepare the test data and any preconditions.
  • Act: Then, actually execute the test target.
  • Assert: Finally, verify that the execution results are as expected.
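
To make these steps concrete, here is a minimal sketch; the calculator function is a stand-in for whatever you are testing, and it appears for real in the examples below:

def test_addition():
    # Arrange: prepare the test data
    expression = "2 + 3"
    # Act: execute the test target
    ret = calculator(expression)
    # Assert: verify the result is as expected
    assert ret == 5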

No matter what kind of unit test we write, it basically involves these three steps. There are several reasons why a unit test becomes complicated.

  1. Expected results are hard to produce.
  2. The number of test cases grows over time.
  3. The relationship between the arrangement and the assertion cannot be identified.
  4. The arrangement workload is huge.

The root cause of the first one is usually the complexity of the test target, which means the unit itself is too large, so it is necessary to reduce the scope of the unit test.

The second reason is the focus of this article. When writing unit tests, we often add a corresponding test when we encounter a new bug, resulting in a test case getting larger and larger. Here is a simple example.

def test_calculator():
    # 1st
    expression = "2 + 3"
    ret = calculator(expression)
    assert ret == 5
    # 2nd
    expression = "2 + 3 * 4"
    ret = calculator(expression)
    assert ret == 14
    # 3rd
    expression = "(2 + 3) * 4"
    ret = calculator(expression)
    assert ret == 20
    # and so on

Such a test case keeps growing, and it is easy to spot plenty of duplicated code that differs only in its parameters. So how should we simplify it?

Building a table.

def test_calculator():
    table = [
        ("2 + 3", 5),
        ("2 + 3 * 4", 14),
        ("(2 + 3) * 4", 20),
        # and so on
    ]
    for expression, expected in table:
        ret = calculator(expression)
        assert ret == expected

By building a table, adding a new case in the future is just a matter of adding a new entry to the table. The PyTest framework offers an even more elegant version of this approach: parametrize. Here is a PyTest example.

import pytest

@pytest.mark.parametrize("expression,expected",
    [
        ("2 + 3", 5),
        ("2 + 3 * 4", 14),
        ("(2 + 3) * 4", 20),
        # and so on
    ]
)
def test_calculator(expression, expected):
    ret = calculator(expression)
    assert ret == expected

With parametrize, the framework handles the loop for us, so the code is much simpler. That said, I feel plain table building is already enough to achieve our goal.
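
As a side note, parametrize can also give each case a readable label, which makes failures easier to locate. A small sketch using pytest.param with an id (same stand-in calculator as before):

import pytest

@pytest.mark.parametrize("expression,expected",
    [
        pytest.param("2 + 3", 5, id="addition"),
        pytest.param("2 + 3 * 4", 14, id="precedence"),
        pytest.param("(2 + 3) * 4", 20, id="parentheses"),
    ]
)
def test_calculator(expression, expected):
    assert calculator(expression) == expected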

Building a table also solves problem 3: the table makes it easy to see which conditions vary between tests and how they relate to the expected results.

And what about problem 4? Let's extend the calculator example. Suppose our calculator can remember previous results; how should we test it?

def test_calculator():
    calculator = Calculator()
    expression1 = "x = 5 * 10"
    calculator.calculate(expression1)  # x is now 50
    expression2 = "y = 2 + 3"
    calculator.calculate(expression2)  # y is now 5
    expression3 = "x / y"
    ret = calculator.calculate(expression3)  # 50 / 5
    assert ret == 10

From the above example, we can see the target's state changes over time with each input, so how do we build a table?

def test_calculator():
    calculator = Calculator()  # no init params
    table = [
        ("x = 2", "y = 3", "x + y", 5),
        ("x = 2", "y = 3 * 4", "x + y", 14),
        # and so on
    ]
    for *expressions, expected in table:
        for expression in expressions:
            ret = calculator.calculate(expression)

        assert ret == expected
        calculator.reset()  # important: clear the state between cases

It still works, but even this simple example makes the test logic a bit complicated. If our target's state depends not only on the input but also on the initialization state and many external conditions, the table will grow more and more columns, and the complexity will follow.

Is there a way to simplify it even further, so that whoever adds new cases in the future knows what to do at a glance?

Yes, I call it config-based testing.

Config-based Testing

Let's continue with the calculator example.

scenario:
  - name: calculator_test1
    expressions:
      - x = 2
      - y = 3
      - x + y
    execute: calculate
    expect:
      result: 5
  - name: calculator_test2
    expressions:
      - x = 2
      - y = 3 * 4
      - x + y
    execute: calculate
    expect:
      result: 14

By writing a human-readable configuration file, we can simply describe the test scenarios we want, and we can also read back the purpose of past cases. I believe maintaining a configuration file is far less stressful than maintaining code.

Of course, we first have to write a parser to handle the configuration, but it is a worthwhile investment. Moreover, when the target's behavior is affected by many factors, we only need to add descriptive fields to the configuration file.
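
Here is a minimal sketch of such a parser, assuming PyYAML to load the schema above; Calculator and the file name calculator_tests.yaml are placeholders rather than real project names:

import pytest
import yaml

def load_scenarios(path="calculator_tests.yaml"):
    # read the YAML file and turn each scenario into a pytest param
    with open(path) as f:
        config = yaml.safe_load(f)
    return [pytest.param(case, id=case["name"]) for case in config["scenario"]]

@pytest.mark.parametrize("case", load_scenarios())
def test_calculator(case):
    calculator = Calculator()  # a fresh instance per scenario
    method = getattr(calculator, case["execute"])  # e.g. "calculate"
    for expression in case["expressions"]:
        ret = method(expression)
    assert ret == case["expect"]["result"]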

Frankly speaking, such a configuration file poses no difficulty even for someone who doesn't know how to program, so anyone can help us improve the test coverage.

Conclusion

It is not hard to write a unit test, but it takes a lot of effort to write a unit test that is easy to maintain.

When we write a unit test, we look at the case from the author's point of view, but in practice it is often someone else who maintains it. That person does not share the author's original context, which makes the unit test very difficult to maintain.

Even worse, when the test cases grow large, they seriously hinder the evolution of the functionality. Have you ever changed a little bit of code and watched the unit tests explode into a wall of red, only to find, when you tried to fix them, that you couldn't understand what they were testing?

Sometimes we document the purpose of these test cases separately, but keeping documentation in sync is a major challenge, and the documents are often out of date.

It is better to let the document itself be executable. Writing the documentation as the unit test, like the configuration file above, is far more efficient than trying to reverse-engineer the intent of the tests afterwards.

This article uses one interesting feature of PyTest, but PyTest runs much deeper than that, so if you have more great uses for it, please feel free to share them with me.
