Chapter 4: A Quick Tour of PyTest

4.1 Test Discovery

  • Test discovery is the process by which pytest automatically finds tests within the specified locations.
  • There is a lot more to test discovery than what is mentioned in this article; the configuration can be specified at a much more granular level.
  • However, we are going to look at the default discovery mechanism that pytest currently gives us.
  • The default behaviour is as follows:
    • It looks for tests in our tests/ directory. We specified this directory under the tool.pytest.ini_options section of the pyproject.toml file (see the sketch after this list).
    • It then looks for any module (file, in this case) whose name starts with test_.
    • Within each module, it looks for functions whose names start with test_; it assumes these are the tests and runs them.
  • To know more about pytest test discovery, we can read the documentation.
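  • A rough sketch of what that configuration in pyproject.toml could look like (the directory name comes from the article; the exact layout is an assumption):

    [tool.pytest.ini_options]
    testpaths = ["tests"]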

4.2 Parameterising our Tests

  • For now, our code has a single test function inside test_app.py.

    import pytest
    from app import do_something_with_somethingelse, SomethingElse
    
    def test_something() -> None:
        somethingelse1: SomethingElse = SomethingElse(1)
        somethingelse2: SomethingElse = SomethingElse(2)
        somethingelse3: SomethingElse = SomethingElse(3)
        assert do_something_with_somethingelse(somethingelse1) == 'some'
        assert do_something_with_somethingelse(somethingelse2) == 'thing'
        assert do_something_with_somethingelse(somethingelse3) == ''
    
  • This single test does not indicate which behaviour we are testing. Later on, we will have multiple tests covering the different behaviours.

  • Not only that: notice that if one assert statement in the test fails, the rest of the assert statements are never executed, and the test as a whole is considered to have failed.

  • To prevent this from happening, pytest allows parametrization of tests.

  • It has a @pytest.mark.parametrize decorator (note the spelling: parametrize) with which we can plug in different values for the same use case (same test function).

  • Each parameter set becomes its own test case, so if one case fails, the others still run.

  • Using this, our test could look something like:

    import pytest
    from app import do_something_with_somethingelse, SomethingElse
    
    @pytest.mark.parametrize("dummy_somethingelse_value, dummy_expected_result", [
        (1, 'some'),
        (2, 'thing'),
        (3, '')
    ])
    def test_something(dummy_somethingelse_value, dummy_expected_result) -> None:
        assert do_something_with_somethingelse(SomethingElse(dummy_somethingelse_value)) == \
            dummy_expected_result
    
  • The syntax for parametrize is:

    @pytest.mark.parametrize('label1, label2, ...', [
        (label1_value, label2_value, ...),
        (label1_value, label2_value, ...),
        # ... and so on
    ])
    def test_dummy(label1, label2, ...) -> None:
        pass
    
  • So the main idea behind mark.parametrize is that if one of the cases in the parameter list fails, the rest of them still run.

  • Apart from that, it makes our tests look a lot cleaner.

4.3 Skipping Tests

  • We can skip tests in pytest by using the @pytest.mark.skip(reason='Any message') decorator.

    import pytest

    @pytest.mark.skip(reason='Feature not implemented yet')
    def test_skip_sometest() -> None:
        assert some_unimplemented_feature is not None
    
  • When we do TDD (Test Driven Development), we write tests before we implement the functionality.

  • We usually skip such tests.

  • There is also a skipif variation of skip which lets us skip a test based on a condition: for example, if the operating system is Windows, or the version of Python is not supported. A sketch follows.
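
  • A minimal sketch of how skipif could be used; the conditions and test names below are illustrative assumptions, not from the original code:

    import sys

    import pytest

    # Hypothetical example: skip this test on Windows.
    @pytest.mark.skipif(sys.platform == 'win32', reason='Not supported on Windows')
    def test_unix_only_behaviour() -> None:
        assert True

    # Hypothetical example: skip when the Python version is too old.
    @pytest.mark.skipif(sys.version_info < (3, 10), reason='Requires Python 3.10+')
    def test_requires_new_python() -> None:
        assert True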

4.4 Handling Failing Tests

  • We might have tests that are bound to fail. These tests can be used as:
    • Assurance that things that are supposed to fail (like some hypothetical scenario) do fail.
    • Documentation to better understand the system, i.e. to show that certain settings will fail.
  • When executed, these tests will show up as failures and fail the entire build.
  • We can avoid that by deliberately marking them as tests that are expected to fail.
  • We can achieve this with the @pytest.mark.xfail decorator.

    import pytest

    @pytest.mark.xfail
    def test_deliberate_failure() -> None:
        assert 1 + 1 == 3
    
  • However, we should not mark tests that raise exceptions as expected failures; there is a better tool for that, which brings us to the next topic.

4.5 Handling Tests that raise Exceptions

  • We might want to make sure that certain scenarios raise a particular exception.
  • This can be handled using the pytest.raises(Exception) context manager.

    import pytest

    def zero_division() -> float:
        return 1 / 0

    def test_zero_division() -> None:
        with pytest.raises(ZeroDivisionError):
            zero_division()
    
  • As we can see, an exception is expected when we run zero_division; pytest.raises lets us assert that it is actually raised.

  • This test will now fail if ZeroDivisionError is not raised.
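
  • pytest.raises also gives us access to the raised exception, so we can assert on its details. A minimal sketch (the message check below is an illustrative assumption, not from the original code):

    import pytest

    def zero_division() -> float:
        return 1 / 0

    def test_zero_division_message() -> None:
        # Capture the exception so we can inspect it after the block.
        with pytest.raises(ZeroDivisionError) as exc_info:
            zero_division()
        assert 'division' in str(exc_info.value)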

4.6 Fixtures in PyTest

  • Fixtures are used when multiple tests require a certain amount of setup that they all share in common.
  • For example, tests that require a database connection.
    • In such cases, it is better to create the connection once and share it among the tests, rather than creating a new connection for every test.
    • So we create fixtures that get passed to our tests.
  • In unittest this is done by the setUp method. It creates objects that can be shared throughout the test methods.
  • By convention, if we create a file called conftest.py and define a fixture in that file, it will be available to all of our tests, based on the scope of the fixture.
  • So we create conftest.py in our tests/ directory alongside test_app.py and create fixtures there as follows:

    import pytest

    @pytest.fixture(scope="session")
    def db_con():
        # Pseudo-code: DatabaseLibrary stands in for a real database client.
        db_url = "driver+dialect://username:password@host:port/database"
        db = DatabaseLibrary()
        with db.connection(db_url) as conn:
            yield conn
    
  • Notice the scope argument to the fixture. We will look at what scopes are next.

  • Fixture scopes define how many times a fixture will run, based on the area it covers.

4.6.1 Understanding fixture scopes

  • To learn more about fixture scopes, we can read: https://betterprogramming.pub/understand-5-scopes-of-pytest-fixtures-1b607b5c19ed.
  • Scope determines how widely a fixture is shared, i.e. up to where a single fixture instance is reused. There are five scopes: function, class, module, package, and session.
  • function scope:

    • This is the default scope; it applies even without explicitly adding scope='function'.
    • The fixture will be executed once per test function.
    • This can be heavy if the fixture does expensive setup, because it runs for every single test.
    • This scope is suitable for fixtures that are only used once.

      import json

      import pytest

      @pytest.fixture()
      def only_used_once():
          # Read the application config from disk for the one test that needs it.
          with open("app.json") as f:
              config = json.load(f)
          return config
      
    • The function scope is also suitable for very lightweight fixtures, such as ones that return a constant or a fresh value every time.

      import pytest
      from datetime import datetime
      
      @pytest.fixture()
      def light_operation():
          return "I'm a constant"
      
      @pytest.fixture()
      def need_different_value_each_time():
          return datetime.now()
      
  • class scope:

    • The scope='class' runs the fixture once per test class, no matter how many test methods the class contains.
    • There is also a special usage of the yield statement in pytest that allows part of the fixture to run after all the test functions that use it.
    • The code before yield acts as setup code.
    • The code after yield acts as teardown code.
    • For example, testing a database:

      @pytest.fixture(scope="class")
      def prepare_db(request):
          # pseudo code
          connection = db.create_connection()
          request.cls.connection = connection
          yield
          connection.close()
      
      @pytest.mark.usefixtures("prepare_db")
      class TestDBClass:
          def test_query1(self):
              assert self.connection.execute("..") == "..."
      
          def test_query2(self):
              assert self.connection.execute("..") == "..."
      
    • We can also yield a value to any test that wants it, and the remaining code after the yield still acts as the teardown code.

  • module and package scopes:

    • The scope='module' runs the fixture per module (per file).
      • A module may contain multiple functions as well as classes.
      • No matter how many tests are in the module, the fixture is run only once.
    • The scope='package' runs the fixture per package (directory).
      • A package contains one or more modules.
      • No matter how many modules there are, the fixture is run only once.
    • An example would be:

      import json
      import logging

      import pytest

      @pytest.fixture(scope="module")
      def read_config():
          with open("app.json") as f:
              config = json.load(f)
              logging.info("Read config")
          return config
      
    • We might want to read config only once per module or only once throughout the entire package.

  • session scope:

    • Every time we run pytest, it is considered to be one session.
    • The scope='session' makes sure the fixture executes only once per session.
    • Session scope is designed for expensive operations like truncating a table or loading a test data set into a database (see the sketch after this list).
  • We can read more about fixture topics such as autouse and execution order of fixtures. We will look at them when required.
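  • A minimal sketch of a session-scoped fixture; loading a JSON test data set here is an illustrative assumption, not from the original code:

    import json

    import pytest

    @pytest.fixture(scope="session")
    def sample_dataset():
        # Hypothetical example: load an expensive test data set only once
        # for the whole pytest run and share it with every test that asks for it.
        with open("test_dataset.json") as f:
            return json.load(f)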

4.7 Monkey Patching in PyTest

4.7.1 Understanding the need to mock

  • Why would we want to mock anything?
  • To understand this, you need to understand the principles of unit testing:
    • Unit tests are the fastest tests. Why?
    • It is testing the functionality of a single unit, without any external dependencies.
    • Say you test a function which calls the database.
    • You would only want to test the flow of such a function without making the call to the actual database.
    • How do you do that? You basically mock the entire database connection object and replace it with a dummy object.
    • You can set whatever values you want that dummy object to return, create whatever dummy functions are required along with their return types. You can mimic the entire database connection’s set of methods if you want.
    • What this allows us to do is set dummy values for all the external calls that are within our function and set default results for them.
    • With this, we can set up a fake scenario without actually making external calls.
    • This is why unit tests are fast.
    • Integration testing, on the other hand, tests against the actual dependencies: we evaluate the real results from the external calls. That's why integration tests are slower than unit tests.
  • This is one of the reasons we use mock.
  • Monkey patching gives us a better interface than the bare-bones mock library. A small sketch of the dummy-object idea follows.
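  • A minimal sketch of the dummy-object idea described above; the function and class names are illustrative assumptions, not from the original code:

    # Hypothetical code under test: it only needs an object with a fetch_user method.
    def get_username(conn, user_id: int) -> str:
        row = conn.fetch_user(user_id)
        return row["name"]

    # A dummy connection that mimics the real one without touching a database.
    class FakeConnection:
        def fetch_user(self, user_id: int) -> dict:
            return {"id": user_id, "name": "alice"}

    def test_get_username() -> None:
        # The unit test exercises get_username without any external dependency.
        assert get_username(FakeConnection(), 1) == "alice"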

4.7.2 Monkeypatch syntax overview

  • Fixtures can also depend on other fixtures. monkeypatch is such a fixture: it is built into pytest and available to all test functions.
  • Let’s create a fixture:

    import sys

    import pytest

    @pytest.fixture(scope="function")
    def capture_stdout(monkeypatch):
        std_output: dict = {'output': '', 'write_count': 0}

        def fake_writer(s) -> dict:
            # Record everything that would have been written to stdout.
            std_output['output'] += s
            std_output['write_count'] += 1
            return std_output

        monkeypatch.setattr(sys.stdout, 'write', fake_writer)
        return std_output

    def test_print(capture_stdout):
        print("Hello")
        # print also writes the trailing newline through sys.stdout.write.
        assert capture_stdout['output'] == "Hello\n"
    
  • What we did here is replace sys.stdout.write with fake_writer which returns the captured dictionary.

  • Every time a test function uses the capture_stdout fixture, sys.stdout.write is replaced with fake_writer for the duration of that test.

  • Any code that writes to sys.stdout within that test will therefore be calling fake_writer instead. In our example, print writes to sys.stdout.

  • The dictionary returned by the fixture is available through the capture_stdout argument and can be used to assert results.

  • This is how we mock things.

  • But why use monkeypatching instead of just mocks?

    • At the end of the test, monkeypatch makes sure that everything is undone and sys.stdout.write goes back to its original value.
    • In unittest we use mock.patch as a decorator or context manager, which also makes sure that everything is undone at the end of the test (see the sketch below).
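    • A minimal sketch of mock.patch as a context manager; patching time.time here is an illustrative assumption, not from the original code:

      import time
      from unittest import mock

      def test_with_mock_patch() -> None:
          # time.time is replaced only inside the with-block.
          with mock.patch("time.time", return_value=42.0):
              assert time.time() == 42.0
          # Once the block exits, the original time.time is restored.
          assert time.time() != 42.0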

4.8 Understanding Test Coverage

  • We have put the --cov option in pyproject.toml (see the sketch after this list).
  • This shows us test coverage as a percentage.
  • Here, 100% test coverage means that all our tests combined have touched every single line of executable code in src.
  • Any lower percentage means that some executable code was not exercised by our tests.
  • However, having 100% test coverage does not mean that there are no bugs; it all depends on the quality of the assertions. Still, it is good to aim for 100% code coverage.
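  • A rough sketch of what that part of pyproject.toml might look like; the source path and report format are assumptions:

    [tool.pytest.ini_options]
    testpaths = ["tests"]
    addopts = "--cov=src --cov-report=term-missing"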

Top comments (3)

dillan teagle (teaglebuilt)

this is a solid article, one thing to add would be setup and teardowns with the db with alembic

Namah Shrestha (zim95)

I am planning to create an article for SQLAlchemy core. The migrations will be handled by alembic. Please stay tuned until that article is out. I am currently working on the source code. The tests that you request will be there.

Namah Shrestha (zim95)

Thank you for the feedback. I will try to add this as well.