Introduction
Please note that my understanding of Python's unittest module is still in its nascent stages. Consequently, my suggestions might seem radical, especially to seasoned programmers, but they are indeed inspired by a fresh perspective that a newcomer can bring to the table.
Hello! As I tread on the enlightening path of understanding unit testing in Python, I recently chanced upon a brilliant tutorial from PythonTutorial.net that sheds light on the concept of skipping tests using Python's unittest module.
The unittest module is a versatile tool that provides us with three options to skip a test method or a test class:
- the `@unittest.skip()` decorator,
- the `skipTest()` method of the `TestCase` class, or
- the `SkipTest` exception.

It also allows us to conditionally skip tests using the `@unittest.skipIf()` or `@unittest.skipUnless()` decorators. As a novice stepping into this expansive domain, I discovered certain areas that seemed ripe for improvement. So, let's delve deeper into the subject!
Skip Test vs Soft Fail at Method Level
In the world of development, it is common practice to skip tests that are not yet ready or those that fail under specific conditions. However, merely skipping these tests without probing into or communicating their status could lead to an inefficient process. We need a system that ensures that even skipped tests are taken into account and monitored so they do not fall through the cracks.
So, here's a thought:
Instead of completely sidelining these tests, what if we were to handle the exceptions that might cause a test to fail and report these exceptions as "soft fails"?
Let's first take a quick look at the three different ways in which a test can be skipped at the method level:
Skipping a Test Using a Decorator
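Here's a minimal sketch of how the decorator approach could look. The file name `test_skipping_test_methods.py` and the placeholder assertions are assumptions chosen to line up with the output shown further below:

```python
import unittest


class TestDemo(unittest.TestCase):
    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    @unittest.skip('Work in progress')
    def test_case_2(self):
        # Never executed; unittest reports it as skipped with the reason above.
        self.assertEqual(2 + 2, 4)


if __name__ == '__main__':
    unittest.main(verbosity=2)
```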
Skipping a Test Using a Method
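A sketch of the same idea, this time calling `self.skipTest()` inside the test body (same assumed file and placeholder assertions):

```python
import unittest


class TestDemo(unittest.TestCase):
    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    def test_case_2(self):
        # Skips the rest of this test method at runtime.
        self.skipTest('Work in progress')
        self.assertEqual(2 + 2, 4)


if __name__ == '__main__':
    unittest.main(verbosity=2)
```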
Skipping a Test Using an Exception
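And a sketch that raises the `unittest.SkipTest` exception directly; the test runner treats it exactly like the two options above:

```python
import unittest


class TestDemo(unittest.TestCase):
    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    def test_case_2(self):
        # Raising SkipTest marks the test as skipped rather than failed.
        raise unittest.SkipTest('Work in progress')


if __name__ == '__main__':
    unittest.main(verbosity=2)
```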
All the above methods yield the same result:
```
test_case_1 (test_skipping_test_methods.TestDemo.test_case_1) ... ok
test_case_2 (test_skipping_test_methods.TestDemo.test_case_2) ... skipped 'Work in progress'

----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK (skipped=1)
```
Now, let's consider the idea of a 'soft fail'. Instead of skipping a test, it allows the test to run as planned and logs an issue if something doesn't go as expected. This approach presents a balanced compromise between allowing a test to fail and skipping it outright. Let's look at two ways to implement this:
Soft Fail Using the `.fail()` Method

In this approach, we use the `.fail()` method, which allows us to flag a test case as a failure.
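Here's a sketch of how this could look, assuming a file named `test_soft_fail_test_methods.py` with one passing placeholder test and one work-in-progress test:

```python
import unittest


class TestDemo(unittest.TestCase):
    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    def test_case_2(self):
        # Explicitly mark this work-in-progress test as a failure.
        self.fail(f"Soft fail: Work In Progress")


if __name__ == '__main__':
    unittest.main(verbosity=2)
```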
This would return:
```
test_case_1 (test_soft_fail_test_methods.TestDemo.test_case_1) ... ok
test_case_2 (test_soft_fail_test_methods.TestDemo.test_case_2) ... FAIL

======================================================================
FAIL: test_case_2 (test_soft_fail_test_methods.TestDemo.test_case_2)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/d/Repository/unit_test/skipping_tests/test_soft_fail_test_methods.py", line 9, in test_case_2
    self.fail(f"Soft fail: Work In Progress")
AssertionError: Soft fail: Work In Progress

----------------------------------------------------------------------
Ran 2 tests in 0.019s

FAILED (failures=1)
```
Take note of the 'fail', 'failed', and 'failures' keywords.
Soft Fail Using an Exception
In this approach, we raise an exception when a test case is not yet ready for testing.
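A sketch of the exception-based variant, in the same assumed file:

```python
import unittest


class TestDemo(unittest.TestCase):
    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    def test_case_2(self):
        # An unexpected exception is reported as an ERROR rather than a FAIL.
        raise NotImplementedError("Soft fail: Work In Progress")


if __name__ == '__main__':
    unittest.main(verbosity=2)
```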
This would return:
```
test_case_1 (test_soft_fail_test_methods.TestDemo.test_case_1) ... ok
test_case_2 (test_soft_fail_test_methods.TestDemo.test_case_2) ... ERROR

======================================================================
ERROR: test_case_2 (test_soft_fail_test_methods.TestDemo.test_case_2)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/d/Repository/unit_test/skipping_tests/test_soft_fail_test_methods.py", line 11, in test_case_2
    raise NotImplementedError("Soft fail: Work In Progress")
NotImplementedError: Soft fail: Work In Progress

----------------------------------------------------------------------
Ran 2 tests in 0.006s

FAILED (errors=1)
```
Pay attention to the 'error', 'failed', and 'errors' keywords.
Let's compile all of this information into a markdown table that compares the three different ways of skipping a test with the two methods of implementing a soft fail at the method level.
Technique | Result Keyword |
---|---|
Skip using decorator | skipped |
Skip using method | skipped |
Skip using exception | skipped |
Soft fail using method | fail |
Soft fail using exception | error |
In real-world applications, soft failing a test provides similar functionality to unittest.skip(), but with the added advantage of attempting to run the test under all circumstances.
We can then communicate to our team that tests marked with a "Soft fail" are essentially skipped but with the added visibility into any exceptions they raise upon execution. This information could prove invaluable when it's time to revisit and repair these tests.
Skip Test vs Soft Fail at Class Level
So far, we've explored the concept of 'skip test' and 'soft fail' at a method level. However, let's also delve into how we can utilize these principles at the class level.
Similar to the method level, to skip a test class, we can use the `@unittest.skip()` decorator at the class level. It denotes that all test methods within the class should be skipped. This could be useful when a specific functionality under test is currently being developed or undergoing major changes.
Consider the following example where the `@unittest.skip()` decorator is used at the class level. As a result, all tests within the `TestDemo` class are bypassed:
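(The sketch below assumes a file named `test_skipping_test_class.py` with two placeholder tests.)

```python
import unittest


@unittest.skip('Work in progress')
class TestDemo(unittest.TestCase):
    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    def test_case_2(self):
        self.assertEqual(2 + 2, 4)


if __name__ == '__main__':
    unittest.main(verbosity=2)
```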
This would yield the following output:
```
test_case_1 (test_skipping_test_class.TestDemo.test_case_1) ... skipped 'Work in progress'
test_case_2 (test_skipping_test_class.TestDemo.test_case_2) ... skipped 'Work in progress'

----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK (skipped=2)
```
Now that we've explored how to skip tests at the class level, let's examine how to apply a 'soft fail' at the same level. Again, we have two strategies: the first uses the `.fail()` method, and the second raises an exception.
Soft Fail Using the `.fail()` Method

The `.fail()` method enables us to flag all test cases within a test class as a 'soft fail'. It's invoked within the `setUp()` method, which is run before each test case.
Here's a sample implementation:
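(The sketch assumes a file named `test_soft_fail_test_class.py` and placeholder assertions.)

```python
import unittest


class TestDemo(unittest.TestCase):
    def setUp(self):
        # Runs before each test, so every test in this class
        # is reported as a failure with the same message.
        self.fail(f"Soft fail: Work In Progress")

    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    def test_case_2(self):
        self.assertEqual(2 + 2, 4)


if __name__ == '__main__':
    unittest.main(verbosity=2)
```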
The output from the above code would be as follows:
```
test_case_1 (test_soft_fail_test_class.TestDemo.test_case_1) ... FAIL
test_case_2 (test_soft_fail_test_class.TestDemo.test_case_2) ... FAIL

======================================================================
FAIL: test_case_1 (test_soft_fail_test_class.TestDemo.test_case_1)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/d/Repository/unit_test/skipping_tests/test_soft_fail_test_class.py", line 9, in setUp
    self.fail(f"Soft fail: Work In Progress")
AssertionError: Soft fail: Work In Progress

======================================================================
FAIL: test_case_2 (test_soft_fail_test_class.TestDemo.test_case_2)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/d/Repository/unit_test/skipping_tests/test_soft_fail_test_class.py", line 9, in setUp
    self.fail(f"Soft fail: Work In Progress")
AssertionError: Soft fail: Work In Progress

----------------------------------------------------------------------
Ran 2 tests in 0.007s

FAILED (failures=2)
```
Notice the use of the 'fail', 'failed', and 'failures' keywords.
Soft Fail Using an Exception
Instead of using the `.fail()` method, we can raise an exception to soft fail our test cases at the class level. This way, we signal that the tests are not ready for execution while they remain incomplete.
Here's how it looks:
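(Again, the file name and assertions are assumptions.)

```python
import unittest


class TestDemo(unittest.TestCase):
    def setUp(self):
        # Raising here aborts every test in the class, and each one
        # is reported as an ERROR instead of a FAIL.
        raise NotImplementedError("Soft fail: Work In Progress")

    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    def test_case_2(self):
        self.assertEqual(2 + 2, 4)


if __name__ == '__main__':
    unittest.main(verbosity=2)
```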
And the output would be:
```
test_case_1 (test_soft_fail_test_class.TestDemo.test_case_1) ... ERROR
test_case_2 (test_soft_fail_test_class.TestDemo.test_case_2) ... ERROR

======================================================================
ERROR: test_case_1 (test_soft_fail_test_class.TestDemo.test_case_1)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/d/Repository/unit_test/skipping_tests/test_soft_fail_test_class.py", line 10, in setUp
    raise NotImplementedError("Soft fail: Work In Progress")
NotImplementedError: Soft fail: Work In Progress

======================================================================
ERROR: test_case_2 (test_soft_fail_test_class.TestDemo.test_case_2)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/d/Repository/unit_test/skipping_tests/test_soft_fail_test_class.py", line 10, in setUp
    raise NotImplementedError("Soft fail: Work In Progress")
NotImplementedError: Soft fail: Work In Progress

----------------------------------------------------------------------
Ran 2 tests in 0.007s

FAILED (errors=2)
```
Note the use of the 'error', 'failed', and 'errors' keywords.
Let's compile all this information into a markdown table comparing skipping a test, soft failing with `.fail()`, and soft failing by raising an exception at the class level:
| Technique | Result Keyword |
|---|---|
| Skip using decorator | skipped |
| Soft fail using `.fail()` | fail |
| Soft fail using exception | error |
Conditionally Skipping Tests vs Soft Fail in Python
In certain scenarios, we may want to skip a unit test conditionally. This is typically useful when a test is only applicable or valid under certain conditions or platforms. Python's `unittest` module offers an elegant way to handle this via decorators like `@unittest.skipIf()` and `@unittest.skipUnless()`.
Conditional Skipping with `@unittest.skipIf()`
The `@unittest.skipIf()` decorator allows us to skip a test case when a certain condition is met. A typical use case is skipping a test if the test is running on a particular platform, such as Windows.
Here's a simple illustration:
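(The sketch assumes a file named `test_skipping_test_condition.py`; the `sys.platform` check is my choice of condition.)

```python
import sys
import unittest


class TestDemo(unittest.TestCase):
    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    @unittest.skipIf(sys.platform.startswith('win'), 'Do not run on Windows')
    def test_case_2(self):
        self.assertEqual(2 + 2, 4)


if __name__ == '__main__':
    unittest.main(verbosity=2)
```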
In this code, `test_case_2()` will be skipped if the test suite is running on a Windows platform. The test result will display a message indicating the reason for the skipped test:
```
test_case_1 (test_skipping_test_condition.TestDemo) ... ok
test_case_2 (test_skipping_test_condition.TestDemo) ... skipped 'Do not run on Windows'

----------------------------------------------------------------------
Ran 2 tests in 0.001s

OK (skipped=1)
```
Soft Fail with IF Condition
As opposed to skipping a test, we might want to denote a 'soft fail' based on certain conditions. This can be achieved using the `.fail()` method or by raising an exception.

When using the `.fail()` method in a conditional statement, it will flag the test case as a 'soft fail' if the condition is met:
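(The sketch assumes a file named `test_soft_fail_test_condition.py` and a `sys.platform` check as the condition.)

```python
import sys
import unittest


class TestDemo(unittest.TestCase):
    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    def test_case_2(self):
        if sys.platform.startswith('win'):
            # Flag the test as a failure instead of skipping it.
            self.fail("Do not run on Windows")
        self.assertEqual(2 + 2, 4)


if __name__ == '__main__':
    unittest.main(verbosity=2)
```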
In the example above, `test_case_2()` will fail softly if the platform is Windows, returning this output:
```
test_case_1 (test_soft_fail_test_condition.TestDemo) ... ok
test_case_2 (test_soft_fail_test_condition.TestDemo) ... FAIL

======================================================================
FAIL: test_case_2 (test_soft_fail_test_condition.TestDemo)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\Repository\unit_test\skipping_tests\test_soft_fail_test_condition.py", line 11, in test_case_2
    self.fail("Do not run on Windows")
AssertionError: Do not run on Windows

----------------------------------------------------------------------
Ran 2 tests in 0.001s

FAILED (failures=1)
```
Note the use of the 'fail', 'failed', and 'failures' keywords in the output.
Alternatively, we can cause a soft fail by raising an exception if the condition is true:
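(The condition check is the same assumed `sys.platform` test. Note that `EnvironmentError` is an alias of `OSError` in Python 3, which is why the traceback below reports `OSError`.)

```python
import sys
import unittest


class TestDemo(unittest.TestCase):
    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    def test_case_2(self):
        if sys.platform.startswith('win'):
            # EnvironmentError is an alias of OSError in Python 3.
            raise EnvironmentError("Do not run on Windows")
        self.assertEqual(2 + 2, 4)


if __name__ == '__main__':
    unittest.main(verbosity=2)
```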
This would result in the following output:
```
test_case_1 (test_soft_fail_test_condition.TestDemo) ... ok
test_case_2 (test_soft_fail_test_condition.TestDemo) ... ERROR

======================================================================
ERROR: test_case_2 (test_soft_fail_test_condition.TestDemo)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\Repository\unit_test\skipping_tests\test_soft_fail_test_condition.py", line 10, in test_case_2
    raise EnvironmentError("Do not run on Windows")
OSError: Do not run on Windows

----------------------------------------------------------------------
Ran 2 tests in 0.001s

FAILED (errors=1)
```
Again, observe the use of 'error', 'failed', and 'errors' keywords in the output.
Conditional Skipping with `@unittest.skipUnless()`
Now, if we want to run a test only when a condition is met, Python's `unittest` module offers the `@unittest.skipUnless()` decorator. For instance, we might want to run a test only on a specific platform, like Windows.
Here's how we would use `@unittest.skipUnless()` in our code:
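(Again, the `sys.platform` check is my assumed condition.)

```python
import sys
import unittest


class TestDemo(unittest.TestCase):
    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    @unittest.skipUnless(sys.platform.startswith('win'),
                         'Do not run unless on Windows')
    def test_case_2(self):
        self.assertEqual(2 + 2, 4)


if __name__ == '__main__':
    unittest.main(verbosity=2)
```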
In this case, `test_case_2()` will only be executed if the test suite is running on a Windows platform. If not, it will be skipped, with a message displayed in the test result indicating the reason for skipping the test:
```
test_case_1 (test_skipping_test_condition.TestDemo.test_case_1) ... ok
test_case_2 (test_skipping_test_condition.TestDemo.test_case_2) ... skipped 'Do not run unless on Windows'

----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK (skipped=1)
```
Soft Fail with UNLESS Condition
Similar to the conditional soft fail discussed above, we can set up a 'soft fail' that triggers only when a certain condition is not met. This can be done using the `.fail()` method or by raising an exception in our test case:
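(Here's the `.fail()` version first, with the same assumed file and an inverted platform check.)

```python
import sys
import unittest


class TestDemo(unittest.TestCase):
    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    def test_case_2(self):
        if not sys.platform.startswith('win'):
            # Fail softly on any platform other than Windows.
            self.fail("Do not run unless on Windows")
        self.assertEqual(2 + 2, 4)


if __name__ == '__main__':
    unittest.main(verbosity=2)
```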
In the code above, if the platform is not Windows, `test_case_2()` will return a soft fail, as seen in the following output:
```
test_case_1 (test_soft_fail_test_condition.TestDemo.test_case_1) ... ok
test_case_2 (test_soft_fail_test_condition.TestDemo.test_case_2) ... FAIL

======================================================================
FAIL: test_case_2 (test_soft_fail_test_condition.TestDemo.test_case_2)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/d/Repository/unit_test/skipping_tests/test_soft_fail_test_condition.py", line 14, in test_case_2
    self.fail("Do not run unless on Windows")
AssertionError: Do not run unless on Windows

----------------------------------------------------------------------
Ran 2 tests in 0.006s

FAILED (failures=1)
```
Note the use of the 'fail', 'failed', and 'failures' keywords in the output.
Alternatively, a soft fail can be triggered by raising an exception when the condition is not met:
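(Same assumed setup, with the exception again reported as `OSError` since `EnvironmentError` is its alias.)

```python
import sys
import unittest


class TestDemo(unittest.TestCase):
    def test_case_1(self):
        self.assertEqual(1 + 1, 2)

    def test_case_2(self):
        if not sys.platform.startswith('win'):
            # Reported as OSError, since EnvironmentError is its alias.
            raise EnvironmentError("Do not run unless on Windows")
        self.assertEqual(2 + 2, 4)


if __name__ == '__main__':
    unittest.main(verbosity=2)
```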
This leads to a soft fail that manifests as an error:
```
test_case_1 (test_soft_fail_test_condition.TestDemo.test_case_1) ... ok
test_case_2 (test_soft_fail_test_condition.TestDemo.test_case_2) ... ERROR

======================================================================
ERROR: test_case_2 (test_soft_fail_test_condition.TestDemo.test_case_2)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/d/Repository/unit_test/skipping_tests/test_soft_fail_test_condition.py", line 13, in test_case_2
    raise EnvironmentError("Do not run unless on Windows")
OSError: Do not run unless on Windows

----------------------------------------------------------------------
Ran 2 tests in 0.008s

FAILED (errors=1)
```
Again, note the use of 'error', 'failed', and 'errors' keywords in the output.
Soft Fail: A Closer Look
The use of `self.fail()` introduces an informative layer to our testing practice. Instead of merely stating that a test was skipped, it provides a specific reason why the test was not successful. By capturing exceptions as "soft fails", we can evaluate why certain tests may pose issues. This information is immensely valuable when revisiting and refining these tests.
Deciding Between Test Skipping and Soft Fails
First, let's consider the situations where we might want to skip tests versus using soft fails:
| When to Skip Tests | When to Use Soft Fails |
|---|---|
| Work in Progress: If a feature or function being tested is incomplete, it's reasonable to skip the test. Running such tests only leads to predictable failures without adding value to our test suite. | Diagnosing Test Failures: As highlighted earlier, using `self.fail()` can be instrumental in understanding why a test is failing, which aids in troubleshooting. |
| Platform or Environment-Specific Tests: If a test is only relevant to certain environments or platforms, skipping it in incompatible scenarios saves computational resources and keeps the test report clean. | Flaky Tests: For tests that fail inconsistently, using a soft fail can capture the failure context without disrupting the entire test suite. |
| External Resource Dependency: If a test relies on resources that may not always be available (like a third-party service or hardware), it should be skipped when these resources are unavailable, ensuring the efficiency of the testing process. | Future Functionality: If we write tests for yet-to-be-implemented features, a soft fail serves as a reminder of the expected behavior, acting as a form of documentation for future developers. |
| Experimental or Obsolete Features: It may be wise to skip tests for experimental features or those testing obsolete functionality, allowing focus on more relevant areas. | Conditional Fails: Similar to conditionally skipping tests, soft fails can conditionally allow a test to fail, which is helpful in cases where the test may not always pass due to external conditions. |
The choice between skipping a test or using a soft fail can be a judgment call, dependent on project requirements, team practices, and the context of the test. Our overarching goal should be to create a robust, maintainable, and useful test suite that assists in maintaining software quality.
Understanding Soft Fail Outcomes
In a hypothetical scenario where we run 100 tests, which outcome is preferable:
90 successful tests with 10 skipped tests
or
90 successful tests with 10 soft-failed tests?
The answer is largely situational and depends on the specifics of the tests and their context. Key considerations include:
Considerations | Explanation |
---|---|
Nature of the Tests | If the 10 tests are expected to fail or be skipped due to known conditions, such as a pending feature implementation or a specific environment setup, then both outcomes are reasonable. The important thing is that the outcome aligns with our expectations. |
Information Gathered | A soft fail provides more information about the cause of failure. If these tests are failing due to resolvable issues, a soft fail could offer more insight into the problem. |
Team Morale | An abundance of soft-failed tests can be demoralizing, suggesting low code quality. Conversely, skipped tests might be seen as a sign of pragmatism, acknowledging areas not currently under test. |
Ultimately, the ideal outcome would have all tests pass. But in this hypothetical scenario, neither outcome is inherently superior. What matters is how we interpret these outcomes and the subsequent actions we take to improve our codebase and increase successful tests.
Selecting Between Soft Fail Implementations
Our two soft fail examples, namely the "Only Exception Suggestion" and the "Only Method Suggestion", each have their benefits:
Implementations | Explanation |
---|---|
Only Exception Suggestion | This version uses Python's native exception handling and does not rely on the unittest self.fail method. It is a direct approach, ideal when we aim to handle exceptions in a Pythonic way without worrying about customizing test result outputs. |
Only Method Suggestion | This version employs the unittest's self.fail method to handle exceptions, suitable when we want to give a specific failure message when a test doesn't pass. The self.fail method is designed specifically for use in unittest test cases, making it a more suitable way to indicate test failures. |
Choosing between these two methods depends on our specific needs and the context of our test writing. If we prioritize Python's idiomatic exception handling, the "Only Exception" approach might be more suitable. If we're more interested in customizing test result output, the "Only Method" approach might serve us better.
Distinguishing Soft Fail and Hard Fail
The terms "soft fail" and "hard fail" aren't standard in testing but for our discussion, we can consider them as:
Fail Types | Explanation |
---|---|
Hard Fail | This occurs when a test fails due to an assertion not being met or an unexpected exception being raised. Essentially, the system under test did not behave as expected, indicating a potential problem. |
Soft Fail | This occurs when a test does not execute as planned due to an unmet external condition. The test does not necessarily fail because of an assertion not being met or an unexpected exception raised; rather, it could not run to completion due to expected or acceptable reasons. |
In our examples, the `EnvironmentError` raised when a condition is not met can be considered a "soft fail", whereas if an assertion in a test fails or an unhandled exception is raised, those would be "hard fails". This distinction is subjective and can vary based on our testing approach and test context.
Conclusion
In essence, understanding when and how to use soft fails in our testing can provide valuable insights into why tests fail and guide our problem-solving efforts. The concepts of soft fail and hard fail, while not universally standard, can help us categorize and understand the reasons behind our test outcomes better.
As a novice in unit testing, I believe that sharing these insights might be beneficial for others embarking on their own unit testing journey. Remember, my understanding is still evolving and I am always open to feedback and suggestions. Let's learn and grow together, and please kindly point out areas where I might be mistaken.
To stay updated with my learning journey, follow me on Beacons for more Python and software development insights.
Top comments (4)
Why not just use xfail?
Just want to explore the built-in first, mate
xfail is built in.
`xfail` is not a built-in feature of Python; it's a feature provided by the pytest testing framework.