We’ve written a few tests in this series now. You’ll have seen we decorate methods with Test to tell the test runner to include them. You might also remember we previously used TestCase to run them with different data sets. These attributes are just two of the many available in NUnit. In this article we’ll look at three other attributes you might find useful. (And as a bonus, we’ll also briefly cover a fourth, as it’s closely related.)
Preparing for a Test
When we looked at alternatives to mocking we briefly mentioned setup and teardown routines. These are methods that run immediately before and after each test. The methods themselves can have any name, but must be decorated with SetUp.
In the following example, TestSetup is a setup method.
public class NUnitAttributes
{
    [SetUp]
    public void TestSetup()
    {
        Console.WriteLine("Test Setup");
    }

    [Test]
    public void MyTest1()
    {
        Console.WriteLine("My Test 1");
    }

    [Test]
    public void MyTest2()
    {
        Console.WriteLine("My Test 2");
    }
}
When we run our tests, the output looks like this for MyTest1.
Test Setup
My Test 1
And here’s the output for MyTest2.
Test Setup
My Test 2
This is useful when we want to do some preparation before each test. For example, we might need to set up a database with the right conditions. Or, if the tests write to a database, we might want to clear existing data: when we start with a blank slate, we know exactly what’s written.
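As a rough illustration, here’s a minimal sketch of that idea. The FakeOrderStore and Order types are hypothetical stand-ins for whatever shared dependency your tests use; the point is simply that the setup method recreates it before every test.

public class OrderTests
{
    // FakeOrderStore is a hypothetical in-memory stand-in for a real database.
    private FakeOrderStore _store;

    [SetUp]
    public void TestSetup()
    {
        // Recreate the store so every test starts with a blank slate.
        _store = new FakeOrderStore();
    }

    [Test]
    public void SavingAnOrder_AddsExactlyOneRecord()
    {
        _store.Save(new Order());
        Assert.That(_store.Count, Is.EqualTo(1));
    }
}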
A Word of Caution
I’ve previously seen setup methods used to reset (state) variables scoped to the test fixture (i.e. class) before each test. Conceptually, the code would look like the following.
public class NUnitAttributes
{
    private int _count = 0;

    [SetUp]
    public void TestSetup()
    {
        _count = 0;
    }

    [Test]
    public void MyTest1()
    {
        _count++;
        Assert.That(_count, Is.EqualTo(1));
    }

    [Test]
    public void MyTest2()
    {
        _count++;
        Assert.That(_count, Is.EqualTo(1));
    }
}
When this fixture is run:

1. TestSetup runs first, before any test. _count is set to 0.
2. MyTest1 runs. _count is incremented to become 1. 1 is equal to 1. MyTest1 passes.
3. TestSetup runs again, before MyTest2. _count is reset to 0.
4. MyTest2 runs. _count is again incremented to become 1. 1 equals 1. MyTest2 passes.
By default, NUnit tests run one after another. However, it’s possible to run them in parallel to reduce the overall time taken. If we decide to do this, we need to remember that methods marked with SetUp run before each individual test, and the tests do not all start at precisely the same time. Some tests might start while others are already in progress. If a setup method modifies a shared variable scoped to the fixture (or wider), it could influence the results of other tests.
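To make that risk concrete, here’s a hedged sketch of how a fixture might opt into parallel execution with NUnit 3’s Parallelizable attribute. The fixture and test names are made up; the point is that, with tests running concurrently, a setup method resetting a shared field can interleave with a test that’s still using it.

[Parallelizable(ParallelScope.All)]
public class ParallelCounterTests
{
    // Shared across tests that may now run at the same time - risky.
    private int _count;

    [SetUp]
    public void TestSetup()
    {
        // May reset _count while another test is mid-way through using it.
        _count = 0;
    }

    [Test]
    public void MyTest1()
    {
        _count++;
        Assert.That(_count, Is.EqualTo(1)); // can fail intermittently in parallel runs
    }

    [Test]
    public void MyTest2()
    {
        _count++;
        Assert.That(_count, Is.EqualTo(1));
    }
}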
Clearing up Afterwards
In addition to setting things up beforehand, we might want to do something after a test has run. For example, if our tests use a database, we might want to tidy up by removing data written to it while testing. We can do this with the TearDown attribute. It can be used either on its own, or alongside a setup method as shown in the following example.
public class NUnitAttributes
{
    [SetUp]
    public void TestSetup()
    {
        Console.WriteLine("Test Setup");
    }

    [TearDown]
    public void TestTearDown()
    {
        Console.WriteLine("Test Tear Down");
    }

    [Test]
    public void MyTest1()
    {
        Console.WriteLine("My Test 1");
    }

    [Test]
    public void MyTest2()
    {
        throw new Exception("Testing Exception");
    }
}
Here’s the output from running MyTest1.
Test Setup
My Test 1
Test Tear Down
Here’s the output for MyTest2. Note how the teardown runs even if the test throws an uncaught exception.
Test Setup
Test Tear Down
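For a slightly more realistic sketch (my own example, not part of the fixture above), imagine each test writes to a temporary file; the teardown deletes it afterwards, and because teardown runs even when a test throws, nothing is left behind. This assumes a using System.IO; directive.

public class FileProcessingTests
{
    private string _tempFile;

    [SetUp]
    public void TestSetup()
    {
        // Give each test its own temporary file to work with.
        _tempFile = Path.GetTempFileName();
    }

    [TearDown]
    public void TestTearDown()
    {
        // Runs after every test, pass or fail, so the file is always removed.
        if (File.Exists(_tempFile))
        {
            File.Delete(_tempFile);
        }
    }

    [Test]
    public void WritingToTheFile_Succeeds()
    {
        File.WriteAllText(_tempFile, "hello");
        Assert.That(File.ReadAllText(_tempFile), Is.EqualTo("hello"));
    }
}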
For Special Cases
We occasionally have a few tests we want excluded when our test fixture runs. They’ll be valid, and we still want to be able to run them when necessary; we just don’t need to (or shouldn’t) run them all the time. We might see this with special or expensive tests, possibly for code that rarely changes.
In these situations, one option is to comment these tests out. But committing commented-out code to source control generally isn’t good practice. It isn’t immediately clear whether it was disabled intentionally. And it’s excluded from compilation, meaning we might have to fix it the next time we want to run it.
Luckily, we can use Explicit to decorate it instead. When a test is marked Explicit, it continues to appear in the Visual Studio Test Explorer window (provided it’s still marked Test). However, it only runs if you manually click on the test itself to run it; it’s skipped when we run the fixture it’s contained in. The following shows an example of how we might use it.
[Test, Explicit]
public void MyTest1()
{
    Console.WriteLine("My Test 1");
}
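As far as I’m aware, Explicit also accepts an optional reason string, which is a handy place to record why the test is excluded from normal runs. For example:

// The reason string documents why this test only runs on demand.
[Test, Explicit("Slow integration test - run manually before a release")]
public void MyExpensiveTest()
{
    Console.WriteLine("My Expensive Test");
}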
Just Say No
As promised, we’ll quickly cover one more attribute. In my experience, requirements can change as projects go on. When this happens, some tests become invalid in their current state. However, we might not want to delete them just yet; we might want to retarget them, extract parts of their logic, or reuse them in some other way. This can be tricky if we need a deployable build as soon as possible, and a requirement of that build is to have all tests passing.
We could be tempted to use Explicit; marked tests will be excluded when the fixture they’re in is run. However, we can use Ignore to better express our intentions. Unlike Explicit, tests marked Ignore won’t run, even when triggered manually. In addition, we can (and must in NUnit 3) specify a reason why the test is ignored, which is perfect for situations like this. We can see how it’s used in the following example.
[Test, Ignore("Ignore attribute demo")]
public void MyTest1()
{
    Console.WriteLine("My Test 1");
}
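If memory serves, NUnit 3’s Ignore attribute also supports an Until named parameter: the test stays ignored until the given date and runs again afterwards, which helps stop ignored tests from being forgotten. The date below is just an illustration.

// Ignored only until the given date, after which NUnit runs the test again.
[Test, Ignore("Awaiting requirements rework", Until = "2030-01-01 00:00:00Z")]
public void MyTest1()
{
    Console.WriteLine("My Test 1");
}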
Summary
NUnit has many attributes you can use with unit tests. You can call methods before and after each test, and have tests run only when manually triggered.
By decorating a method with SetUp, you can have its logic run automatically prior to each test. This might be useful for setting up a dependent system, e.g. preparing a database. If you need to clean up after your tests, you can decorate a method containing the necessary logic with TearDown.
You might not want every test to always run. Some tests might be expensive or cover areas that change rarely. Whatever the reason, you can decorate these with Explicit, and they’ll only run when selected specifically. If you have tests that should always be skipped – maybe a requirements change means they’ll need rewriting – you can use Ignore, which will also let you specify a reason.
Thanks for reading!
This article is from my newsletter. If you found it useful, please consider subscribing. You’ll get more articles like this delivered straight to your inbox (once per week), plus bonus developer tips too!