My team maintains the integrations with a lot of third-party services: five payment service providers, a tax service, ERP systems, and so on.
Besides our unit tests, we also have a couple of integration tests (or integration contract tests) for every third-party service, to make sure that our client libraries still match their API. Sadly, the test environments of the third-party systems aren't always reliable. For example, one of the payment service providers fails during roughly one out of 30 requests (sometimes even more often). For this provider, we have 8 integration tests (one for each function we are using) that run during our CI pipeline.
This means that every third or fourth run of our CI pipeline will fail due to this error. Reaching out to the payment service provider has had no effect so far. So these 8 tests are classic "flaky tests". There are a lot of good posts here on dev.to about flaky tests and why they are really bad. Two awesome examples are
We Have A Flaky Test Problem
Bryan Lee ・ Dec 9 '19 ・ 18 min read
A lot of posts about this topic explore ways to avoid flaky tests, and that is a good idea. But those "contract tests" (depending on your third-party system) might always be flaky. So how do we deal with those tests? We cannot delete them, because they still provide value.
Quarantine flaky tests?
When I started to read up on flaky tests, I found that some people proposed putting all flaky tests you cannot delete into a separate test suite. For more information about that, read the aforementioned post about quarantined tests, or maybe Martin Fowler's post Eradicating Non-Determinism in Tests. The idea is to move the flaky (or, in this case, quarantined) tests out of your CI pipeline to keep it healthy.
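As a rough sketch of what such a quarantine could look like in xUnit.net (this is an assumption for illustration, not our actual setup: the trait name "Quarantine" is made up, and you would tag the flaky tests with [Trait("Category", "Quarantine")]), you can split the runs with a test filter:

```shell
# Regular CI run: everything except the quarantined tests.
dotnet test --filter "Category!=Quarantine"

# Separate, tolerant run: only the quarantined tests.
dotnet test --filter "Category=Quarantine"
```

This keeps the main pipeline green while the quarantined tests still run somewhere and stay visible.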
I totally understand the point, but deep inside I still wanted to run all the tests during our CI pipeline, because I wanted to see problems as soon as possible. But how do I keep those tests in the pipeline without the pipeline breaking every third or fourth run for "no reason"?
Skip tests for a specific reason?
Wouldn't it be great if the tests still failed when there is a "real problem", but the test result were ignored when they fail due to the known connection issue?
That's what I thought. So I tried to find a way to skip the tests if we ran into the known connection issue with the payment service provider. But in xUnit.net there is no way to say Assert.Skip(); or Assert.Inconclusive(); or anything like that.
But then I found a library called Xunit.SkippableFact. It does exactly what I need. With it, I can mark my tests (called Fact in xUnit.net) as SkippableFact and tell it which results to skip/ignore.
Let's look at a small example. Imagine a super useful service that receives a string and returns the length of that string. And of course, the connection to this service is pretty unstable: roughly every third call will throw a SecurityException.
using System;
using System.Security;

public class UnstableServiceConnection
{
    public int GetLength(string text)
    {
        var random = new Random();
        // Fail roughly one call in three to simulate the unstable connection.
        if (random.Next(3) == 0)
        {
            throw new SecurityException("Something is insecure here!");
        }
        return text.Length;
    }
}
The tests might look something like this:
using FluentAssertions;
using Xunit;

public class UnstableConnectionTests
{
    [Fact]
    public void GivenShortText_ReturnsCorrectLength()
    {
        var connection = new UnstableServiceConnection();
        var length = connection.GetLength("ABCD");
        length.Should().Be(4);
    }
}
On average, every third run of this nice test will fail. Sad, isn't it?
But with Xunit.SkippableFact we can do this:
using System.Security;
using FluentAssertions;
using Xunit;

public class UnstableConnectionTests
{
    [SkippableFact(typeof(SecurityException))]
    public void GivenShortText_ReturnsCorrectLength()
    {
        var connection = new UnstableServiceConnection();
        var length = connection.GetLength("ABCD");
        length.Should().Be(4);
    }
}
Now the test will be skipped if it throws a SecurityException.
And you are not limited to exceptions. You could check for almost anything inside of your test. Just use Skip.IfNot(myBooleanValueOrExpression); inside the test body.
[SkippableFact]
public void SomeTestForWindowsOnly()
{
    Skip.IfNot(Environment.IsWindows);
    // Test Windows only functionality.
}
Example copied from https://github.com/AArnott/Xunit.SkippableFact
The two-step process
In the end, we did both. We copied the flaky tests into a separate test suite that has to run "green" before a deployment, but we also kept the same tests as SkippableFacts in the CI pipeline. I think, for us, this was the best way to address these problems.
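Purely as an illustration of that two-step process (this is a minimal sketch in GitHub Actions syntax; the job names and the FlakyContractTests project path are invented, not our real pipeline), it could look roughly like this:

```yaml
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      # All tests run here; the flaky ones are SkippableFacts, so the known
      # connection issue skips them instead of failing the whole build.
      - run: dotnet test

  pre-deployment:
    needs: ci
    runs-on: ubuntu-latest
    steps:
      # The copied flaky suite must run green (no skips tolerated as "pass")
      # before we actually deploy.
      - run: dotnet test FlakyContractTests/FlakyContractTests.csproj
```

The key design choice is that a skip is acceptable on every commit, but a real pass is required at the moment that actually matters: right before deployment.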
Now it's your turn. What do you think about it? Do you have similar problems? If so, how do you handle it?