DEV Community


Limitations of xUnit in Cloud Native Testing

Bryan Lee ・ 4 min read

Written by Daniel Rodriguez

xUnit is the name used for the collection of test frameworks loosely based on the original Smalltalk test framework, proposed by Kent Beck in the early 1990s and later popularized by JUnit. While it has become commonplace nowadays, the idea of systematically checking an application’s correctness in code (i.e. a test) was quite novel at the time.

Another revolutionary idea in those days was the concept of Continuous Integration: every developer merging code on a recurring basis to ensure everything was working as expected. And to help them check for correctness and identify potential regressions, teams leveraged these testing frameworks to test their applications during the continuous integration flow.

But if you stop to think about it, we are talking about practices that were introduced somewhere between 20 and 25 years ago!

With regard to CI, there is no shortage of options nowadays, from free and open-source solutions to commercial SaaS and on-premise products. And with the adoption and proliferation of Docker and containers, it is really easy for developers to run their custom tech stacks on any CI provider. But at the end of the day, CI services are still predominantly dumb runtimes, where a developer can define what to run, but the CI doesn’t actually understand what is being executed. And it is only due to “standardization” around the xUnit XML format that some CI tools are able to parse and report on test data.
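To make that concrete, here is a minimal sketch of what such parsing looks like, assuming the common (but unstandardized) JUnit-style report layout. The suite and test names below are invented for illustration; real vendors vary in attribute names and nesting.

```python
# Minimal sketch of how a CI tool might parse a JUnit-style XML report.
# The layout is the common JUnit format; vendors vary in details.
import xml.etree.ElementTree as ET

SAMPLE_REPORT = """\
<testsuite name="checkout-service" tests="3" failures="1" time="2.41">
  <testcase classname="CartTest" name="test_add_item" time="0.31"/>
  <testcase classname="CartTest" name="test_remove_item" time="0.28"/>
  <testcase classname="PaymentTest" name="test_charge_card" time="1.82">
    <failure message="ConnectionError: payment-gateway unreachable"/>
  </testcase>
</testsuite>
"""

def summarize(report_xml: str) -> dict:
    """Return the pass/fail summary -- the only insight the format offers."""
    suite = ET.fromstring(report_xml)
    failed = [tc.get("name") for tc in suite.iter("testcase")
              if tc.find("failure") is not None]
    return {
        "suite": suite.get("name"),
        "tests": int(suite.get("tests", 0)),
        "failures": failed,
    }

print(summarize(SAMPLE_REPORT))
# {'suite': 'checkout-service', 'tests': 3, 'failures': ['test_charge_card']}
```

Note how little is there to extract: a suite name, counts, timings, and failure messages. That summary is essentially everything the format can tell a CI.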

xUnit has failed to evolve for the cloud native world

But in the era of cloud native applications, innovation in testing has been stuck in a seemingly alternate universe where XML is the pinnacle of testing innovation. There are sophisticated engineering organizations running thousands of tests at planet scale, and they’re still generating XML reports and uploading them to their CI or reviewing them with a text editor.

Furthermore, there is no standard XML schema for these reports – every vendor is free to define its own, with varying levels of compatibility with the most popular CIs. All of this only to see a summary of your passing and failing tests. It makes no sense.

While this way of testing was sufficient when we only had unit tests, it falls predictably short with current software engineering needs that emphasize integration with other services. Suites of integration tests are now a must-have to reduce the risk of deploying bugs into production. As a matter of fact, Lean Testing, a modern testing philosophy, questions the validity of the traditional testing pyramid and instead advocates for more integration tests, and fewer unit tests — resulting in the “Testing Trophy”.

Testing trophy by Kent C. Dodds.

xUnit only provides superficial insights

XML test reports are insufficient for understanding what is happening under the hood. Yes, the reports show when a test has failed, but developers won’t know the type of test (unit? integration? benchmark? end-to-end? other?) and, if it is an integration test, which services it integrates with, which versions are running, or in which environment. Is a test failing due to a code change or a config change? Or is it failing because of a dependency? With today’s XML reports, we simply cannot answer these and many other questions. Consequently, developers end up spending more time trying to understand the problem than fixing it.
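The gap is easy to illustrate. Below, the first dictionary holds the fields a typical JUnit-style `<testcase>` element can express; the second holds the kind of context a developer actually needs to triage a failing integration test. All field names in the second dictionary are hypothetical, invented for this sketch:

```python
# Hypothetical illustration: triage context for a failing integration
# test vs. the fields a typical xUnit XML report actually records.

# What a JUnit-style <testcase> element can express:
xunit_record = {
    "classname": "PaymentTest",
    "name": "test_charge_card",
    "time": 1.82,
    "failure": "ConnectionError: payment-gateway unreachable",
}

# What you need to answer "code change, config change, or dependency?"
# (all field names below are made up for illustration):
needed_context = {
    **xunit_record,
    "test_type": "integration",          # unit? integration? e2e?
    "services_touched": ["payment-gateway", "ledger"],
    "dependency_versions": {"payment-gateway": "2.4.1"},
    "environment": "staging-eu",
    "triggering_change": "config",       # code vs. config vs. dependency
}

# Everything in this list is invisible to an xUnit XML report:
missing = sorted(set(needed_context) - set(xunit_record))
print(missing)
# ['dependency_versions', 'environment', 'services_touched',
#  'test_type', 'triggering_change']
```

None of the fields in `missing` have a home in any common xUnit report schema, which is exactly why triage takes longer than the fix.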

Observability in testing: a must-have in a cloud native world

By leveraging observability patterns in our integration tests, we can not only provide reliable information about which services are touched by our tests, but also start executing controlled tests in production. Modern production observability tools give developers the visibility they need to understand complex systems, so why wait until production to understand how our applications are behaving? Given the choice, every developer would much rather catch a bug before it ships to production, where the cost of fixing it is always much greater. Yet developers lack the tools to do this efficiently with today’s modern applications.

It’s now clear that with the proliferation of containers, serverless, Kubernetes, and cloud native architectures, both the way we develop and the applications we develop have changed. But developers are struggling to properly test and debug these new applications with the tools currently at their disposal. Testing and testing frameworks need to evolve to better understand what is happening in our tests. As is, current methods for debugging integration tests are unreliable, time-consuming, require a high level of expertise, and don’t always lead to resolution.

Testing is a core competency to build great software. But testing has failed to keep up with the fundamental shift in how we build applications. Scope gives engineering teams production-level visibility on every test for every app — spanning mobile, monoliths, and microservices.

Your journey to better engineering through better testing starts with Scope.
