Discussion on: 100% test coverage is not enough...

Michiel Hendriks (elmuerte)

100% coverage is useless. 80%-90% coverage by unit tests is generally good enough; above that you are starting to waste time. Time better spent on other ways of testing, like the ones this article mentions.

Besides these forms of tests:

  • Unit
  • Integration
  • End-to-End

There are also other important tests which are often forgotten:

  1. Load/stress tests: to find out how well the system behaves under high load. They build upon integration or e2e tests, but with 2-3 times the peak volume you would normally expect. It is a test of scalability.
  2. Endurance tests: to find out if your system keeps running for days, weeks, months. Similar to a load test, but instead of stressing the system you want to find out whether it remains stable while processing a peak load for a day, or a week. This helps you find resource leaks or performance degradation. You may also want to build in some load trends, like the usual sine from peak load to none and back to peak again, and maybe add a long pause in the middle of every third peak, to test going from 100% to 0% and back to 100%; this quite often has side effects (see the sketch after this list).
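
As an illustration of such a load trend (a sketch, not from this discussion; the peak rate and cycle length are made-up numbers):

```java
// Illustrative endurance-test load profile: a sine between no load and
// peak load, with a hard pause in the middle of every third peak.
public class LoadProfile {
    static final double PEAK_MSGS_PER_SEC = 100.0; // assumed peak load
    static final double CYCLE_HOURS = 24.0;        // one full load cycle

    /** Target message rate after the given number of hours of test time. */
    static double targetRate(double hours) {
        int cycle = (int) (hours / CYCLE_HOURS);
        double phase = (hours % CYCLE_HOURS) / CYCLE_HOURS; // 0..1 in cycle

        // Every 3rd cycle: a hard pause in the middle of the peak,
        // to exercise the 100% -> 0% -> 100% transition.
        if (cycle % 3 == 2 && phase > 0.48 && phase < 0.52) {
            return 0.0;
        }
        // The usual sine between none and peak load over one cycle.
        return PEAK_MSGS_PER_SEC * 0.5 * (1.0 - Math.cos(2.0 * Math.PI * phase));
    }
}
```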

Shahjada Talukdar (destro_mas)

Thanks for your views! 👋
Yeah, I fully agree about load tests and the rest.

Those are definitely important and often forgotten.
This article was mostly written as a first step, covering the initial ground.
I will try to continue writing more about the other approaches in detail.

What kind of tools do you normally use for load or endurance testing?

Michiel Hendriks (elmuerte)

Which tools are available really depends on the software under test. Tools like JMeter or Gatling are good if you can produce load with a rather simple sequence of HTTP calls.
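
For that simple-HTTP case, a minimal Gatling simulation using its Java DSL might look like this; the base URL, endpoint, and load figures are placeholders:

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import java.time.Duration;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class SimpleHttpLoadSimulation extends Simulation {

    // Hypothetical system under test.
    HttpProtocolBuilder httpProtocol = http.baseUrl("http://localhost:8080");

    // A rather simple sequence of HTTP calls.
    ScenarioBuilder scn = scenario("simple http load")
            .exec(http("get status").get("/status").check(status().is(200)));

    {
        setUp(scn.injectOpen(
                rampUsersPerSec(1).to(50).during(Duration.ofMinutes(2)),
                constantUsersPerSec(50).during(Duration.ofMinutes(10))))
                .protocols(httpProtocol);
    }
}
```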

Taurus is an interesting "wrapper" around a bunch of test frameworks to turn them into load tests.

The software I work on has rather complex asynchronous "conversations" which mostly happen via message queues. I was able to create an integration test in JMeter, but it was unsuitable for load testing because the test framework itself was the bottleneck. So I need to develop our own tooling. For this I am planning to use Apache Camel, and maybe re-use parts of the Citrus Framework (we are a Java shop).
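
To give an idea of what such Camel-based tooling could look like, here is a minimal sketch assuming an ActiveMQ broker on localhost and made-up queue names (not the actual tool):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class QueueLoadPump {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // Assumed broker location; queue names below are hypothetical.
        context.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(
                new ActiveMQConnectionFactory("tcp://localhost:61616")));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Fire a message every 10 ms, i.e. roughly 100 msg/s of load.
                from("timer:load?period=10")
                        .setBody(constant("<startProcess/>"))
                        .to("jms:queue:process.in");

                // Shallow verification: just count the replies per interval
                // instead of inspecting each one in detail.
                from("jms:queue:process.out")
                        .to("log:replies?groupInterval=10000");
            }
        });

        context.start();
        Thread.sleep(10 * 60 * 1000L); // run the load for 10 minutes
        context.stop();
    }
}
```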

So even if there is no tool available for your software which can produce significant load, you can often reuse parts of other integration testing frameworks. Most integration frameworks are not geared towards running the same tests over and over again in multiple threads, but there is a chance you can create a wrapper around such a framework which will do exactly that (like Taurus has done).

Note that with load and endurance testing you do not have to fully validate everything. Shallow verification is often sufficient; you do not want the verification to slow down the test tool. Detailed validation should already have been covered by the integration tests. If the tooling supports it, you could do sampled detailed validation (e.g. every 1000th run is verified in greater detail), as in the sketch below.
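
A sketch of that sampling idea (the class and the checks are hypothetical, not tied to any particular framework):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: cheap shallow check on every result,
// expensive detailed validation only on every 1000th run.
public class SampledValidator {
    private final AtomicLong counter = new AtomicLong();

    public void validate(String message) {
        // Shallow verification: a cheap sanity check only.
        if (message == null || message.isEmpty()) {
            throw new AssertionError("empty response");
        }
        // Detailed validation on a sample, so it does not slow the load tool.
        if (counter.incrementAndGet() % 1000 == 0) {
            deepValidate(message);
        }
    }

    private void deepValidate(String message) {
        // Placeholder for the expensive checks (e.g. full schema validation)
        // that are otherwise covered by the integration tests.
    }
}
```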

Shahjada Talukdar (destro_mas)

As you said you are a Java shop: do you mostly implement APIs and load test only the REST APIs, or do you also test the whole web app end-to-end?

Michiel Hendriks (elmuerte)

It's message-oriented enterprise software for business process automation. So the load tests we do go via message queues for specific business processes; we don't really have REST APIs. It involves complex data structures and somewhat long, complex transactions, often several sequential asynchronous transactions which eventually produce an output in the form of a message on a different queue.

So the load testing takes the form of pumping messages onto a queue, with a different process reading messages from another queue, which is needed to send a follow-up message. This happens a few times to complete a whole "operation". Not the easiest thing to load test.
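
A rough sketch of what one step of such a pumped "conversation" can look like in plain JMS; the queue names, payloads, and correlation-ID matching are assumptions for illustration, not our actual tooling:

```java
import java.util.UUID;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

// Hypothetical driver for one asynchronous "conversation":
// send a message, await the reply on another queue, send the follow-up.
public class ConversationDriver {
    public void runOnce(Session session, Queue in, Queue out, Queue followUpIn)
            throws JMSException {
        String conversationId = UUID.randomUUID().toString();

        // Step 1: start the business process.
        MessageProducer producer = session.createProducer(in);
        TextMessage start = session.createTextMessage("<startProcess/>");
        start.setJMSCorrelationID(conversationId);
        producer.send(start);

        // Step 2: wait for the asynchronous result on the output queue
        // (assumes the system echoes the correlation ID).
        MessageConsumer consumer = session.createConsumer(
                out, "JMSCorrelationID = '" + conversationId + "'");
        Message reply = consumer.receive(30_000); // timeout in ms
        if (reply == null) {
            throw new IllegalStateException("no reply for " + conversationId);
        }

        // Step 3: send the follow-up message that continues the operation;
        // a real conversation repeats steps 2-3 a few more times.
        TextMessage next = session.createTextMessage("<continueProcess/>");
        next.setJMSCorrelationID(conversationId);
        session.createProducer(followUpIn).send(next);
    }
}
```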