It's usually described by scalability and usability. However, there are two more important criteria we should consider: the system's performance and the method we use to test it.
- Throughput: the rate at which a system can process information.
- Latency: the duration between the initiation of an action and its result.
Those two together represent the system's responsiveness, even though they are not necessarily related.
From these two criteria we can see that our testing process is essentially measuring throughput and latency and comparing them against threshold values we see fit.
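As a minimal sketch of that process, the snippet below times a hypothetical operation (`doWork` is an illustrative stand-in, not from the original text), derives average latency and throughput from the elapsed time, and compares latency against an arbitrary threshold:

```java
public class PerfCheck {
    // Hypothetical operation under test; stands in for real system code.
    static long doWork(long x) {
        return x * 31 + 7;
    }

    public static void main(String[] args) {
        final int iterations = 1_000_000;

        long start = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < iterations; i++) {
            sink += doWork(i); // accumulate so the JIT cannot drop the call
        }
        long elapsedNanos = System.nanoTime() - start;

        // Latency: average time per operation; throughput: operations per second.
        double avgLatencyNanos = (double) elapsedNanos / iterations;
        double throughputPerSec = iterations / (elapsedNanos / 1e9);

        System.out.printf("avg latency: %.1f ns, throughput: %.0f ops/s (sink=%d)%n",
                avgLatencyNanos, throughputPerSec, sink);

        // Compare against thresholds we see fit (these values are illustrative).
        if (avgLatencyNanos > 10_000) {
            throw new AssertionError("latency requirement not met");
        }
    }
}
```

Note that a naive loop like this is exactly the kind of measurement the variables below can distort, which is why the rest of the article matters.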
Many variables can affect our system's performance test:
- The test network traffic (if it's run on the organization's network).
- The OS status (is it performing some housekeeping process?).
- The runtime environment status (is the JVM performing garbage collection?).
We need to take two things into consideration in order to perform informative performance tests:
- We should have a dedicated testing environment which is a clone of our production environment.
- We should control the processes running in this environment:
  - Network traffic.
  - Garbage collection, etc.
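On the JVM we cannot fully control garbage collection, but we can reduce its influence on a measurement. The sketch below (an assumption on my part, not a complete solution) warms up the hot path so the JIT compiles it first, then requests a collection before the timed run so one is less likely to land inside the measured window:

```java
public class ControlledRun {
    static long work(long x) {
        return x ^ (x << 13);
    }

    // Times `iterations` calls and returns the elapsed wall-clock nanoseconds.
    static long timedRun(int iterations) {
        long start = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < iterations; i++) {
            sink += work(i);
        }
        long elapsed = System.nanoTime() - start;
        if (sink == 42) System.out.println("unlikely"); // keep sink alive
        return elapsed;
    }

    public static void main(String[] args) {
        // Warm-up: let the JIT compile the hot path before measuring.
        timedRun(100_000);

        // Request (not guarantee) a GC, so a collection is less likely
        // to fall inside the measured window.
        System.gc();

        long elapsed = timedRun(1_000_000);
        System.out.printf("measured run: %d ns%n", elapsed);
    }
}
```

`System.gc()` is only a hint to the JVM; for serious measurements a harness such as JMH handles warm-up and isolation far more rigorously.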
Wait, what? Why would you want to test your tests?
Well, if the test harness is slow to start and initialize, that overhead gets added to the measured code's performance, and we would get failing tests for code that is actually fast.
How do we make sure our tests are fast enough so they don't make our actual code tests fail?
- We should first run the test against stub methods that do the bare minimum.
  - For example, returning hard-coded values.
- Then, only if the test passes (meets the throughput and latency requirements) do we plug in and test the actual code.
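The stub-first approach above can be sketched as follows. `PriceService`, its stub, and the thresholds are all hypothetical names chosen for illustration:

```java
public class HarnessCheck {
    // Hypothetical service interface under test.
    interface PriceService {
        long priceOf(String item);
    }

    // Stub: does the bare minimum, returning a hard-coded value,
    // so a timed run measures mostly the harness itself.
    static final PriceService STUB = item -> 42L;

    // The harness: times N calls and returns average latency in nanoseconds.
    static double measureAvgLatencyNanos(PriceService service, int iterations) {
        long sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += service.priceOf("widget");
        }
        long elapsed = System.nanoTime() - start;
        if (sink < 0) System.out.println(sink); // keep sink alive
        return (double) elapsed / iterations;
    }

    public static void main(String[] args) {
        double stubLatency = measureAvgLatencyNanos(STUB, 100_000);
        // If even the stub fails the requirement, the harness overhead is the
        // problem, not the code under test (the threshold is illustrative).
        if (stubLatency > 1_000) {
            throw new AssertionError("harness too slow: " + stubLatency + " ns/call");
        }
        System.out.printf("harness overhead: %.1f ns/call%n", stubLatency);
        // Only now would we plug in the real PriceService implementation
        // and rerun the same measurement against it.
    }
}
```

If the stub run passes, any later failure with the real implementation can be attributed to the code under test rather than to the harness.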