Introduction
Paul Merrill asked a very interesting question on Twitter last week.
I decided that, as a new Software Test Engineer, I wanted to push myself and try to answer that question, drawing on my experience as a marketer for the last part of the problem.
About that...
Paul says this in one of his replies.
Paul Merrill (@dpaulmerrill), replying to @garyst1981: "Fixing assumes a problem. It also narrows the scope of what I'm after to solutions for a problem. Many times a slow running test case tells us something, but it's not always a problem and it's not always something we can fix." (13:39 · 04 Aug 2020)
And then we get into a whole discussion about business priorities.
But why is it important that tests are (as) fast as they can (realistically) be?
- The whole development process isn't slowed down waiting for the tests to complete.
- Running tests isn't skipped because they're too slow.
- Developers aren't tempted to context switch to other tasks while they wait; ideally they get feedback within 5 minutes. (A full suite of regression or integration tests may take longer.)
Answering the question the first time round
First, I did what the applicants were doing: I immediately listed a set of reasons why the tests could be running at sub-optimal speed, which I'll get into later.
Then I realised what I was doing, and took a step back.
I looked through the other responses to Paul's question, and his replies to them, to gauge what content he was looking for in an answer. The ones I found useful in structuring my response are below.
Sam Connelly (@bughuntersam): "The interviewee is nervous. People tend to jump into solution mode and don't practice talking about their thinking process. Walk 'em through that journey.
Try:
Have you had slow tests before? How did you know they were slow? Did you try some things that didn't work?" (05:41 · 05 Aug 2020)
Graham Ellis (@grhmellis): "I'd ask you to define what you mean by 'slow'. It's a subjective word. Is the expectation that the tests run to a pre-determined baseline? This applies to both functional / non-functional. This applies to both presentation layer / service / integration / unit." (13:53 · 04 Aug 2020)
Ben Oconis (@benoconis): "It's fair. I would say 'A test is running at non-optimal levels. If you were asking your team to look at the test, what would you have them do?' They should ask what's meant by non-optimal and lay out their process, including seeing if it's slow, poorly written, can be broken up, etc." (14:20 · 04 Aug 2020)
Then, I put together a framework of how to answer the question, with some realism from how I approached the problem initially.
Replying to @dpaulmerrill: "So I guess I start with the assumption that slow means sub-optimal, because I forgot you can ask the interviewer clarifying questions. Then I talk about how I troubleshoot to see if the issue is any of the above or others, and reasons why it can/cannot be fixed." (14:28 · 04 Aug 2020)
In this blog post, I want to flesh out that framework, but this time, I ask questions first. Some of the questions I would ask are addressed in the below tweet.
@dpaulmerrill From my perspective, it seems you want the interviewee to display knowledge of....
- the types of slow (sub-optimal, all, some, etc)
- reasons tests can be slow (as above, etc)
- how to troubleshoot tests to determine cause(s) of the slow
- reasons slow tests can/cannot be fixed
(15:01 · 04 Aug 2020)
Let's get started.
What is slow?
Does this apply only to functional tests, or to non-functional tests as well?
How many test cases? Is it one? Is it a stack of them? Is it all of them?
Does it apply to presentation layer, service, integration, unit?
Are there just too many tests? Do they all need to be run when they’re run?
Is it the tests that are slow, or are they slow because the code is slow, or has testability issues?
It could also be the configuration, for example, the test fixture, or the machine (the test environment).
How do you define slow?
Is there a performance benchmark set for the speed of the tests?
What is the purpose of the tests? Are they smoke tests? Are they regression tests?
How much slower is it than the benchmark?
How do you find out that a test case is slow?
Is there historical data from previous runs of the test?
Is it easily sortable by time taken per test?
What code profiling tools are available?
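If the suite is in Python, pytest's built-in `--durations` flag (e.g. `pytest --durations=10`) will print the slowest tests for you. As a minimal sketch of the same idea, assuming you already have per-test timings from a previous run (the test names and durations below are invented for illustration):

```python
# Sort historical per-test timings to surface the slowest cases first.
def slowest_tests(timings, top_n=3):
    """Return the top_n (name, seconds) pairs, slowest first."""
    return sorted(timings.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Made-up timing data standing in for a real test report.
timings = {
    "test_login": 0.4,
    "test_checkout_e2e": 42.7,
    "test_search": 1.3,
    "test_report_export": 18.9,
}

for name, seconds in slowest_tests(timings):
    print(f"{name}: {seconds:.1f}s")
```

Having this data sorted per test, rather than just a total suite time, is what turns "the tests feel slow" into "these three tests account for most of the runtime".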
How do you find out why a test case is slow?
By looking at the code for the tests, the code that it's testing, the environment it’s running in, and various configuration settings and scripts.
If there are multiple test cases that are slow, what do they have in common? Do they have something interesting in common?
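One way to hunt for commonalities is to tag each slow test with some metadata and count which attributes recur. A minimal sketch, where the test names and attribute fields are invented for illustration:

```python
from collections import Counter

# Hypothetical metadata for tests already flagged as slow.
slow_tests = [
    {"name": "test_checkout_e2e", "layer": "e2e", "uses_ui_setup": True},
    {"name": "test_refund_e2e", "layer": "e2e", "uses_ui_setup": True},
    {"name": "test_orders_report", "layer": "integration", "uses_ui_setup": False},
]

# Count how often each (attribute, value) pair appears among the slow tests.
common = Counter(
    (key, value)
    for test in slow_tests
    for key, value in test.items()
    if key != "name"
)

for (key, value), count in common.most_common(2):
    print(f"{count} slow tests share {key}={value}")
```

If most of your slow tests turn out to share something like `uses_ui_setup=True`, that shared trait is a much better lead than debugging each test in isolation.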
Here are some of the reasons test cases can be slower than expected, with more to be added as I understand them.
All tests
Is the test data created smartly? How is it created and cleaned up? Is it cleaned up at all? Are there race conditions? Are there tests that aren't atomic? You might even find performance tests running FASTER than they should because they're reusing already-created data.
UI tests
If your E2E tests use XPath locators when they could use id, name, or even CSS selectors, it's the tests themselves that are slow. I find that using accessibility IDs, which are very unlikely to change, makes UI test code last longer without false negatives creeping in over time.
Perhaps there are tests run as E2E tests when they would be better as unit, integration, or API tests, which run faster than E2E tests.
Are the tests using generic wait statements where they could be using conditional waits? Are there unnecessary waits in the tests?
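The difference can be sketched without any UI framework: a generic `time.sleep(5)` always pays its full cost, while a conditional wait polls and returns as soon as the condition holds. (Selenium's `WebDriverWait` does this for real elements; the helper below is a hypothetical stand-in.)

```python
import time

def wait_until(condition, timeout=5.0, poll=0.05):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)
    return False

# Simulate a condition that becomes true after ~0.1 seconds.
ready_at = time.monotonic() + 0.1
start = time.monotonic()
became_ready = wait_until(lambda: time.monotonic() >= ready_at)
elapsed = time.monotonic() - start

print("ready:", became_ready, f"after {elapsed:.2f}s")
```

Here the conditional wait returns in roughly a tenth of a second; a fixed five-second sleep in its place would cost the full five seconds on every run of every test that uses it.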
Are you using your UI to build up state? It's faster to hit the DB & API.
Is the specific action/event itself taking longer than the benchmark to load?
DB tests
- Are there long-running queries? Are there inefficient queries?
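A quick way to check a suspect query is to ask the database for its plan. A minimal sketch using SQLite's `EXPLAIN QUERY PLAN` (other databases have equivalents, e.g. `EXPLAIN ANALYZE` in PostgreSQL; the table and index names below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

def plan(sql):
    """Return SQLite's query-plan description for a statement."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(row[-1] for row in rows)

query = "SELECT * FROM users WHERE email = 'a@example.com'"

before = plan(query)  # full table scan, e.g. "SCAN users"
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan(query)   # index lookup, e.g. "SEARCH users USING INDEX ..."

print("before:", before)
print("after:", after)
```

If the plan shows a full scan where an index lookup was possible, it may be the schema or the query, not the test, that needs fixing.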
When slow tests aren't a problem
From my admittedly limited perspective: when total test time still comes in under 2 minutes for smoke tests. This of course does not apply to life-or-death software like medical administration systems, medical devices, or vehicle AI.
From a business perspective, they're not a problem if they aren't blocking the engineering team yet.
How do you fix slow-running tests?
The first question to ask once your troubleshooting is complete: is this a problem that can be fixed, or one that can't?
This really depends on business priorities, budget constraints, and resource capacity.
How long would it take to refactor or debug?
Is it significantly longer than the collective time that will be spent waiting for the tests to run?
If not, is there an educational benefit?
Does the irritation customers feel from slow-loading or inaccessible software have the potential to become a reputational or revenue-hitting issue?
Past a certain point of optimisation, I think the team as a whole has to agree on when to move on. Or, to put it a better way, set a goal for how testing in the organisation should work, and aim to reach that goal.
The idea is that the tests as a whole run fast enough to help meet business requirements and achieve a satisfying user experience.
When slow tests are a problem we can fix
Liquibase has a great article on how to make automated tests faster, where it's the tests themselves that are an issue.
For everything else, you'd need to talk to the development team about adding it onto their agenda. As a baby test automation engineer, I am assuming any changes would either factor into refactoring time, or bug-fixing time, depending on the severity of the issue.
Comments
Interesting post which raises a very important point.
I think the specific points about 'slow' being subjective and defining the purpose of the test are important. It can depend on a lot of factors and there is no definitive answer.
Absolutely. I saw a comment while writing this that not all unit tests take 0.0001 seconds. If you're a browser engineer testing hundreds of thousands of CSS permutations, some take 5 seconds even after optimisation. It's a rare case, but it most certainly does happen.