Introduction
If you've ever created automated tests for your project, you’re probably familiar with Jest. It’s no coincidence: Jest is one of the most widely used testing tools in the industry and is highly regarded within the developer community.
However, regardless of the technology, poorly optimized code will perform poorly. And if you’re working on a large project, the simple act of running tests can take as long as making a cup of coffee (or several).
I have nothing against your coffee break (I'm enjoying one while writing this article), but lately I've dived headfirst into a mission: making tests run as fast as possible.
Let’s get to the code!
The Chaotic Scenario
To illustrate this, I'll present a chaotic scenario: an API written in TypeScript using Fastify as the web server. The tests have an optimization problem, and Jest has no configuration beyond the basics to run the tests.
```typescript
// slow jest config
import { type Config } from '@jest/types';

const config: Config.InitialOptions = {
  preset: 'ts-jest',
  transform: {
    '^.+\\.tsx?$': 'ts-jest'
  },
  testMatch: [
    '**/slow.*'
  ]
};

export default config;
```
You might notice that Jest relies on another library, ts-jest, to run our TypeScript tests. This is already our first point of improvement. Additionally, we can better define our target: where Jest will look for our tests.
As for the tests, I created a file with 300 identical tests.
```typescript
it('should pass', async () => {
  // A brand-new Fastify instance is built for every single test
  const app = await buildServer();

  const response = await app.inject({
    method: 'GET',
    url: '/check',
  });

  expect(response.statusCode).toEqual(200);
  expect(response.json()).toEqual({ hello: 'world' });
});
```
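For context, buildServer is just a small helper that assembles the Fastify app with the /check route used above. A minimal sketch of it could look like this; the exact implementation doesn't matter for the benchmark:

```typescript
// Minimal sketch of a buildServer helper: a bare Fastify app with the /check route.
// A real helper may register plugins, schemas, etc.; this is only the shape assumed here.
import Fastify, { type FastifyInstance } from 'fastify';

export async function buildServer(): Promise<FastifyInstance> {
  const app = Fastify();

  // The route the tests call through app.inject()
  app.get('/check', async () => ({ hello: 'world' }));

  // Make sure all routes/plugins are loaded before the tests use the instance
  await app.ready();

  return app;
}
```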
Our deliberate mistake here is that each test recreates the Fastify instance for every execution, resulting in 300 instances. You might think this is a very specific case, but it actually serves to illustrate problems that can arise in various forms and cause slowdowns, such as:
- Instances being recreated (like in this example)
- Database modifications
- Mocks
- Unnecessary validations
Optimizing Jest
Let’s add some configurations.
- cache: Enabled by default, but worth keeping explicit; Jest stores transformed files on disk so subsequent runs skip re-transpiling unchanged modules.
- maxWorkers: Defines how many workers Jest spawns to run tests in parallel; here it's set to 50% of the available CPU cores.
- testEnvironment: Sets the execution environment, which in this case is "node".
- testPathIgnorePatterns: Tells Jest not to look for tests inside node_modules and src.
Our star here is @swc/jest. Unlike ts-jest, which transpiles (and by default type-checks) your code with the TypeScript compiler, @swc/jest uses SWC, a compiler written in Rust that only strips types and transforms the files to JavaScript. Skipping type-checking and running native code makes it significantly faster, and the difference grows with the size of the codebase.
Additionally, we configure a very specific path in testMatch so Jest finds our test quickly. I know this isn't a real-world case, but you could also change it to something like `**/*.test.ts` so that files with other extensions are ignored.
```typescript
// Fast jest config
import { type Config } from '@jest/types';

const config: Config.InitialOptions = {
  cache: true,
  maxWorkers: '50%',
  testEnvironment: 'node',
  testPathIgnorePatterns: [
    'node_modules',
    'src'
  ],
  transform: {
    '^.+\\.tsx?$': '@swc/jest'
  },
  testMatch: [
    '**/slow.test.ts'
  ]
};

export default config;
```
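A side note: if your codebase needs extra parser or transform settings (decorators are the classic example), @swc/jest accepts SWC options directly in the transform entry. The snippet below is only an illustrative sketch of that shape; the plain '@swc/jest' string above is enough for this project.

```typescript
// Illustrative only: passing SWC options inline to @swc/jest.
// The decorator flags below are assumptions for projects that need them.
import { type Config } from '@jest/types';

const config: Config.InitialOptions = {
  transform: {
    '^.+\\.tsx?$': [
      '@swc/jest',
      {
        jsc: {
          parser: { syntax: 'typescript', decorators: true },
          transform: { legacyDecorator: true, decoratorMetadata: true },
        },
      },
    ],
  },
};

export default config;
```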
After making these changes, let's compare both Jest configurations by running our tests, which still have the optimization problem.
Attempt | Slow Jest | Fast Jest |
---|---|---|
1 | 3.728s | 2.939s |
2 | 3.701s | 2.982s |
3 | 3.687s | 2.904s |
4 | 3.763s | 2.934s |
5 | 3.730s | 2.911s |
Averages:
- Slow Jest: 3.722s
- Fast Jest: 2.934s
We achieved an improvement of 21.17% with our optimized Jest. Now let's fix the problem in our test.
Optimizing the Test
As mentioned earlier, our case isn’t too complex. What we’ll do here is move the instance creation to beforeAll and reuse it across all tests.
The important point is that different projects create similar situations in different shapes, and it's up to you to develop the critical eye to identify and correct these flaws.
```typescript
import { type FastifyInstance } from 'fastify';

describe('fast', () => {
  let app: FastifyInstance;

  // Build the Fastify instance once and share it across all tests in this block
  beforeAll(async () => {
    app = await buildServer();
  });

  it('should pass', async () => {
    const response = await app.inject({
      method: 'GET',
      url: '/check',
    });

    expect(response.statusCode).toEqual(200);
    expect(response.json()).toEqual({ hello: 'world' });
  });
});
```
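One extra touch worth considering: if the shared instance opens real resources (database connections, sockets), closing it after the suite keeps the worker from hanging. A small, optional addition, assuming the same app variable from the block above:

```typescript
// Optional cleanup for the shared instance created in beforeAll
afterAll(async () => {
  await app.close();
});
```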
With the problem fixed, let's run the tests again.
Attempt | Slow Jest | Fast Jest |
---|---|---|
1 | 3.318s | 2.593s |
2 | 3.316s | 2.552s |
3 | 3.302s | 2.538s |
4 | 3.328s | 2.567s |
5 | 3.278s | 2.566s |
Averages:
- Slow Jest: 3.3084s
- Fast Jest: 2.5632s
Now we see an improvement in the timings. Comparing the average of our worst case (3.722s) with the best current case (2.5632s), we have an improvement of 31.1%.
Can we improve further?
If you can't swap Jest for another tool, this improvement alone is already excellent. But we can go further by using Vitest.
Expanding Horizons with Vitest
If you've never heard of Vitest, here's the short version: it's a testing framework with a Jest-compatible API, powered by Vite and its esbuild-based transforms. You hardly need to refactor tests written for Jest, and the configuration file is very simple and, in our case, optional.
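Even so, a minimal vitest.config.ts for this setup could look something like the sketch below; the globals flag and the include pattern are just sensible choices for this example, nothing mandatory:

```typescript
// Minimal vitest.config.ts sketch for this project
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // Same runtime the Jest setup uses
    environment: 'node',
    // Expose describe/it/expect globally, Jest-style, so tests need no extra imports
    globals: true,
    // Assumed file name for the corrected test
    include: ['**/fast.test.ts'],
  },
});
```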
In short, I’ll run only the corrected test and compare the results.
Attempt | Slow Jest | Fast Jest | Vitest |
---|---|---|---|
1 | 3.287s | 2.575s | 1.690s |
2 | 3.295s | 2.564s | 1.770s |
3 | 3.279s | 2.589s | 1.711s |
4 | 3.311s | 2.555s | 1.708s |
5 | 3.291s | 2.567s | 1.757s |
Averages:
- Slow Jest: 3.293s
- Fast Jest: 2.570s
- Vitest: 1.727s
Comparing with our best Jest time (2.570s), Vitest is roughly 33% faster. And when we look back at the original, unoptimized setup (3.722s), the gain is over 50%.
I hope you enjoyed this! Feel free to leave suggestions and share it with your friends and colleagues.