In React, a common way to update an object is to copy it with the spread operator. The syntax is simple and easy to read.
While the spread operator is convenient, it can be less performant than a traditional loop: every use of it creates a new array or object, which gets expensive when working with large arrays or objects.
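As a quick reminder of why this is true (a minimal sketch): every spread allocates a brand-new object and shallow-copies all existing properties into it.
// Spreading always allocates a new object and copies every existing property
const original = { a: 1, b: 2 }
const updated = { ...original, c: 3 }
console.log(updated)              // { a: 1, b: 2, c: 3 }
console.log(original === updated) // false — a brand-new object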
Let's consider an example: we have an input array of objects, and we need to create a new object of key-value pairs from it.
const { faker } = require('@faker-js/faker')
// generate fakeData
const fakeData = []
for (let i = 0; i < 5; i++) {
  fakeData.push({ key: faker.datatype.uuid(), name: faker.address.cityName() })
}
/* fakeData:
[
  { key: '0bc9a57c-7fd1-449a-8d98-6396f722535a', name: 'Abilene' },
  { key: '2ac57365-bc80-45a1-8033-9efd33de4a52', name: 'Aloha' },
  { key: 'a7d64eaa-0202-4c18-ade1-f43b0853c29c', name: 'Johns Creek' },
  { key: '129a89a6-490a-48b1-9394-7d143926e7d0', name: 'Chicopee' },
  { key: '2d606536-7727-496d-bbee-9663b89f40b9', name: 'Covina' }
]
*/
// generate an object with reduce method
const object = fakeData.reduce((acc, { key, name }) => {
  return { ...acc, [key]: name }
}, {})
/* object:
{
  '83e12032-7558-467e-b840-ead992754df4': 'Jackson',
  '4fe2ce86-b202-4891-8b2f-7fa154b4b448': 'Idaho Falls',
  'de1d95c0-3c25-4b8c-9e1d-a8bc20409d45': 'El Centro',
  'b54dd7d7-b021-4fc5-9de6-633ee4e240bf': 'Fort Pierce',
  'dbee592f-79a0-461a-b477-40ae53f0ff53': 'Palm Springs'
}
*/
If you run this code, it completes almost instantly. The hidden problem, however, is that on every iteration of the reduce loop the entire accumulator object is copied in full.
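A rough back-of-the-envelope sketch (the counter below is purely illustrative, not part of the benchmark) shows how quickly that copying adds up: on iteration i the accumulator already holds i keys, all of which get copied, so the total work grows quadratically with the array length.
// Estimate of property copies done by the spread-based reduce:
// iteration i re-copies the i keys already in the accumulator
function estimateCopies(n) {
  let copies = 0
  for (let i = 0; i < n; i++) copies += i
  return copies // equals n * (n - 1) / 2
}
console.log(estimateCopies(5))    // 10
console.log(estimateCopies(5000)) // 12497500 — about 12.5 million copies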
Let's add some code to measure the function's execution time, and increase the size of the array:
...
function funcSpread() {
  return fakeData.reduce((acc, { key, name }) => {
    return { ...acc, [key]: name }
  }, {})
}
console.time('execution time')
funcSpread()
console.timeEnd('execution time')
I got the following numbers:
- For an array length of 5, the execution time was 0.05ms
- For an array length of 1000, the execution time was 115ms
- For an array length of 5000, the execution time was 2.4 seconds!
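For reference, here is one way to collect such numbers (a sketch; the generateFakeData helper is hypothetical and simply wraps the generation loop shown above):
// Hypothetical helper wrapping the fakeData generation loop from above
function generateFakeData(n) {
  const data = []
  for (let i = 0; i < n; i++) {
    data.push({ key: faker.datatype.uuid(), name: faker.address.cityName() })
  }
  return data
}

// Time the spread-based reduce for several array sizes
for (const size of [5, 1000, 5000]) {
  const data = generateFakeData(size)
  console.time(`spread, n=${size}`)
  data.reduce((acc, { key, name }) => ({ ...acc, [key]: name }), {})
  console.timeEnd(`spread, n=${size}`)
}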
Such execution times are unlikely to be acceptable for either frontend or backend applications. To speed the code up, we will have to abandon the principle of immutability and simply mutate the existing object instead of copying it on every iteration.
function funcMutate() {
  return fakeData.reduce((acc, { key, name }) => {
    acc[key] = name
    return acc
  }, {})
}
This function processes an array of 5000 objects in just 2.5ms, roughly 1000 times faster! The difference is asymptotic: mutation does a constant amount of work per item, while spreading re-copies the entire accumulator at every step.
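Both functions produce the same plain object; a quick sanity check (a sketch — keep in mind that funcSpread itself takes a couple of seconds at this array size) confirms it:
// Sanity check: the mutating version yields the same result as the spread version
const fromSpread = funcSpread()
const fromMutate = funcMutate()
console.log(JSON.stringify(fromSpread) === JSON.stringify(fromMutate)) // true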
A slightly slower but still quite fast solution for this task is the _.set method from the popular lodash library:
const _ = require('lodash')
function funcLodash() {
  return fakeData.reduce((acc, { key, name }) => {
    return _.set(acc, key, name)
  }, {})
}
The execution time was 4.6ms, which is also quite fast. Keep in mind, though, that _.set mutates the object passed to it, so this approach abandons immutability as well.
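A tiny sketch to make the mutation visible: _.set returns the very same object it received, not a copy.
// _.set mutates in place and returns the same reference
const target = {}
const returned = _.set(target, 'a', 1)
console.log(returned === target) // true
console.log(target)              // { a: 1 }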
Let's consider other ways to solve this problem while still adhering to the principle of immutability.
First, I tried using Ramda:
const R = require('ramda')
function funcRamda() {
  return fakeData.reduce((acc, { key, name }) => {
    return R.assoc(key, name, acc)
  }, {})
}
The execution time was 1.4 seconds: still slow, but faster than the spread operator. That is expected, since R.assoc also creates a shallow copy of the accumulator on every call.
The next popular library is Immutable.js. It works with its own data structures, so here we deviate even further from the purity of the experiment: we will build a Map from Immutable.js instead of a plain object.
const { Map } = require('immutable')
function funcImmutable() {
  return fakeData.reduce((acc, { key, name }) => {
    return acc.set(key, name)
  }, Map({}))
}
The result was 13ms, which is very close to our best result. The bottleneck of Immutable.js is usually considered to be the conversion of the resulting data back into a regular object. In this case, however, it had very little impact on overall performance:
const mapData = funcImmutable()
const object = mapData.toObject()
With the conversion included, the result is essentially the same. Immutable data can be very fast!
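The speed comes from structural sharing: set does not copy the whole Map, it returns a new Map that reuses almost all of the previous one. A tiny sketch:
// Immutable.js Map.set never mutates: it returns a new Map sharing structure
const m1 = Map({ a: 1 })
const m2 = m1.set('b', 2)
console.log(m1.toObject()) // { a: 1 } — the original is untouched
console.log(m2.toObject()) // { a: 1, b: 2 }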
The last option is Immer. First, let's try the straightforward approach:
const { produce } = require('immer')
function funcImmer() {
  return fakeData.reduce((acc, { key, name }) => {
    return produce(acc, draft => { draft[key] = name })
  }, {})
}
...and we get a catastrophic 19 seconds! Each produce call wraps the accumulator in a proxy and finalizes a fresh copy, and doing that per item multiplies the overhead by the array length. But if we wrap the entire function in produce, rather than each step, we can achieve a significant improvement:
function funcImmer2() {
  return produce({}, draft => {
    fakeData.forEach(({ key, name }) => {
      draft[key] = name
    })
  })
}
The result is 8.5ms. This is still slower than funcMutate or funcLodash, but quite close.
Conclusions?
It is unlikely that any sweeping practical conclusions can be drawn from this mini-experiment. Still: the spread operator is quite slow and, when dealing with large objects or arrays, can become a bottleneck in your application.
Significant performance improvements can be achieved by abandoning the principle of immutable data. If that option does not suit you, Immer or Immutable.js can be a good solution.