
Shi Ling for Uilicious

Posted on • Updated on • Originally published at uilicious.com

Javascript Array.push is 945x faster than Array.concat 🤯🤔

TL;DR

If you are merging arrays with thousands of elements, you can shave seconds off the process by using arr1.push(...arr2) instead of arr1 = arr1.concat(arr2). If you really need to go faster, you might even want to write your own implementation to merge arrays.
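In code, the two approaches look like this (a minimal sketch):

```javascript
// Merging by mutation (fast): push accepts multiple arguments,
// and the spread syntax expands arr2 into individual arguments.
const arr1 = [1, 2, 3];
const arr2 = [4, 5, 6];
arr1.push(...arr2);
console.log(arr1); // [1, 2, 3, 4, 5, 6]

// Merging by creating a new array (slow in a loop): concat copies
// both source arrays into a freshly allocated result.
let a = [1, 2, 3];
const b = [4, 5, 6];
a = a.concat(b);
console.log(a); // [1, 2, 3, 4, 5, 6]
```

Note that push mutates arr1 in place, while concat leaves both sources untouched and returns a new array.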

Wait a minute... how long does it take to merge 15,000 arrays with .concat...

Recently, we had a user complaining of a major slowdown in the execution of their UI tests on UI-licious. Each I.click, I.fill, or I.see command, which usually takes ~1 second to complete (including post-processing, e.g. taking screenshots), now took over 40 seconds, so test suites that usually completed in under 20 minutes took hours instead, severely limiting their deployment process.

It didn't take long for me to set up timers to narrow down which part of the code was causing the slowdown, but I was pretty surprised when I found the culprit:

arr1 = arr1.concat(arr2)

Array's .concat method.

In order to allow tests to be written using simple commands like I.click("Login") instead of CSS or XPATH selectors like I.click("#login-btn"), UI-licious uses dynamic code analysis to analyse the DOM tree and determine what and how to test your website based on semantics, accessibility attributes, and popular but non-standard patterns. The .concat operation was being used to flatten the DOM tree for analysis, but performed very poorly when the DOM tree was very large and very deep, which happened when our user recently pushed an update to their application that caused their pages to bloat significantly (that's a performance issue on their side, but that's another topic).
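As a rough illustration (not our actual engine code), flattening a node tree with push rather than concat might look like this, assuming a hypothetical { value, children } node shape:

```javascript
// Flatten a tree depth-first into `out`, appending each node's value
// with push instead of allocating a new array with concat on every visit.
function flattenTree(node, out = []) {
  out.push(node.value);
  for (const child of node.children) {
    flattenTree(child, out); // reuse the same result array throughout
  }
  return out;
}

const tree = {
  value: "root",
  children: [
    { value: "a", children: [{ value: "a1", children: [] }] },
    { value: "b", children: [] },
  ],
};
console.log(flattenTree(tree)); // ["root", "a", "a1", "b"]
```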

It took 6 seconds to merge 15,000 arrays that each had an average size of 5 elements with .concat.

What?

6 seconds...

For 15,000 arrays with the average size of 5 elements?

That's not a lot of data.

Why is it so slow? Are there faster ways to merge arrays?


Benchmark comparisons

.push vs. .concat for 10000 arrays with 10 elements each

So I started researching (by that, I mean googling) benchmarks for .concat compared to other methods to merge arrays in Javascript.

It turns out the fastest method to merge arrays is to use .push, which accepts n arguments:

// Push contents of arr2 to arr1
arr1.push(arr2[0], arr2[1], arr2[2], ..., arr2[n])

// Since my arrays are not fixed in size, I used `apply` instead
Array.prototype.push.apply(arr1, arr2)

And it is faster by leaps and bounds in comparison.

How fast?

I ran a few performance benchmarks on my own to see for myself. Lo and behold, here's the difference on Chrome:

JsPerf - .push vs. .concat 10000 size-10 arrays (Chrome)

👉 Link to the test on JsPerf

To merge arrays of size 10 for 10,000 times, .concat performs at 0.40 ops/sec, while .push performs at 378 ops/sec. push is 945x faster than concat! This difference might not be linear, but it is already significant at this small scale.
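Outside JsPerf, a rough way to see the gap for yourself is with console.time (a sketch, scaled down to 2,000 source arrays so it finishes quickly; absolute numbers vary wildly by machine and engine):

```javascript
// Build 2,000 small arrays of 10 integers each; raise `count` toward
// 10,000 to approximate the JsPerf runs above.
const count = 2000;
const sources = Array.from({ length: count }, (_, i) =>
  Array.from({ length: 10 }, (_, j) => i * 10 + j)
);

console.time("concat");
let viaConcat = [];
for (const src of sources) {
  viaConcat = viaConcat.concat(src); // allocates a new array every iteration
}
console.timeEnd("concat");

console.time("push");
const viaPush = [];
for (const src of sources) {
  viaPush.push(...src); // appends to the same array in place
}
console.timeEnd("push");

console.log(viaConcat.length === viaPush.length); // true
```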

And on Firefox, here are the results:

JsPerf - .push vs. .concat 10000 size-10 arrays (Firefox)

Firefox's SpiderMonkey Javascript engine is generally slower compared to Chrome's V8 engine, but .push still comes out on top, this time 2260x faster.

This one change to our code fixed the entire slowdown problem.

.push vs. .concat for 2 arrays with 50,000 elements each

But ok, what if you are not merging 10,000 size-10 arrays, but 2 giant arrays with 50000 elements each instead?

Here are the results on Chrome:

JsPerf - .push vs. .concat 2 size-50000 arrays (chrome)

👉 Link to the test on JsPerf

.push is still faster than .concat, but by a factor of 9.

Not as dramatic as 945x, but still dang slow.


Prettier syntax with ES6 spread

If you find Array.prototype.push.apply(arr1, arr2) verbose, you can use a simpler variant with the ES6 spread syntax:

arr1.push(...arr2)

The performance difference between Array.prototype.push.apply(arr1, arr2) and arr1.push(...arr2) is negligible.
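Both forms expand the second array into individual arguments to push and produce the same result:

```javascript
const a1 = [1, 2];
const a2 = [3, 4];

// apply passes the elements of a2 as individual arguments to push...
Array.prototype.push.apply(a1, a2);
console.log(a1); // [1, 2, 3, 4]

// ...and the spread syntax does exactly the same thing.
const b1 = [1, 2];
b1.push(...a2);
console.log(b1); // [1, 2, 3, 4]
```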


But why is Array.concat so slow?

A lot of it has to do with the Javascript engine, but I don't know the exact answer, so I asked my buddy @picocreator, the co-creator of GPU.js, as he had spent a fair bit of time digging around the V8 source code before. @picocreator also lent me his sweet gaming PC, which he used to benchmark GPU.js, to run the JsPerf tests, because my MacBook didn't have the memory to even perform .concat with two size-50000 arrays.

Apparently the answer has a lot to do with the fact that .concat creates a new array while .push modifies the first array. The additional work .concat does to add the elements from the first array to the returned array is the main reason for the slowdown.
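The difference in behaviour is easy to demonstrate:

```javascript
const first = [1, 2];
const second = [3, 4];

// concat leaves `first` untouched and returns a brand-new array...
const merged = first.concat(second);
console.log(merged);           // [1, 2, 3, 4]
console.log(first);            // [1, 2] -- unchanged
console.log(merged === first); // false

// ...while push appends to `first` in place and returns the new length.
const newLength = first.push(...second);
console.log(first);     // [1, 2, 3, 4]
console.log(newLength); // 4
```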

Me: "What? Really? That's it? But by that much? No way!"
@picocreator : "Serious, just try writing some naive implementations of .concat vs .push then!"

So I tried writing some naive implementations of .concat and .push. Several in fact, plus a comparison with lodash's _.concat:

JsPerf - Various ways to merge arrays (Chrome)

👉 Link to the test on JsPerf

Naive implementation 1

Let's talk about the first set of naive implementations:

Naive implementation of .concat
// Create result array
var arr3 = []

// Add Array 1
for(var i = 0; i < arr1Length; i++){
  arr3[i] = arr1[i]
}

// Add Array 2
for(var i = 0; i < arr2Length; i++){
  arr3[arr1Length + i] = arr2[i]
}
Naive implementation of .push
for(var i = 0; i < arr2Length; i++){
  arr1[arr1Length + i] = arr2[i]
}

As you can see, the only difference between the two is that the .push implementation modifies the first array directly.

Results of vanilla methods:
  • .concat : 75 ops/sec
  • .push: 793 ops/sec (10x faster)
Results of naive implementation 1
  • .concat : 536 ops/sec
  • .push : 11,104 ops/sec (20x faster)

It turns out that my DIY concat and push implementations are faster than the vanilla ones... But here we can see that simply creating a new result array and copying the content of the first array over slows down the process significantly.

Naive implementation 2 (Preallocate size of the final array)

We can further improve the naive implementations by preallocating the size of the array before adding the elements, and this makes a huge difference.

Naive implementation of .concat with pre-allocation
// Create result array with preallocated size
var arr3 = Array(arr1Length + arr2Length)

// Add Array 1
for(var i = 0; i < arr1Length; i++){
  arr3[i] = arr1[i]
}

// Add Array 2
for(var i = 0; i < arr2Length; i++){
  arr3[arr1Length + i] = arr2[i]
}
Naive implementation of .push with pre-allocation
// Pre allocate size
arr1.length = arr1Length + arr2Length

// Add arr2 items to arr1
for(var i = 0; i < arr2Length; i++){
  arr1[arr1Length + i] = arr2[i]
}
Results of naive implementation 1
  • .concat : 536 ops/sec
  • .push : 11,104 ops/sec (20x faster)
Results of naive implementation 2
  • .concat : 1,578 ops/sec
  • .push : 18,996 ops/sec (12x faster)

Preallocating the size of the final array improves the performance by 2-3 times for each method.

.push array vs. .push elements individually

Ok, what if we just .push elements individually? Is that faster than Array.prototype.push.apply(arr1, arr2)?

for(var i = 0; i < arr2Length; i++){
  arr1.push(arr2[i])
}
Results
  • .push entire array: 793 ops/sec
  • .push elements individually: 735 ops/sec (slower)

So doing .push on individual elements is slower than doing .push on the entire array. Makes sense.

Conclusion: Why .push is faster than .concat

In conclusion, it is true that the main reason why concat is so much slower than .push is simply that it creates a new array and does the additional work to copy the first array over.

That said, now there's another mystery to me...

Another mystery

Why are the vanilla implementations so much slower than the naive implementations? 🤔 I asked for @picocreator's help again.

We took a look at lodash's _.concat implementation for some hints as to what else vanilla .concat is doing under the hood, as it is comparable in performance (lodash's is slightly faster).

It turns out that, according to the spec, vanilla .concat is overloaded and supports two signatures:

  1. Values to append as n number of arguments, e.g. [1,2].concat(3,4,5)
  2. The array to append itself, e.g. [1,2].concat([3,4,5])

You can even do both like this: [1,2].concat(3,4,[5,6])
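For example:

```javascript
// concat accepts plain values, arrays, or a mix; array arguments are
// flattened one level into the result.
console.log([1, 2].concat(3, 4, 5));      // [1, 2, 3, 4, 5]
console.log([1, 2].concat([3, 4, 5]));    // [1, 2, 3, 4, 5]
console.log([1, 2].concat(3, 4, [5, 6])); // [1, 2, 3, 4, 5, 6]

// Only one level deep: nested arrays are kept as-is.
console.log([1].concat([[2, 3]])); // [1, [2, 3]]
```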

Lodash also handles both overloaded signatures, and to do so it puts all the arguments into an array and flattens it. That makes sense if you are passing in several arrays as arguments. But when passed a single array to append, it doesn't just use that array as it is; it copies it into another array and then flattens it.

... ok...

Definitely could be more optimised. And this is why you might want to DIY your own merge array implementation.

Also, this is just my and @picocreator's theory of how vanilla .concat works under the hood, based on Lodash's source code and his slightly outdated knowledge of the V8 source code.

You can read lodash's source code at your leisure here.


Additional Notes

  1. The tests are done with arrays that only contain integers. Javascript engines are known to perform faster with typed arrays. The results are expected to be slower if you have objects in the arrays.

  2. Here are the specs for the PC used to run the benchmarks:
    PC specs for the performance tests
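As a side note to point 1: typed arrays have no push or concat at all, so merging them goes through TypedArray.prototype.set instead, e.g.:

```javascript
// Typed arrays have a fixed length, so "merging" means allocating the
// final size up front and copying both sources in with .set.
const t1 = new Float64Array([1, 2, 3]);
const t2 = new Float64Array([4, 5]);

const mergedTyped = new Float64Array(t1.length + t2.length);
mergedTyped.set(t1, 0);         // copy t1 starting at index 0
mergedTyped.set(t2, t1.length); // copy t2 right after it
console.log(Array.from(mergedTyped)); // [1, 2, 3, 4, 5]
```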


Why are we doing such large array operations during UI-licious tests anyway?

Uilicious Snippet dev.to test

Under the hood, the UI-licious test engine scans the DOM tree of the target application, evaluating the semantics, accessible attributes and other common patterns to determine what is the target element and how to test it.

This is so that we can make sure tests can be written as simple as this:

// Lets go to dev.to
I.goTo("https://dev.to")

// Fill up search
I.fill("Search", "uilicious")
I.pressEnter()

// I should see myself or my co-founder
I.see("Shi Ling")
I.see("Eugene Cheah")
Enter fullscreen mode Exit fullscreen mode

Without the use of CSS or XPATH selectors, so that the tests can be more readable, less sensitive to changes in the UI, and easier to maintain.

ATTENTION: Public service announcement - Please keep your DOM count low!

Unfortunately, there's a trend of DOM trees growing excessively large these days, because people are building more and more complex and dynamic applications with modern front-end frameworks. It's a double-edged sword: frameworks allow us to develop faster, but folks often forget how much bloat they add. I sometimes cringe at the number of elements that are just there to wrap other elements when inspecting the source code of various websites.

If you want to find out whether your website has too many DOM nodes, you can run a Lighthouse audit.

Google Lighthouse

According to Google, the optimal DOM tree is:

  • Fewer than 1,500 nodes
  • A maximum depth of less than 32 levels
  • No parent node with more than 60 children

A quick audit on the Dev.to feed shows that the DOM tree size is pretty good:

  • Total count of 941 nodes
  • Max. depth of 14
  • Max number of child elements at 49

Not bad!

Top comments (73)

edA‑qa mort‑ora‑y

Nice investigation.

The concat loop is akin to O(n^2) whereas the push loop is O(n). By creating a new array you need to do the copy, as you showed. This happens for every iteration of the loop*. An array push however is amortized constant time (usually), thus you only copy each element once (well probably three times).

Part of the issue with increased copying is the increased memory access. Your array probably exceeds the bounds of the L0 cache on the CPU, possibly the L1 cache. This means that when concat copies, it needs to load new memory from a more distant cache, repeatedly. The push version never accesses the data already in the list, thus avoiding this extra memory loading.

(*Before somebody gets pedantic, it's more like a K*N loop, where K is the loop iterations. However, since the size of the resulting array is linearly related to K, it means c * K = N, thus ~ N^2)

W. Brian Gourlie

Talking strictly in terms of the performance of push versus concat (albeit naively as it relates to VM implementation details), this wouldn't really apply.

A meaningful comparison of the two would take two large pre-allocated arrays, and compare the results of a single concat operation and a single push operation, where I'd expect concat to perform better when dealing with sufficiently large arrays, since it's (theoretically) a single allocation and then two memcpys, whereas push copies a single element over one at a time causing multiple reallocations.

To be pedantic, the context in which we're talking about the performance characteristics of push and concat is in terms of merging a large number of arrays, where I'd wager garbage collection is likely the dominating factor in the concat solution, even after taking runtime characteristics into account.

edA‑qa mort‑ora‑y

I'm not following how this means what I said doesn't apply?

push is highly unlikely to allocate memory one bit at a time. If it's anything like a C++ vector, or a contiguous array in any other language, it has amortized O(1) push time by allocating geometrically rather than linearly.

We're also dealing in sizes that are definitely large enough to spill over the CPU buffers, especially in the concat case where we allocate new data.

GC is potentially involved, but the allocation and deallocation of memory, in such tight loops, for an optimized VM (I assume it is), is likely also linear time.

There's no reason a memcpy is faster than a loop copying over items one at a time. memcpy is not a CPU construct, it gets compiled to a loop that copies a bit of memory at a time.

W. Brian Gourlie

I'm not following how this means what I said doesn't apply?

I should clarify: What you said applies to merging multiple arrays, where the input size is the number of arrays being merged. If we're talking strictly about the performance of push vs concat for concatenating one array to another (which the headline sort of implies), then the input size is always one if analyzing in terms of merging multiple arrays.

push is highly unlikely to allocate memory one bit at a time

As you said, it's going to re-allocate similarly to a C++ vector, which for the purpose of runtime analysis we just disregard. I choose not to disregard it since I don't believe runtime analysis tells the whole story here. Or perhaps more appropriately, merging multiple arrays is a specific use of push and concat that doesn't speak to their performance characteristics in a generalized way.

The point is, concat may perform better in scenarios where one is likely to use it, and it would be misleading to advocate always using push instead.

zenmumbler

Note that your tests may be returning biased results as for example Chrome and likely other browsers have special cased arrays that are known to (not) be sparse.

Doing Array.from(Array(50000), (x,i) => i + 1) looks nice but will create a "hole-y" (sparse) Array. That is, the initial array will be 50000 "holes" or empty values. Filling them in afterwards will not upgrade the array to be non-sparse. These things change quickly, but this is what was featured in a session by a Chrome engineer on Array implementations last year: youtube.com/watch?v=m9cTaYI95Zc

I work with large arrays as well (3D model data); these are anywhere from 1KB to 100MB in size and I sometimes have to append them all together without wasting memory. I use the following method to append them. It uses Array.slice, but browsers (seem to) implement slicing as copy-on-write, so I've found empirically that the slice creates a view on the array and doesn't constantly make copies.

const MAX_BLOCK_SIZE = 65535; // max parameter array size for use in Webkit

export function appendArrayInPlace<T>(dest: T[], source: T[]) {
    let offset = 0;
    let itemsLeft = source.length;

    if (itemsLeft <= MAX_BLOCK_SIZE) {
        dest.push.apply(dest, source);
    }
    else {
        while (itemsLeft > 0) {
            const pushCount = Math.min(MAX_BLOCK_SIZE, itemsLeft);
            const subSource = source.slice(offset, offset + pushCount);
            dest.push.apply(dest, subSource);
            itemsLeft -= pushCount;
            offset += pushCount;
        }
    }
    return dest;
}

Linked here in my repo, which also has quite a few other array methods, check 'em out ;)
github.com/stardazed/stardazed/blo...

Eugene Cheah

Definitely a biased sample from a browser point of view - not fully intentional though. We tried to get Firefox results for extremely large arrays... and, err, let's say it didn't go well at all for the other browsers when put side by side, so we left them out.

Bigger impact though is probably gonna be on the server side (node.js) of things, which is the case for us.

Despite my involvement in GPU.JS and all things 3D, which is cool in its own right, for most other sites I do personally feel that it's usually a bad idea to overbloat the UI, especially for those with older devices.

Big thanks for the youtube link =)

zenmumbler

Yes, my focus is in-browser (on desktop) high-end 3D scenes, which is very different from web pages and server-side stuff. I've spent a good amount of time benchmarking various approaches to array-based data manipulation, so certain of my functions have separate paths for different browsers. In general, Safari (since v11) and Chrome are very close, with my code usually performing best in STP, and Firefox generally coming in 3rd place by about -10% to -30%, unless it's faster by a good margin in particular things… JS engines.

This problem also exists for ArrayBuffers as the changes to JS engines to accommodate ES6 runtime features negatively affected many common functions on those types. New patches have come in since then and they're better now, but certain functions on ArrayBuffer, like set, are still very slow on all browsers.

Thanks to Shi Ling for the nice article.

Eugene Cheah

Actually, your time spent on various array benchmarks across browsers would be a cool article on its own - especially the various other browsers: Safari desktop, mobile, Edge, IE11? (haha).

Would actually love to see your browser-specific implementations 🤔 It is admittedly a lot of work though.

zenmumbler

Thanks! A lot of this work is in various test files (not in repos) and quite a bit of it was done over the last 3 years so I'd have to retest all permutations etc. I often feel like writing an article here but instead I just write a long comment on someone else's story as that feels less "weighty." But cool that you'd be interested!

As for browsers, given the scope of my project I have the luxury of requiring essentially the latest versions of desktop browsers, so my project outputs ES2018 code and I only need to check a few different engines.

Shi Ling (Author)

Oh, I get it - you mean those holey arrays will be evaluated as holey arrays forever even if they become packed.

That video is really helpful - I learnt yet another thing about Array implementations in V8 today. Gee I wish more of these good videos are transcribed into searchable online literature.

Hm... I could change the array initialisation so that it's not holey and redo the experiments - concat will still be slower than push, but the results for the vanilla implementations could perform faster.

Huh, that's interesting! I was wondering what are the memory footprints of different implementations to merge arrays!

Vesa Piittinen

A thing I do wonder though: if you do an extra .slice(0) operation after the array is packed, wouldn't that result in a true fast-to-operate non-holey array? The cost would be an extra allocation and copy, which would still be better than continuous .push() into a packed array.

Of course this only holds true as long there will be thousands of items to process and you know the final size before you start working on it.

zenmumbler

This is Javascript-engine dependent, but please have a look at my initial comment: I observed that Array.slice creates a view on an array instead of making a new one until it is modified (copy-on-write). This behaviour will likely vary, for example based on what the VM thinks is best given the size and contents of the array, and will again vary per VM implementation. Also, my observation is just that; I did not verify this in the source of each VM, so consider this an unconfirmed opinion.

The video may also give a real answer to this Q as the presenter goes into detail about these things, but I haven't watched it recently. Good to have a look.

Vesa Piittinen

I watched the video and it didn't answer that specific question. However if slice creates a view then it still does need to make a new array once any modification is made on the sliced copy. At that point shouldn't it regard it as a new array and be able to optimize it?

It would be awesome though if holey arrays could become non-holey without this kind of VM specific tricking once there are no holes remaining.

Kitanga Nday

So from what I remember, Array implementations in V8 come in two flavours: Fast element and Dictionary element (hash table) arrays. The former is present when the array is below 10000 elements and doesn't have any "holes" in them. The latter, on the other hand, is present when the opposite is true: array.length > 10000 and array has "holes" (e.g. [1, 2, undefined, 4, 5])

Eugene Cheah

While not shown in the use case (as the above oversimplifies the actual code), you're right that holes have an impact on the array mode / dictionary mode decision in V8. Having holes does not guarantee a switch into dictionary mode, but it is one of the contributing factors in how the internal engine decides the mode.

Probably worth further investigation; my gut feel as of now, from how I understand it, is that the array in our use case would not have any holes 99% of the time. And from our internal code monitoring, it's this 99% that has the problem. >_<

I can't recall the talk reference for this (but I definitely know it's real), so if you know where it's from, it would be good to add it here for others to learn.

Kitanga Nday

I replied to this, but I don't see the reply here. Really weird.

Anyways here are two resources that talk about how V8 handles arrays:

Kitanga Nday

So concat might have been slow because of this reason. I'm still not certain about this, mainly because .push was still fast.

Luisa Pinto

I am sorry, I do not understand what the big fuss is about... concat() and push() have two different applications and were not made to compete with each other. You do not need a study to conclude that concat() is the slower option; that is very clear from reading the first two lines of the documentation of each method.

developer.mozilla.org/en-US/docs/W...

developer.mozilla.org/en-US/docs/W...

It is important to understand that you use concat() when you are after immutability, for example, when you want to preserve states. One example where immutability is important is when using React - reactjs.org/tutorial/tutorial.html....

Otherwise, you use push when you don't care about changing the arrays (one of them at least).

I am much more surprised that immutability was not mentioned in this article than by the difference in performance between the two methods.

Shi Ling (Author)

While that is true, and although it just so happened that in my use case immutability doesn't matter, it is still worthwhile to consider using other methods to merge arrays instead of concat while still preserving the source arrays.

If you needed to merge thousands of small arrays in a loop without modifying the source arrays, for example, you should create a new result array and push the source arrays into it instead, to avoid the penalty of copying the first array on each loop iteration when you use concat.

Naive code to merge 1000 arrays with concat (basically what I did, which in my case is 15,000 arrays):

for(var i = 0; i < 1000; i++){
  arr1 = arr1.concat(arr2)   // arr1 gets bigger and bigger, so this gets more expensive over time
}

Faster code to merge 1000 arrays with push w/o modification to source arrays:

let result = []
result.push(...arr1)
for(var i = 0; i < 1000; i++){
  result.push(...arr2) 
}

Also, given the name, it's not very obvious that concat creates a new array instead of modifying the first array. And even though I frequently read the MDN docs, I still sometimes forget that concat creates the result as a new array.

jmc

it's not very obvious to people that concat creates a new array instead of modifying the first array

I always forget too 😂

Alec Joiner

FWIW, to me it is very intuitive that concat would create a new array since when I hear the word 'concat' I immediately think of strings. Maybe it's misguided, but that leads me to expect other things to behave the same way strings would.

Paul-Sebastian Manole

Might want to mention what the spread operator actually does, which is to spread the one argument to array.push(...arg) to multiple arguments like so: array.push(arg[0], arg[1], arg[2], ..., arg[n]).

Shi Ling (Author)

You're right, I didn't realise that people might not know what the ES6 spread operator does. Updated the article - thanks!

Guillaume Martigny

Very thorough article.

Considering the fame of lodash, I now wonder if there's a place for an equivalent with performance in mind.

I also wonder how many more examples like this there are. Is it only concat, or can most of Array.prototype be optimised?

I remember my early struggles with JS performance, replacing Math.floor with << 0 and gaining a considerable amount of ops/s. (Not true today.)

Adam Gerthel

Another example is array.push vs array.unshift, basically for memory reasons. If I remember correctly, a push is simple for most computers because the array already has "extra" memory allocated at the end, and the array only has to be recreated when the allocated memory runs out. unshift, however, will always recreate the whole array, because adding an item to the start would require moving every single item one step; recreating the whole array is simpler than moving each one.

See stackoverflow.com/questions/440315...
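The difference in behaviour (though not the cost) is easy to see:

```javascript
// push appends at the end; existing elements keep their indices.
const tail = [1, 2, 3];
tail.push(4);
console.log(tail); // [1, 2, 3, 4]

// unshift inserts at the front; every existing element must move to a
// higher index (or the whole array is rebuilt), which is the costly part.
const head = [1, 2, 3];
head.unshift(0);
console.log(head); // [0, 1, 2, 3]
```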

Shi Ling (Author)

Huh, interesting!

mykohsu

There's a major problem that should be mentioned.

push(...arr)

will stack overflow if your array is larger than the stack size.

Shi Ling (Author)

Wait... why? Stack as in execution stack or memory?

jmc

Execution stack. It's the same for apply:

> const a = [], b = new Array(10**6)
> Array.prototype.push.apply(a, b)
Thrown:
RangeError: Maximum call stack size exceeded

The problem is that function call arguments all need to go on the call stack, so that they can be bound to the called function's variables. Put another way, push.apply(a, [1,2,3]) --> a.push(1,2,3), where 1, 2, and 3 are assumed to fit in the call stack.

The max stack size is set by the OS & user, but generally small (a few hundred or thousand kb), since it's only meant to hold function call contexts.

So we can distinguish between cases where the array is actually a list of function arguments, or the push.apply usage, where the array is "just" data, and we're only able to use it with push because that function, for convenience, takes a variable number of arguments.

In order to use apply in the latter case, it's good to know up front that the array will be small.
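When the array might be large, a common workaround (in the same spirit as the chunked appendArrayInPlace in an earlier comment) is to push in bounded batches:

```javascript
// Append `source` onto `dest` in bounded chunks, so no single push call
// puts more than CHUNK arguments on the call stack.
const CHUNK = 32768; // conservative; actual stack limits vary by engine/OS

function appendChunked(dest, source) {
  for (let offset = 0; offset < source.length; offset += CHUNK) {
    dest.push(...source.slice(offset, offset + CHUNK));
  }
  return dest;
}

const big = Array.from({ length: 10 ** 6 }, (_, i) => i);
const out = appendChunked([], big);
console.log(out.length); // 1000000
```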

Shi Ling (Author)

Oh I see!

Nader Dabit

What about:

const newArr = [...oldArr, newItem]

🤔

Ian Wilson

I tried this out in the Babel REPL; the spread uses concat after the ES6 code gets transpiled, so I assume it's probably the same :O

babeljs.io/repl/#?babili=false&bro...

Shi Ling (Author)

Doesn't that still create a copy of the first array - which makes it the same as concat?

Nader Dabit

I don't know actually. If so, that's really insightful thank you, I'll investigate!!

Ady Ngom

Great stuff. I have so much love for jsperf.com that I remember being very upset when it went down for a while. Glad to have it back up. I recently used it to check the performance of different solutions to the Sock Merchant Challenge; please check the comments section if you have a sec.

My question, though, is that since some have expressed possible bias in certain browsers, how much should we trust or rely on the built-in console.time method running in a browserless environment such as Node to provide a base for such benchmarks?

console.time('how long');
  plzBenchMarkMe();
console.timeEnd('how long');

If this is an objective and efficient way of testing speed, it should be easy to put a wrapper around it for reuse. If not, what do folks recommend as tools to look into for this crucial exercise?

Shi Ling (Author)

Huh, I didn't know there's a console.time and console.timeEnd methods.

Been using new Date() all along :D.

I don't know if JsPerf uses console.time and console.timeEnd under the hood, but the underlying implementations of the timer method can't be so sophisticated that it'll make a significant impact on the tests between browsers.

Dima

Seems the problem is not in concat, but in = itself.
As I can see, a map of concat is even faster than push:

console.time('how long');
var arBig = 
[...Array(10000).keys()].map( function(step) {
  return arr1.concat(arr2);
});
console.log(arBig.length);
console.timeEnd('how long');

Plz check jsperf.com/javascript-array-concat...

Vesa Piittinen

Not comparable: the concat mapper creates 10000 arrays of 20 item arrays, while the other functions create an array of 100010 items.

Dima

Ahh, indeed!
Well, with reduce-concat the timing becomes the same as arr1 = arr1.concat :(

console.time('how long');
var arBig = 
[...Array(10000).keys()].reduce( function(acc, val) {
  return acc.concat(arr2);
}, arr1);
console.log(arBig.length);
console.timeEnd('how long');
zakius

Regarding the DOM count: how do you properly handle the situation when you simply have a lot of items to display? Implementing a fake scroller that renders only the visible elements and removes the hidden ones from the DOM doesn't work that well IMO, especially when all of them are still more or less interactive (multiple selection etc.)

Shi Ling Author

You mean how should a front-end engineer improve performance on a page with a high DOM count, or how does our test engine handle such huge pages?

zakius

I meant how to handle high item counts while keeping good performance/minimizing DOM count

Shi Ling Author

Well, the most common solution is pagination. But if your designer fancies the infinite scrolling pattern, then as you said, rendering only visible content is a solution. The Chrome Dev Team proposed the <virtual-scroller> element youtube.com/watch?v=UtD41bn6kJ0 last year, which may be handy if it becomes standard in the future.

Additionally, we can also check if there are redundant wrapper elements on the page that can be removed.

zakius

Thanks for the link.
It's not exactly infinite scrolling or anything like that. It's an app (kind of, a browser extension), an RSS reader to be precise, and just by importing my sources list and downloading the currently available articles it goes to over 3000 articles. Selecting "all feeds" surely renders quite a big list.
It's a legacy project I picked up since it was abandoned, and I cleaned it up a bit. There was even an attempt at handling long lists better, but it actually worked slower than the naive list-everything approach and was glitching quite often. Currently performance isn't terrible, but if there were some easy way to improve it I'd gladly use it. I'll do some more research in this matter to see what can be done.

Akshay.L.Aradhya • Edited on

I just tested this myself and found that concat is much faster than push. Benchmarking was done with Node 15.7.0 using Benchmark.js.

Test Results - Merging 2 10k element arrays
--------------------------------------------------------------------------
Concat                    : 9780.56 ops/sec (+138.97 %)
Spread operator           : 6278.58 ops/sec ( +53.41 %)
Using Push                : 4273.39 ops/sec (  +4.41 %)
Push with spread operator : 4092.72 ops/sec (  +0.00 %)
--------------------------------------------------------------------------


Test Results - Merging 2 1000 element arrays
-------------------------------------------------------------------------------
Concat                    : 329820.18 ops/sec (+574.92 %)
Spread operator           : 112274.04 ops/sec (+129.75 %)
Push with spread operator : 98152.51 ops/sec (+100.85 %)
Using Push                : 48868.39 ops/sec (  +0.00 %)
-------------------------------------------------------------------------------

Test Results - Merging 2 100 element arrays
-------------------------------------------------------------------------------
Concat                    : 2325515.17 ops/sec (+381.10 %)
Spread operator           : 649242.79 ops/sec ( +34.31 %)
Push with spread operator : 608852.43 ops/sec ( +25.96 %)
Using Push                : 483378.08 ops/sec (  +0.00 %)
-------------------------------------------------------------------------------
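The exact Benchmark.js setup isn't shown in the comment, so here is a hedged sketch of what each labelled strategy presumably measures; the loop-based "Using Push" variant in particular is an assumption.

```javascript
// The four merge strategies presumably behind the labels in the results above.
// Treat this as an illustration, not the original benchmark code.
const arr1 = Array.from({ length: 100 }, (_, i) => i);
const arr2 = Array.from({ length: 100 }, (_, i) => i);

const byConcat = arr1.concat(arr2);        // "Concat"
const bySpread = [...arr1, ...arr2];       // "Spread operator"

const byPush = arr1.slice();               // "Using Push": element by element
for (let i = 0; i < arr2.length; i++) byPush.push(arr2[i]);

const byPushSpread = arr1.slice();         // "Push with spread operator"
byPushSpread.push(...arr2);

console.log(byConcat.length, bySpread.length, byPush.length, byPushSpread.length);
```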
Jishnu Viswanath

I modified the code

var arr1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
var arr2 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
var arr3 = []

var date = Date.now();
for (let i = 10000; i > 0; i--) {
  var arr  =[];
  Array.prototype.push.apply(arr, arr1);
  Array.prototype.push.apply(arr, arr2);
  arr1 = arr;
}

Date.now() - date;

This gives the exact same performance as the concat function; basically, creating the new array is what costs the extra perf.
I mean, the concat function actually creates a new array and appends the existing array into it, which is a costly affair considering that the accumulated array constantly grows to 10, 20, 30, 40, ... 99990 entries and gets appended again in the next iteration.
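A small sketch of the cost difference described above: concat allocates a brand-new array on every call and copies everything into it, while push appends into the existing array in place.

```javascript
// concat vs push when merging the same small array repeatedly.
const arr2 = [1, 2, 3, 4, 5];

// concat: each iteration copies the accumulated result plus arr2 into a
// new array, so total work grows quadratically with the number of merges.
let byConcat = [];
for (let i = 0; i < 1000; i++) {
  byConcat = byConcat.concat(arr2);
}

// push: appends in place; the backing store only reallocates occasionally.
const byPush = [];
for (let i = 0; i < 1000; i++) {
  byPush.push(...arr2);
}

console.log(byConcat.length, byPush.length); // both 5000
```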

J • Edited on

Your benchmark for 50k length arrays is busted: you are building up arr1 in each run of the test case, which means it gets longer and longer on each pass. This obviously favors push, which only copies when its backing store grows (roughly at each power of two), whereas concat has to copy everything every time.

If you fix the benchmark, concat is faster: jsperf.com/javascript-array-concat...
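A hedged sketch of the fix described above: copy arr1 fresh before every timed run so each pass merges arrays of the same size, rather than letting arr1 grow across passes. The original jsperf setup isn't shown here, so the harness shape is an assumption.

```javascript
// Fair merge benchmark: reset the input array before every timed run.
const base = Array.from({ length: 50000 }, (_, i) => i);
const arr2 = [1, 2, 3, 4, 5];

function timeMerge(merge, runs = 100) {
  let total = 0;
  for (let r = 0; r < runs; r++) {
    const arr1 = base.slice(); // fresh 50k-element input every run
    const t0 = Date.now();
    merge(arr1, arr2);
    total += Date.now() - t0;
  }
  return total; // total milliseconds across all runs
}

const concatMs = timeMerge((a, b) => a.concat(b));
const pushMs = timeMerge((a, b) => { a.push(...b); return a; });
console.log({ concatMs, pushMs });
```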

Ben Halpern

Great investigation and data!
