DEV Community

Discussion on: Please don't "overchain" array methods

Abdullah Albanyan (abalbanyan)

While I agree with the content of this article, I think any good article with advice on optimization needs to include some basic profiling to prove its point. Something as nebulous as "loop iterations" isn't really a convincing metric.

Here are the timings for each of the array operations from this article, run on a randomly generated array of 50,000 numbers on my machine. This gives an idea of the relative performance of each optimization.

sum:       6.189208984375ms
level1Sum: 4.746337890625ms
level2Sum: 1.995849609375ms
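
For anyone who wants to reproduce this, here is a rough sketch of the kind of harness that could produce numbers like these. The two functions below are only stand-ins (the article's actual sum/level1Sum/level2Sum implementations aren't reproduced here), and exact figures will vary by machine and engine:

// Rough benchmark sketch -- console.time works in both browsers and Node.
const nums = Array.from({ length: 50000 }, () => Math.random() * 100);

// Stand-in for a fully chained implementation (placeholder, not the article's code):
const sum = arr =>
  arr.map(n => n * 2).filter(n => n > 50).reduce((acc, n) => acc + n, 0);

// Stand-in for the "flattened" single-pass version:
const level2Sum = arr => {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    const doubled = arr[i] * 2;
    if (doubled > 50) total += doubled;
  }
  return total;
};

console.time('sum');
sum(nums);
console.timeEnd('sum');

console.time('level2Sum');
level2Sum(nums);
console.timeEnd('level2Sum');

For steadier numbers you would normally warm the functions up and average over many runs; this is only meant to show the relative gap.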
 
Basti Ortiz (somedood)

Thank you so much for your advice and for taking the time to investigate further! I will definitely add metrics the next time I write performance-related articles.

Mihail Malo (qm3ster)

That is an insane improvement.

Charles Wood (chuckwood)

Percentage-wise, sure. In practice, is a human being going to notice a 4.2ms improvement?

Mihail Malo (qm3ster)

Your phone's battery will notice.

Tinus Smit (tinussmit)

I can't upvote this enough. It is such a simple, yet effective way of demonstrating where inefficiencies of code have a real-world impact. I'm going to use this in future discussions. People often see such metrics in isolation, without understanding the larger impact of why it's important to write more efficient code.

I work mostly with server-side code, and the same principle applies. If I can save the server even 1 second of processing on a given request, then every instance of that request saves that time, which means many concurrent requests won't use as much of the server's resources.

The above applies to client-side code as well: if you replace "server" with "client" (desktop/mobile), you still get the same gains.

Tari R. Alfaro (tarialfaro)

I always thought the whole "humans won't notice the performance improvements" thing was bogus. Sure, they probably won't see a difference.

However, their machines do. Sure, we are building applications for humans, but that doesn't mean we should overlook performance improvements that only machines will notice.

There is a downside, though: performance-tuned code typically has greatly reduced human readability, like the solution provided in this article.

Basti Ortiz (somedood)

Sometimes, it's a price we have to pay when it's truly necessary. Indeed, it is frustrating how we can't have both readability and performance. Perhaps transpilers could be used to somehow convert less performant code into its more performant counterpart, but then that's just adding a whole new layer of complexity to the project. It isn't really a "solution" per se.
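
To make the tradeoff concrete, here is a purely hypothetical before/after (the function names are made up and no existing tool is implied): the readable chained form, followed by the fused single-pass loop such a transpiler could emit.

// Readable version: the intent is obvious at a glance.
const activeUserNames = users =>
  users.filter(u => u.active).map(u => u.name);

// Hand-fused version a tool could generate instead: one pass and no
// intermediate array, but the intent is buried in loop bookkeeping.
const activeUserNamesFused = users => {
  const out = [];
  for (let i = 0; i < users.length; i++) {
    const u = users[i];
    if (u.active) out.push(u.name);
  }
  return out;
};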

Mihail Malo (qm3ster)

Transpilation seems like the cheapest of the three possible options to me.
Though I don't see a lot of transpilers optimizing for performance.
I feel like we have the community for such an undertaking now, but those who care about the raw performance of their web scripts have now focused on WASM instead, which is a smart move and has a lot more potential.
Yay WASM!

Daniel (6temes)

Except that, if you are using JS for web development, as most of us are, 99% of the time you will be working with arrays containing tens of objects, and these optimizations will be irrelevant.

Orian de Wit (okdewit) • Edited

I've often heard "there will never be more than 20 items on this page" followed by "let's add this complex metadata to each item" followed by "shit, due to unforeseen circumstances, a big client needs 20000 items on that page, and now it feels sluggish".

I'd say don't spend time on difficult optimizations with negligible real-world improvements... But when the optimization is extremely easy, why would you NOT do it?

Mihail Malo (qm3ster)

I wish JS runtimes did better inlining and monomorphization.
Then you could have your own utilities like

// A plain single-pass map: assumes a dense array, no thisArg, no holes check.
export const map = (arr, fn) => {
  const len = arr.length
  const out = new Array(len)
  for (let i = 0; i < len; i++) {
    out[i] = fn(arr[i])
  }
  return out
}

Which beats out what the spec method does:

// Roughly what the spec requires of Array.prototype.map (and what a faithful
// polyfill does): coerce `this`, check for holes, define each result property.
export function map(fn, thisArg) {
  const O = Object(this)
  const len = O.length
  if (!(fn instanceof Function)) throw new TypeError(`${fn} is not a function`)
  const A = new Array(len)
  for (let k = 0; k < len; k++) {
    const Pk = k.toString()
    const kPresent = O.hasOwnProperty(Pk)
    if (kPresent) {
      const kValue = O[Pk]
      const mappedValue = fn.call(thisArg, kValue, k, O)
      Object.defineProperty(A, Pk, { value: mappedValue, writable: true, enumerable: true, configurable: true })
    }
  }
  return A
}

But most of that advantage is lost unless you actually write the first (simpler) loop inline, because, unlike the built-in methods, the utility gets deoptimized if you call it with many different fn functions, especially when some of them throw.
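
A small sketch of what that looks like at a call site (the engine behaviour described in the comments is typical of current JITs such as V8, not something the spec guarantees):

// The same kind of utility as above, defined here so this snippet runs on its own.
const map = (arr, fn) => {
  const len = arr.length;
  const out = new Array(len);
  for (let i = 0; i < len; i++) out[i] = fn(arr[i]);
  return out;
};

// Every distinct callback adds another function "shape" to the fn(arr[i])
// call site inside map. Once it has seen several different ones, engines
// typically treat the site as megamorphic and stop inlining it.
map([1, 2, 3], x => x * 2);
map([1, 2, 3], x => x + 1);
map([1, 2, 3], x => `${x}`);

try {
  // A callback that throws can additionally deoptimize the surrounding code.
  map([1, 2, 3], x => { throw new Error('boom'); });
} catch (e) {
  // Swallowed on purpose; it is only here to illustrate the throwing case.
}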