While I agree with the content of this article, I think any good article with advice on optimization needs to have some basic profiling to prove their point. Something as nebulous as "loop iterations" isn't really a convincing metric.
Here are the timings of each of the array operations performed in this article as performed on a randomly generated array of 50000 numbers on my machine. This gives an idea of the relative performance of each optimization.
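A minimal timing harness along these lines can produce such numbers. The operations and sizes below are illustrative assumptions, not the exact benchmark described above:

```javascript
// A minimal timing harness (illustrative; the operations and sizes here
// are assumptions, not the commenter's exact benchmark script).
const data = Array.from({ length: 50000 }, () => Math.random() * 100);

function time(label, fn) {
  const start = performance.now();
  const result = fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(2)}ms`);
  return result;
}

// Chained passes: each step walks the array and allocates an intermediate.
const chained = time('map + filter + reduce', () =>
  data.map(n => n * 2).filter(n => n > 50).reduce((sum, n) => sum + n, 0)
);

// Single pass: the same result in one loop, with no intermediate arrays.
const single = time('single reduce', () =>
  data.reduce((sum, n) => {
    const doubled = n * 2;
    return doubled > 50 ? sum + doubled : sum;
  }, 0)
);
```

Both variants add the same values in the same order, so they produce the same sum; only the allocation and traversal costs differ.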
Thank you so much for your advice and for taking the time to investigate further! I will definitely add metrics the next time I write a performance-related article.
That is an insane improvement.
Percentage-wise, sure. In practice, is a human being going to notice a 4.2ms improvement?
Your phone's battery will notice.
I can't upvote this enough. It is such a simple, yet effective way of demonstrating where inefficiencies of code have a real-world impact. I'm going to use this in future discussions. People often see such metrics in isolation, without understanding the larger impact of why it's important to write more efficient code.
I work with server-side code mostly, and the same principle applies. If I can save the server even 1 second of processing on a given request, then it means that every instance of this request has time saved. Which means that many concurrent requests will not use as much of the server's resources.
^ applies to client side code as well, which means if you replace "server" with "client" (desktop / mobile), then you still have the same gains.
I always thought the whole "humans won't notice the performance improvements" thing was bogus. Sure, they probably won't see a difference.
Their machines, however, do. Yes, we are building applications for humans, but that doesn't mean we should overlook performance that only machines will notice.
There is a downside, though: performance-tuned code typically has greatly reduced human readability, like the solution provided in this article.
Sometimes that's a price we have to pay when it's truly necessary. It is frustrating that we can't have both readability and performance. Perhaps transpilers could convert less performant code into more performant equivalents, but that just adds a whole new layer of complexity to the project, so it isn't really a "solution" per se.
Transpilation seems like the cheapest of the three possible options to me.
Though I don't see a lot of transpilers optimizing for performance.
I feel like we have the community for such an undertaking now, but those who care about the raw performance of their web scripts have shifted their focus to WASM instead, which is a smart move with a lot more potential.
Yay WASM!
Except that, if you are using JS for web development, as most of us are, 99% of the time you will be working with arrays containing tens of objects, and these optimizations will be irrelevant.
I've often heard "there will never be more than 20 items on this page" followed by "let's add this complex metadata to each item" followed by "shit, due to unforeseen circumstances, a big client needs 20000 items on that page, and now it feels sluggish".
I'd say don't spend time on difficult optimizations with negligible real world improvements... But when the optimization is extremely easy, why would you NOT do it?
I wish JS runtimes did better inlining and monomorphization.
Then you could have your own utilities like
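A hand-rolled map utility in that spirit, sketched minimally under the assumption of a dense array, might look like:

```javascript
// A lean hand-rolled map: one plain loop, a preallocated result, and no
// per-element property machinery. Assumes a dense array (no holes).
function map(arr, fn, thisArg) {
  const len = arr.length;
  const result = new Array(len);
  for (let i = 0; i < len; i++) {
    result[i] = fn.call(thisArg, arr[i], i, arr);
  }
  return result;
}

// map([1, 2, 3], n => n * 2) → [2, 4, 6]
```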
Which beats out what the spec method does:
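The spec's algorithm, roughly as the familiar Array.prototype.map polyfill sketches it, pays for an index-to-string conversion, a presence check, and a full property definition on every element:

```javascript
// Spec-style map: follows the ECMA-262 Array.prototype.map steps, paying
// for ToString(k), a HasProperty check, and a property definition per element.
function specMap(fn, thisArg) {
  const O = Object(this);
  const len = O.length;
  if (!(fn instanceof Function)) throw new TypeError(`${fn} is not a function`);
  const A = new Array(len);
  for (let k = 0; k < len; k++) {
    const Pk = k.toString();
    const kPresent = O.hasOwnProperty(Pk);
    if (kPresent) {
      const kValue = O[Pk];
      const mappedValue = fn.call(thisArg, kValue, k, O);
      Object.defineProperty(A, Pk, {
        value: mappedValue,
        writable: true,
        enumerable: true,
        configurable: true
      });
    }
  }
  return A;
}

// specMap.call([1, 2, 3], n => n * 2) → [2, 4, 6]
```

Calling it as `specMap.call(someArray, fn)` mirrors how the built-in receives the array as its receiver.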
But most of that is lost unless you actually write the outer loop inline, because unlike the built-in methods, the utility gets deoptimized when you call it with many different fn functions, especially when some of them throw.