I couldn't reproduce the results. I'm seeing `fil` run about 35% slower on Node 14.8.0 on my machine.
One problem with this type of micro-optimization is that results vary with many factors, one of them being the engine the code runs in.
This type of code may run fast in Engine-123 today but slower in other engines, and future optimizations to that engine may make this method slower tomorrow.
It is generally recommended to leave this kind of optimization to the compiler and instead optimize your code for readability. Only when the code has been measured to be a bottleneck in the application should an optimization like this be considered.
Cheers 🍻
Interesting: using your example, I get a massive speed boost on both Mac and Windows using the loop.
To test properly, run each benchmark independently, so compiler optimizations triggered by one candidate don't influence the other.
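One way to do that is to select a single candidate per process invocation, so the JIT only ever sees one of them. This is a sketch, not the actual benchmark from the thread; the candidate functions here are stand-ins:

```javascript
// bench.js — run each candidate in its own process, e.g.:
//   node bench.js loop
//   node bench.js builtin
// Separate processes keep one candidate's JIT type feedback
// from deoptimizing or skewing the other.
const candidates = {
  loop(arr) {
    const out = [];
    for (let i = 0; i < arr.length; i++) if (arr[i] < 8) out.push(arr[i]);
    return out;
  },
  builtin(arr) {
    return arr.filter(n => n < 8);
  },
};

const name = process.argv[2] || "loop";
const a = Array.from({ length: 1e6 }, (_, i) => i % 100);

console.time(name);
candidates[name](a);
console.timeEnd(name);
```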
Yes, a micro-optimization would be a small change, in the range of 5 to 10%.
But when one version runs many times faster, you want to use the most performant one.
There is nothing special about the built-in functions that makes the compiler optimize them better than custom functions. Though some of them may be implemented in C++, it all runs inside the V8 sandbox.
In rare cases the V8 team will optimize the engine for certain operations in a new release, usually a major one.
In almost all cases a simple loop will win out.
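For context, a loop-based filter like the `fil` under discussion might look like the following. This is a guess at the function's shape; its actual implementation isn't shown in this thread:

```javascript
// A plain-loop equivalent of Array.prototype.filter.
// `fil` here is a stand-in for the custom function being benchmarked.
function fil(pred, arr) {
  const out = [];
  for (let i = 0; i < arr.length; i++) {
    if (pred(arr[i])) out.push(arr[i]);
  }
  return out;
}

// Behaves like the built-in:
const data = [3, 12, 5, 99];
console.log(fil(n => n < 8, data));   // [3, 5]
console.log(data.filter(n => n < 8)); // [3, 5]
```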
However, many algorithms will vary in performance depending on the data profile.
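A small illustration of data-profile sensitivity (a sketch with a hypothetical early-exit search, not code from the thread): the same algorithm can be fast or slow depending purely on where the data puts the match.

```javascript
// An early-exit search is fast when the match appears early
// and slow when it appears late, with no change to the algorithm.
const findIndexLoop = (pred, arr) => {
  for (let i = 0; i < arr.length; i++) if (pred(arr[i])) return i;
  return -1;
};

const early = [0, ...Array(1e6).fill(1)]; // match at index 0
const late = [...Array(1e6).fill(1), 0];  // match at the very end

console.time("early");
findIndexLoop(n => n === 0, early);
console.timeEnd("early");

console.time("late");
findIndexLoop(n => n === 0, late);
console.timeEnd("late");
```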
When you have a set of tools that can test a function you've written in less than a minute, it's worth it.
It's not always about comparing with the built-in function, though you should add the built-ins to the set of candidate algorithms where appropriate.
Readability is not related to the complexity of the function; the documentation is.
For example, you're not going to skip memoization just because the unmemoized version is more readable, when memoization could improve the performance of your code 10x or 100x.
Testing the performance of your functions means you understand your functions better, you understand the compiler better, you understand which idioms work better, and you make your codebase more performant as a whole.
I agree with you that everything has pros and cons that must be evaluated, and that I wouldn't worry about the small differences but focus on the big ones.
But you should performance-test all your functions, as well as robustness-test and fuzz-test them.
Thanks for your input and for testing that out. Your input is valuable!

P.S. I have a post here on how to evaluate what code to use before committing it to production:
dev.to/functional_js/squeezing-out...
```javascript
const lt = console.time;
const le = console.timeEnd;

const a = genNums(10e6);
// const a = genRandNums(1, 10e6, 10e6);

// fil: 73.986ms - on windows
// fil: 33.733ms - on mac
lt("fil");
fil(n => n < 8, a);
le("fil");

// filter: 506.438ms - on windows
// filter: 153.095ms - on mac
lt("filter");
a.filter(n => n < 8);
le("filter");
```
If you can write your own code that runs faster than the native code, that usually means the native code isn't written that well... the native code could have been written in C or C++ and run at near machine-code speed.