Using arrow functions might be costing you performance

George on May 31, 2019

Oh, and so might implicit returns… Background We all know and love arrow functions, the clean looks, the convenience. But using them do...
Eugene Karataev

Actually jsPerf shows that arrow functions with implicit return are the fastest.

Functions comparison

But the difference in performance is so small, that it shouldn't be taken into account when deciding what type of function to use.
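
The jsPerf cases were presumably along these lines; a hypothetical reconstruction, since the exact test code isn't reproduced here:

```js
const nums = [1, 2, 3, 4, 5];

nums.map(function (n) { return n * 2; }); // traditional function expression
nums.map(n => { return n * 2; });         // arrow function, explicit return
nums.map(n => n * 2);                     // arrow function, implicit return
```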

Jamie Gaskins

Agreed. With a difference of 1.5M operations per second, and assuming a screen refresh time of 16.66 (that's repeating, of course) milliseconds, you would have to swap 250K calls from the faster option to the slower one to make a noticeable difference, even to a professional StarCraft player.

Philip Attisano

👆 This comment.
You, sir, just destroyed the entire article... take a bow.

Derk-Jan Karrenbeld

This might be more interesting to compare: jsperf.com/fufufufu/3

Eugene Karataev

In my test run, the results are almost the same for all three tests (ops/sec varies by ±1%).

Derk-Jan Karrenbeld

That is the point ;)

This article is pretty fear-mongering and statistically incorrect 🤓.

  • Don't worry about using arrow function expressions
  • Don't worry about using implicit return

Along the same lines:

  • Don't worry about relying on Automatic Semicolon Insertion
  • Don't worry about using "class"es or (other) prototype-inheritance

The bottleneck of the code is not going to be in the syntax one uses.

Eugene Karataev

Agree!

Alex Regan

Performance posts help the community the most when they demonstrate good benchmarking rigor. Benchmark.js and jsPerf are two benchmarking tools with a reputation for proper testing. Even so, it's best not to jump to conclusions, as evergreen browsers continue to evolve their JS runtimes and optimize for the code seen in the wild. So I agree with your takeaway!
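
For anyone who wants that rigor, a minimal Benchmark.js suite looks roughly like this (the test cases are illustrative, not the article's exact code; assumes `npm install benchmark`):

```js
const Benchmark = require('benchmark');

new Benchmark.Suite()
  .add('traditional function', function () {
    [1, 2, 3].map(function (n) { return n * 2; });
  })
  .add('arrow, implicit return', function () {
    [1, 2, 3].map(n => n * 2);
  })
  .on('cycle', function (event) {
    // Prints e.g. "arrow, implicit return x 12,345,678 ops/sec ±0.50%"
    console.log(String(event.target));
  })
  .on('complete', function () {
    console.log('Fastest is ' + this.filter('fastest').map('name'));
  })
  .run({ async: true });
```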

Champi

Man, you almost gave me a heart attack with that title 😅

George

Sorry, gotta get the clickbait in 😅, need my internet points!

Florian Rand

Can I steal this answer, please?

Adam Crockett 🌀 • Edited

Okay, if you run this console.time test 5 times, 30 times, 100 times, you won't get consistent results.
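
A minimal sketch of what I mean: time the same work a few times and watch the numbers disagree.

```js
function work() {
  let sum = 0;
  for (let i = 0; i < 1e6; i++) sum += i;
  return sum;
}

// Five runs of identical work rarely produce identical timings.
for (let run = 1; run <= 5; run++) {
  console.time(`run ${run}`);
  work();
  console.timeEnd(`run ${run}`);
}
// Timings typically differ run to run (JIT warm-up, GC pauses, other tabs).
```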

I don't care what console.time or jsPerf say. Does your program feel faster or slower now that you use arrows?

I thought not. User-perceived speed is all that matters.

If you have some high-performance, critical needs, then JavaScript is not a good choice anyway. C++ or LuaJIT would be a better pick.

Edit: (I was so very drunk when I wrote this)

Olivier Rousseau

This is why micro-benchmarking is dangerous.

If we want to talk about real production runtime cost, it depends on your compile target:

Yes, arrow functions get optimized when the target is the latest browser the dev is probably using, but once you support older browsers (consoles, old Apple products, smart TVs, embedded systems, etc.), arrow functions are not necessarily supported. In those cases they get compiled down to standard functions (sketched after this list), and that's bad for two reasons:

  1. the compiler doesn't know whether the intention is just the sugar syntax or to bind the function's context to the parent scope, so it takes no chances and always binds, creating a triple scope in memory and boilerplate code for each function that exists;

  2. that boilerplate prevents some JIT optimizations done by the browser that speed up JavaScript execution.
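
For illustration, here is roughly what such a compiler emits; a sketch approximating Babel-style output, not the exact code:

```js
// Source: an arrow function used for its lexical `this`.
const obj = {
  value: 42,
  deferred() {
    setTimeout(() => console.log(this.value), 0);
  },
};

// Compiled for older targets, the output is approximately:
var objCompiled = {
  value: 42,
  deferred: function deferred() {
    var _this = this; // the binding the compiler must always emit
    setTimeout(function () {
      console.log(_this.value);
    }, 0);
  },
};
```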

Always measure, but measure your actual use case, not one you can code up in a minute on the side of the project.

AlexisFinn • Edited

First of all, hello everyone; I've come to rant a little bit about these results.
I'm not going to say this is totally useless, but it is a bit.

Performance has always been somewhat of a concern with JavaScript (notwithstanding Node.js), mainly because the code is interpreted/compiled/run on the client side, which means you don't have control over the environment.

So performance can vary greatly depending on the OS, the browser (and its version), the system specs, what other applications are running, what extensions are installed, etc.
So if you're not planning on running your script server-side, benchmarks either have to encompass an array of usual-suspect environments or are pretty much useless. (How many times have you coded something really cool and efficient in JS, only to have the client tell you it doesn't work in IE 11, or Safari, or Firefox Enterprise Edition, or that the antivirus/proxy/whatnot blocked your code, or that actually it was just their slow internet connection?)

So the fact that everyone seems to be getting different results, even over multiple runs of the same test in the same environment, is exactly what you should expect. It's pretty much impossible to get consistent results, and even if you did configure your environment to obtain consistent results, that would in no way reflect real-life situations and would be completely moot.

Imagine running stress tests on your Debian 9 + Apache 2.4 + 8 GB RAM + Core i5 box and then installing your application on a Windows Server + Nginx + 6 GB RAM + Ryzen 3 box.
No one in their right mind would give any significance to those test runs, because the server is completely different, yet here we are doing the same thing with JS scripts.

So anyway, sorry for the ranting, but it is really important to realize this simple fact when working with frontend JavaScript: you don't know the environment, because everything is done client-side.

Cugel • Edited

Now you're simply making up issues. The variance in platform is exactly the same as for any other "programming language benchmark" on the internet -- and it doesn't matter there either. Language performance always varies depending on the interpreter/compiler version, the underlying platform (operating system, web browser, etc.), and the hardware. Nothing in this made-up mess is unique or limited to JavaScript.

The only real problem with the original "benchmark" is that it is simply not a benchmark. Nobody in their right mind would consider "tests" that execute in less than one millisecond to yield accurate results. The author should have realized how dumb it is to do one run of each and draw conclusions from that. Usually benchmarks execute a single test a thousand or ten thousand times and take the peak values as well as averages.

A single run makes no sense anyway, considering that all browsers employ a JIT engine, which keeps improving performance over multiple runs -- the first run is always going to yield the worst result.
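
Something along these lines; a minimal sketch of the repeated-run approach described above, with illustrative names:

```js
// Warm up the JIT first, then time many runs and report the
// average and the best (peak) result rather than a single run.
function bench(label, fn, runs = 1000) {
  for (let i = 0; i < 100; i++) fn(); // warm-up so the JIT can optimize

  const times = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    fn();
    times.push(performance.now() - start);
  }

  const avg = times.reduce((a, b) => a + b, 0) / times.length;
  const best = Math.min(...times);
  console.log(`${label}: avg ${avg.toFixed(4)} ms, best ${best.toFixed(4)} ms`);
}

bench('arrow, implicit return', () => [1, 2, 3].map(n => n * 2));
bench('traditional function', () => [1, 2, 3].map(function (n) { return n * 2; }));
```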

Jay Lin

So what’s your thousand-run result?

Tobias SN • Edited

I think you should share the exact code used to test this, so people are able to confirm it.

Periklis Gkolias

I believe dev speed matters way more than a few milliseconds. This was a nice experiment to conduct, but I don't think anyone would abandon the ease of use of arrow functions :)

Jay Lin

The key differences between arrow functions and traditional functions might be the support of arguments, this, etc.
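
For example; a quick sketch of those differences, with illustrative names:

```js
// Arrow functions have no `this` of their own: they inherit it from the
// enclosing scope, which is usually what you want in callbacks.
const counter = {
  count: 0,
  incrementLater() {
    setTimeout(() => { this.count += 1; }, 0); // `this` is still `counter`
  },
  incrementLaterBroken() {
    setTimeout(function () {
      this.count += 1; // own `this` here is NOT `counter`, so the update is lost
    }, 0);
  },
};

// Arrow functions also lack their own `arguments` object.
function regular() { return arguments.length; } // regular(1, 2) -> 2
const arrow = (...args) => args.length;         // use rest parameters instead
```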