Yaser Adel Mehraban

Originally published at yashints.dev

Write faster JavaScript

Most of the time, we write code that has been copy-pasted from all over the internet. StackOverflow is the main source these days for finding solutions to all sorts of problems. But is it OK to blindly copy and paste code without really knowing what's happening behind the scenes?

A bit of context

Don't get me wrong when I say StackOverflow should not be used blindly. It's a great source of information for most of the day-to-day issues and bugs developers face all around the world. It's just that we should be a bit more proactive and pick the best of all the available options out there.

Let me show you some examples where a piece of code can be written in multiple ways, and the most obvious choice is not necessarily the best one.

Chaining array loops

Let's assume we have an array with 200,000 objects which include name and age properties. We want the names of all people under the age of 22 (assume 100,000 of them qualify). This is the solution most people might use:

const under22People = originalList
  .filter(x => x.age < 22)
  .map(x => x.name)

Since ES5 and ES6 were introduced, a nice set of array methods has emerged. ES6's cleaner syntax makes it painless to chain methods to produce our desired result. The problem lies in how we use these nice methods without realising how they are executed.

At first glance, this looks like really nice code, but if we look closer, we have a loop which runs 200,000 times when doing the filtering, and another which runs 100,000 times when selecting the name.

We really only need to loop through these items once. So let's have a look at how we can rewrite this code:

const under22People = []
originalList.forEach(({ age, name }) => {
  age < 22 && under22People.push(name)
})

Now this code runs only 200,000 times. It doesn't look as nice as the chained methods, but it sure has much better performance.

You can even use the reduce method to do the same:

const under22People = originalList.reduce(
  (acc, { age, name }) => {
    return age < 22 ? [...acc, name] : acc
  },
  []
)

It sure looks much less readable, but it does the exact same job for us.

Abusing arrow functions

One of the good questions you could ask yourself when writing JavaScript is whether you know the difference between a traditional function and an arrow function (aka fat arrow function).

From MDN Web Docs:

An arrow function expression is a syntactically compact alternative to a regular function expression, although without its own bindings to the this, arguments, super, or new.target keywords. Arrow function expressions are ill suited as methods, and they cannot be used as constructors.

Before you ask, no, they are not the JavaScript equivalent of anonymous functions in languages like C#. But for now the only thing you need to care about is that they don't have their own this binding.
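To illustrate that difference, here's a minimal sketch (not from the original article): the regular method gets its this from the call site, while the arrow function captures whatever this was in the scope where it was defined.

const person = {
  name: 'Yaser',
  regularHello() {
    // `this` is the object the method was called on
    return `Hello ${this.name}`
  },
  arrowHello: () => {
    // `this` comes from the enclosing scope, not from `person`
    return `Hello ${this && this.name}`
  }
}

console.log(person.regularHello()) // "Hello Yaser"
console.log(person.arrowHello())   // "Hello undefined" (in a module, where top-level `this` is undefined)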

One of their many use cases is in class fields. Consider that in React you no longer need to manually bind your functions like:

this.handleClick = this.handleClick.bind(this)

Instead you will write your class like:

class MyComponent extends Component {
  handleClick = () => {
    // ...
  }

  render() {
    // ...
  }
}

As you might know, regular methods are defined on the prototype and shared across all instances. If we have a list of N components, these components will share the same method. So, if our components get clicked we still call our method N times, but every call goes through the same prototype method. Since we're calling the same method repeatedly through the prototype, the JavaScript engine can optimise it.

On the other hand, with arrow functions in class properties, if we create N components, these N components will also create N functions. You can see by looking at the transpiled version that class properties are initialised in the constructor, which means that if we click on N components, N different functions will be called.
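To make that concrete, here's a rough sketch of what a typical class-properties transform produces (simplified and assumed, not the exact Babel output):

import { Component } from 'react'

class MyComponent extends Component {
  constructor(props) {
    super(props)
    // The class property ends up here, so every `new MyComponent()`
    // allocates a brand new handleClick function
    this.handleClick = () => {
      // ...
    }
  }

  // A regular method lives on MyComponent.prototype and is shared
  // by every instance
  render() {
    // ...
  }
}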

So consider this approach next time you are writing your new shiny React component.

Nested functions VS IIFEs

Nesting a function inside another function seems like a good idea to isolate some logic from the outside world. Consider the code below:

function doSomething(arg1, arg2) {
  function nestedHelper(arg) {
    return process(arg)
  }

  return nestedHelper(arg1) + nestedHelper(arg2)
}

The problem with the above code is that every time you call doSomething, nestedHelper is recreated. To prevent that, you can use an IIFE (Immediately Invoked Function Expression):

const result = (function() {
  function privateHelper(arg) {
    var result = process(arg)
    return result
  }

  return function(arg1, arg2) {
    return (
      privateHelper(arg1) + privateHelper(arg2)
    )
  }
})()

When this code gets executed, the nested method will be created only once 🤷‍♂️.

Use Sets over Arrays where possible

The most obvious difference between an array and a Set is that an array is an indexed collection, whereas a Set is key-based.

So why should you be using a Set?

Searching for an Item

Using indexOf() or includes() to check whether an item exists in an array is slow, since both scan the array linearly. In a Set you can find an item really easily using has():

const mySet = new Set([1, 1, 2])

console.log(mySet.has(2)) // true
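For contrast, a small sketch of the array equivalents, both of which check element by element:

const myArr = [1, 1, 2]

console.log(myArr.includes(2)) // true, found by a linear scan
console.log(myArr.indexOf(2))  // 2, also a linear scan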

Deleting an Item

In a Set, you can delete an item by its value. In an array, the equivalent is using splice() based on an element’s index. As in the previous point, depending on indices is slow.

const mySet = new Set([1, 2, 3, 4, 5])

mySet.delete(1)
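For comparison, a small sketch of the array equivalent, where we first have to find the index before we can splice:

const myArr = [1, 2, 3, 4, 5]

const index = myArr.indexOf(1) // linear search for the value
if (index !== -1) {
  myArr.splice(index, 1) // the remaining elements get re-indexed
}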

Insert an Item

It is much faster to add an item to a Set than to an array using push() or unshift().

const mySet = new Set([1, 2])

mySet.add(3) // Successfully added

Storing NaN

You cannot use indexOf() to find the value NaN in an array, whereas a Set can both store NaN and find it with has().
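A quick sketch of that point:

const arr = [NaN]
console.log(arr.indexOf(NaN)) // -1, indexOf relies on strict equality and NaN !== NaN

const set = new Set([NaN])
console.log(set.has(NaN)) // true, Set uses the SameValueZero comparison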

Removing Duplicates

Set objects only store unique values. If you want to avoid storing duplicates, this is a significant advantage over arrays, where additional code would be required to deal with duplicates.

const mySet = new Set([1, 1, 2])

mySet.add(3) // Successfully added

console.log([...mySet.values()]) // [1, 2, 3]
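This is also the usual trick for de-duplicating an existing array, a small sketch:

const withDuplicates = [1, 1, 2, 3, 3]
const unique = [...new Set(withDuplicates)]

console.log(unique) // [1, 2, 3]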

Summary

There are many more examples like these where you might want to be careful when writing code in a real-world business application. Speed is one of the most important parts of every web application, and considering points like the above will improve your code's performance, which can result in happier users 😊.

Hope this has helped you start thinking about what other ways there are to solve the same issue, rather than going with the first solution you find on the web.

Top comments (17)

Eugene Karataev

I agree that performance matters. But premature optimization is the root of all evil. In most cases code readability is more important than optimization tricks. Do optimizations when you really need them.

Speaking of your first example, I think the most performant and at the same time readable way to solve the problem would be the good old for loop:

const result = [];
for (let i = originalList.length - 1; i >= 0; i--) {
  let item = originalList[i];
  if (item.age < 22) result.push(item.name);
}

Readable and performant.

It is much faster to add an item to a Set than to an array using push() or unshift().

pushing an item to an array is fast, its complexity is O(1). Prepending an element with unshift is indeed slow (O(n)), because it requires reassigning all indexes in the array.

You cannot use indexOf() or includes() to find the value NaN, while a Set is able to store this value.

let arr = [1, NaN];
console.log(arr.includes(NaN)); // true
Yaser Adel Mehraban

Thanks for taking the time and pointing these out.
100% agree with your point on readability. But your example is another indicator that there are many ways to solve a problem and we should choose one which helps performance.

Will update the includes part 👌

Eugene Karataev

You say that it's good for performance to reuse functions (the React classes and nested functions examples). But at the beginning of the article, when optimising the chained functions, you keep using arrow functions in the forEach and reduce methods.

My example with the for loop is faster than the forEach or reduce examples.

I just don't feel consistency between your examples. Also, if the focus of your article is performance, then please show the most performant way (without sacrificing readability) instead of a half-performant one.

worc • Edited

i'd be cautious trying to optimize when the benefit is so small like with removing the array chain. i tried the use-case and the .filter().map() version runs in under 10ms while the .forEach() version runs in about 5ms. it's pretty hard to justify that kind of optimization if the user won't notice it, and it's costing you 30000ms or 60000ms every time a developer has to wonder why you did something non-standard in the code base.

Yaser Adel Mehraban

This example is not a real-world example; in a business application there are many of these scenarios and they add up. If you can save even half a second on an interaction, it's worth it. And last, if you become aware of other ways to solve a problem faster, why not?

worc

the why not is easy, cpu time is cheap, developer time isn't. if the optimizations are bad for the readability of the source code and imperceptible to end users, you're taking on bad tech debt. the reverse of that is true though too, don't get me wrong. "good" tech debt would mean taking on slightly more obscure code so that you can bring noticeably faster response times directly to the user.

for the real world use, yeah, you definitely can find a lot of inefficiencies that add up to a poorly performing app, but you have a pretty generous time budget before users are going to disengage with the application. if you're doing something like the example (simple sorting, filtering, restructuring on large datasets), you generally have anywhere from 1,000ms to 10,000ms before a user is going to think there's a serious problem here.

in other words you'd have to string 200 plus bad chains together before you finally reached a point where users are noticing and reacting negatively.

there's also the case we haven't even considered where usually when someone is filtering then mapping it's because filtering is fast and cheap, while a much heavier lift is in the map. in those kinds of scenarios your advice would actually end up costing more time—both on the wall and on the cpu.

Yaser Adel Mehraban

I think we're discussing the same thing from two different angles, but let's just say if developers take a bit of time upskilling and finding new ways to solve problems more efficiently, you'll have a performant app and save time finding where to optimise in a large code base.

Back to your point, a developer spending a bit of time reading will help much more than simply coding and waiting for a bug or a ticket to be raised later on performance.

Last, you're just taking one of the points and generalising it; perf tuning is a broad area, and this is one point of thousands.

worc

yeah i guess i fall pretty heavily on the side of legibility before everything else. especially in a high-level language like javascript. most of the time, in most cases, you're just not going to need to exploit the weirdness of the language itself for performance tuning. usually what happens is a hot spot is identified some time down the line when something about the app scales.

and in those cases i would rather a past developer had put their time and energy into making sure their intent is clear before getting clever.

Kuba

I'd include some benchmarks, especially for the difference in perf when using Set and Array methods.

Other than that, great article.

Yaser Adel Mehraban

Not a bad idea, will do it

jmc • Edited

I like that these tips tend more toward use of the right data structures and algorithms (using the terms loosely -- your use of reducer functions, closures, and sets) instead of the usual "replace each with for i" advice.

Often folks who reach for the latter have missed something more fundamental (using your 200k array example -- it could be something like pagination, or caching intermediate values).

FWIW, the places I've had to optimize the most frequently were db queries. On the front end, mostly rendering of large DOM trees and memory usage. ¯\_(ツ)_/¯

(And I agree with the folks cautioning against premature optimization.)

Javier Aguirre

Thank you for the article, insightful. :-)

I didn't quite get your explanation about arrow functions, although your point is don't use them inside a class because they have no scope of their own and will be recreated every time, right?

Thank you!

Yaser Adel Mehraban

Apart from recreation of the function itself it has other drawbacks too.

Let me give you two examples, first let's say you have a class with an arrow function and when testing you want to mock it:

class A {
  static color = "red";
  counter = 0;

  handleClick = () => {
    this.counter++;
  }

  handleLongClick() {
    this.counter++;
  }
}

Usually the easiest and proper way to do so is via the prototype, as all changes to the prototype object are seen by all instances through prototype chaining.

But in this instance:

A.prototype.handleLongClick is defined.

A.prototype.handleClick is not a function.

Same happens with inheritance:

class B extends A {
  handleClick = () => {
    super.handleClick();

    console.log("B.handleClick");
  }

  handleLongClick() {
    super.handleLongClick();

    console.log("B.handleLongClick");
  }
}

Then:

new B().handleClick();
// Uncaught TypeError: (intermediate value).handleClick is not a function

new B().handleLongClick();
// calls A.prototype.handleLongClick, then logs "B.handleLongClick"
ImTheDeveloper

I have a side project in node.js which has organically grown over time to become quite large. It essentially works on a middleware pattern checking inbound messages and reacting to them. I have many middlewares that a message can pass through and I'm at the point now where I need to start optimising the order, pinpoint slow code and overall improve efficiency.

At the moment, since I'm quite a noob when it comes to performance measurement, are there any recommendations on how I can profile the speed and pinpoint the areas that bog down my performance? Right now I'm reduced to using console.log with timestamps, which obviously gets you only so far.

Yaser Adel Mehraban

I highly recommend reading this article; also check out the Performance API.
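If it helps, here's a minimal sketch of timing a step with Node's perf_hooks (the middleware name and function are hypothetical, just to show the shape):

const { performance, PerformanceObserver } = require('perf_hooks')

// Print each measurement as it is recorded
const obs = new PerformanceObserver(list => {
  list.getEntries().forEach(entry => {
    console.log(`${entry.name}: ${entry.duration.toFixed(2)}ms`)
  })
})
obs.observe({ entryTypes: ['measure'] })

async function runTimed(name, middleware, message) {
  performance.mark(`${name}-start`)
  const result = await middleware(message)
  performance.mark(`${name}-end`)
  performance.measure(name, `${name}-start`, `${name}-end`)
  return result
}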

Duncan Murray

Thanks for the article - I'm learning JS at the moment and this was interesting.

Cheers

DamirTomic

You didn't say how much faster it is. Is it 1%, 10%, 30%?