DEV Community

Giles Dring

Optimising JavaScript function performance

I recently solved a coding problem that had been near the bottom of my to-do list for ages. I even managed to shoehorn a recursive function into the solution. Yay, computer science! I walked away feeling like it was a job well done... but something didn't feel right. I suspected that my excitement at using a cool technique had resulted in a less efficient algorithm. I'd need to run some experiments to see if my suspicion was correct.

As I was using Deno, I decided to use the built-in benchmarker tool, and set to work coding up my test suite.

The reference function

The recursive function I wrote resolves a dotted reference and returns the value at that location in a nested object tree, in exactly the manner of the Lodash get method. I wanted to reduce my dependencies on external packages, so decided against just importing it, and it turned out to be fairly trivial to write:

export function arrayWise(ref, context) {
  const [key, ...rest] = ref.split(".");
  if (rest.length) {
    return arrayWise(rest.join("."), context[key]);
  }
  return context[key];
}

So far, so good. Grab the reference, split to an array on the . character, check if there are still levels to descend and if so, descend into them, passing the shortened key and resetting the context for the lower level. If not, return the looked up value from the context. This return would bubble up through the call stack and be presented as the result of the original call.
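To make that concrete, here's the function in action on a small nested object (the sample object is mine, not from the benchmarks):

```typescript
// arrayWise as defined above.
function arrayWise(ref: string, context: any): any {
  const [key, ...rest] = ref.split(".");
  if (rest.length) {
    return arrayWise(rest.join("."), context[key]);
  }
  return context[key];
}

const sample = { look: { me: { up: "found it" } } };

console.log(arrayWise("look.me.up", sample)); // "found it"
console.log(arrayWise("look.me", sample));    // { up: "found it" }
```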

First alternative

That repeated splitting and joining of the ref variable bothered me. How efficient is that as an operation?

With some light tinkering, I came up with this alternative, which was a little more surgical in splitting the string:

export function stringWise(ref, context) {
  const cutMark = ref.indexOf(".");
  if (cutMark === -1) return context[ref];
  const key = ref.slice(0, cutMark);
  const rest = ref.slice(cutMark + 1);
  return stringWise(rest, context[key]);
}

This time, we find the location of the first . in the string. If there is none, we just return the value referenced from the context object. Otherwise, we determine the key and rest using the slice method, then return the result of calling the next layer down.
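A worked trace makes the cutting mechanics clearer. Here is how the first call carves up the same "look.me.up" key used in the benchmarks:

```typescript
// Trace of the first stringWise call on "look.me.up".
const ref = "look.me.up";
const cutMark = ref.indexOf(".");    // 4: index of the first "."
const key = ref.slice(0, cutMark);   // "look": everything before the dot
const rest = ref.slice(cutMark + 1); // "me.up": everything after it

console.log(cutMark, key, rest);
```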

Writing the benchmarks

Let's see how this stacks up in terms of performance.

Writing the benchmarks is a breeze! I've omitted the imports and setting up a test context object for brevity. Note that the arrayWise function is set up as a baseline case.
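For completeness, the omitted setup might look something like this. The context shape and module path are my assumptions; the post doesn't show the real ones:

```typescript
// Hypothetical: the functions under test would be imported from their module,
// e.g. import { arrayWise, stringWise } from "./lookup.ts";

// Any object deep enough to satisfy the "look.me.up" path will do.
const context = { look: { me: { up: "value" } } };
```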

Deno.bench({
  name: "arrayWise",
  baseline: true,
  fn: () => { arrayWise("look.me.up", context) }
});

Deno.bench({
  name: "stringWise",
  fn: () => { stringWise("look.me.up", context) }
});

This is saved in a file called lookup.bench.ts. When you run deno bench, the terminal churns away for a few moments, then produces output like this:

[Screenshot: deno bench results table, showing arrayWise is 3.34 times slower than stringWise, with average, minimum, maximum and percentile measurements]

My instinct was correct. I'd managed a threefold increase in the speed of my algorithm. That was definitely a "first, worst" attempt.

A second alternative

At this point, I began feeling uneasy about the recursion. Calling a whole new function felt like a fairly heavy thing to do. Would it be faster to write this as a loop? Back to the code editor and I came up with this:

export function nonRecursive(ref, context) {
  let result = context;
  for (const key of ref.split(".")) {
    result = result[key];
  }
  return result;
}

This time, I'm looping over the keys in the reference within this function, reassigning the result each time. By the time I'm at the end of the loop, I've got my answer, so I return it.
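All three implementations should agree on the same inputs. A quick sanity check, using a sample object of my own rather than the benchmark context:

```typescript
// nonRecursive as defined above.
function nonRecursive(ref: string, context: any): any {
  let result = context;
  for (const key of ref.split(".")) {
    result = result[key];
  }
  return result;
}

const sample = { a: { b: { c: 3 } } };
console.log(nonRecursive("a.b.c", sample)); // 3
```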

I added the new benchmark into the file, and ran deno bench again. Here are the results this time:

[Screenshot: deno bench results table, showing arrayWise is 3.13 times slower than stringWise, but 5.28 times slower than nonRecursive, with average, minimum, maximum and percentile measurements]

Another speedup: nearly double the speed of the first optimisation. It turns out that recursion, whilst a cool technique, is sometimes overkill, and is generally best avoided for linear processing like this example.

A final alternative

I thought I'd see what happened when I placed my typical (very terse) JavaScript writing style on the bench.

export const extremeGilesMode = (ref, context) =>
  ref.split(".")
     .reduce(
       (result, key) => result[key],
       context
     );

A one-liner! I'll be the first to admit that this is much less readable than the previous example. It splits the ref string into constituent parts, and then calls the reduce array method on the resulting array. This iterates over the array, looking up the key property on the previous result each time. The second argument to reduce sets the initial value of the result; in this case, we set it to context. This is pretty much the same technique as the nonRecursive function.
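As a quick check that the one-liner behaves the same as the others (the sample object here is my own):

```typescript
// extremeGilesMode as defined above.
const extremeGilesMode = (ref: string, context: any): any =>
  ref.split(".").reduce((result, key) => result[key], context);

console.log(extremeGilesMode("look.me.up", { look: { me: { up: true } } })); // true
```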

It's terser, but is it any faster?

[Screenshot: deno bench results table, showing arrayWise is 3.33 times slower than stringWise, but 5.5 times slower than nonRecursive and 5.38 times slower than extremeGilesMode, with average, minimum, maximum and percentile measurements]

Not really faster, and looks like it might be slightly slower. The nonRecursive implementation is the fastest.

What did I learn?

This was an informative experiment to run. In this case, it would probably have been fine to leave the initial implementation, given the savings per iteration were measured in nanoseconds. Mind you, had the function been called many times, the difference could have added up to a significant slowdown.

It's also worth noting that each run gives a slightly different time per iteration, so be careful about citing definitive numbers. When analysing performance, it's worth looking at the percentile figures (p75, p99 and p995) to check for variability.

Overall, I was really impressed with how easy it was to set up benchmarks in Deno. There are some limitations for those who are used to benchmarking, but as a starter set, it's pretty powerful.

I've bundled up the examples in a GitHub repo. Let me know via the comments what performance improvements you've made to your code.
