
Mark Michon for Bearer


Measuring Performance in Node.js with Performance Hooks

📣 This post originally appeared as Measuring Performance in Node.js with Performance Hooks on The Bearer Blog.

Measuring performance in Node.js applications can sometimes be a challenge. Due to the nature of the event loop and asynchronous code, determining the actual time a piece of code takes to execute requires tools built into the platform. First added in Node.js v8.5 and marked stable as of v12, the Performance Measurement APIs allow much more accurate monitoring than earlier approaches. In this article we'll look at the basics of Node.js performance hooks and how to use them to time the execution of functions.

Why are Performance Measurement APIs important?

Sometimes called Perf Hooks, in part because they are imported from perf_hooks in Node applications, these APIs allow developers to set various markers that make measuring the run-time of an application easier. Node's implementation is an adaptation of the W3C's Web Performance APIs, but with changes that make more sense for Node.js apps than for JavaScript in the browser.

With these APIs, you can measure the time it takes individual dependencies to load, how long your app takes to initially start, and even how long individual web service API calls take. This allows you to make more informed decisions about the efficiency of specific algorithms and the effects of API choices on application performance, and to establish baselines for "normal" operation that help identify anomalies when they happen.
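
For example, Node exposes its own startup milestones directly on performance.nodeTiming. Here's a minimal sketch of reading them (the properties are millisecond timestamps on the same timeline as performance.now(); logging inside setImmediate just ensures the event loop has started so all values are populated):

const { performance } = require("perf_hooks")

// Log once the event loop is running so every milestone is populated
setImmediate(() => {
  const timing = performance.nodeTiming
  console.log(timing.nodeStart)         // when the Node.js process was initialized
  console.log(timing.bootstrapComplete) // when Node.js finished bootstrapping
  console.log(timing.loopStart)         // when the event loop started
})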

In the past, this may have been done using Date.now() and some basic operations to find the duration. There are some flaws in this technique, as occasionally you can end up with a zero value or negative numbers. A slightly more accurate approach is to use process.hrtime(), but it still has limitations and needs to be set up manually everywhere you intend to use it.
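
For comparison, here is a rough sketch of both older approaches (the empty loop is just a placeholder for whatever work you're timing):

// Old approach: subtract two Date.now() timestamps (millisecond resolution only)
const start = Date.now()
for (let i = 0; i < 1e7; i++) {} // placeholder for the work being timed
console.log(`Took ${Date.now() - start}ms`)

// Slightly better: process.hrtime.bigint() has nanosecond resolution,
// but you still have to wire up the start/end capture by hand everywhere
const hrStart = process.hrtime.bigint()
for (let i = 0; i < 1e7; i++) {}
console.log(`Took ${(process.hrtime.bigint() - hrStart) / 1000000n}ms`)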

To better understand how these newer APIs work, let's look at an example.

Using perf hooks

Imagine we have an asynchronous function called someAction, and we want to know how long it takes to run.

await someAction()

To track its performance, we first need to:

  1. Import the perf_hooks module from Node.js
  2. Establish an observer to watch for performance events
  3. Initialize the observer
  4. Mark the appropriate areas with start/stop markers, and measure the difference.

Let's start by importing the module and setting up the observer—steps 1-3.

const { performance, PerformanceObserver } = require("perf_hooks")

const perfObserver = new PerformanceObserver((items) => {
  items.getEntries().forEach((entry) => {
    console.log(entry)
  })
})

perfObserver.observe({ entryTypes: ["measure"], buffered: true })

The observer code can look intimidating at first if you haven't used a similar API (like IntersectionObserver, for instance). In the code above we establish a new PerformanceObserver and pass it a callback function. Each time one of our performance events fires (more on that shortly), the entry is added to a list of performance entries (items). items.getEntries() converts that list into an array we can iterate over, which we then loop over with forEach so that each individual entry is available as the entry argument of the callback.

Finally, perfObserver.observe tells our new observer what to look for and how to act. In this case, we want to watch for measure events (more on this shortly), and we set buffered to true. This buffered setting just means that the observer will wait until all events are finished before running the PerformanceObserver callback. If it were set to false, items would always have a single entry and the callback would run every time a matching entryType occurred.

This boilerplate handles our setup, so let's actually measure the example function we spoke of earlier.

// ...setup code from previous example

performance.mark("example-start")
await someAction()
performance.mark("example-end")

performance.measure("example", "example-start", "example-end")

In the code above, we are using performance.mark and performance.measure. The mark method places a performance timestamp in our code. The name can be anything, but using some form of start/end suffix can help avoid user error. The measure method takes three arguments: a label for the measurement, the name of the starting mark, and the name of the ending mark.

With this done, our observer from earlier will pick up the measure type, add it to the callback argument, and when our code finishes we'll see the entry logged to the console. It will look something like this:

PerformanceEntry {
    name: "example",
    entryType: "measure",
    startTime: 3869.689664,
    duration: 122.123131
}

The startTime can be useful for organizing the data in logs, but we mostly care about duration as it indicates how long the operation took.
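
If the duration is all you care about, the observer callback can log just the name and duration instead of the whole entry. A small variation on the setup above:

const { PerformanceObserver } = require("perf_hooks")

const perfObserver = new PerformanceObserver((items) => {
  items.getEntries().forEach((entry) => {
    // e.g. "example: 122.12ms"
    console.log(`${entry.name}: ${entry.duration.toFixed(2)}ms`)
  })
})

perfObserver.observe({ entryTypes: ["measure"], buffered: true })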

Monitoring function performance

In our last example, we set markers for the start and end of a code snippet. We did this in part because our function was asynchronous, and we were using the await keyword. If you're measuring the performance of a synchronous function, there is a helper available to handle the markers for you. Instead of setting a start and end with performance.mark, you can wrap the function in performance.timerify and change the observe code to watch for function entries.

// ...perfObserver = new PerformanceObserver...
// ...

perfObserver.observe({ entryTypes: ["function"] })

const perfWrapper = performance.timerify(syncFunction)

perfWrapper()

By changing the entryTypes to function and wrapping the synchronous function in timerify, we can avoid the need to set start and end marks.
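
Putting the pieces together, here's a minimal, self-contained sketch. The body of syncFunction is just a stand-in for whatever synchronous work you want to time:

const { performance, PerformanceObserver } = require("perf_hooks")

// Stand-in for the synchronous work being measured
function syncFunction() {
  for (let i = 0; i < 1e7; i++) {}
}

const perfObserver = new PerformanceObserver((items) => {
  items.getEntries().forEach((entry) => {
    // entry.name is the wrapped function's name, entryType is "function"
    console.log(entry)
  })
})

perfObserver.observe({ entryTypes: ["function"] })

const perfWrapper = performance.timerify(syncFunction)
perfWrapper()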

Practical application for API calls

Let's imagine a scenario where we have calls to third-party APIs and we want to keep track of how long each call takes. We can use this data to create a baseline, track performance, etc.

Using the technique from our first example, we can start logging the performance. Here's what that looks like with the full setup code, and the addition of an external call using the Axios library.

const { performance, PerformanceObserver } = require("perf_hooks")
const axios = require('axios')
const customLogger = require('our-custom-logging-solution')

const perfObserver = new PerformanceObserver((items) => {
  items.getEntries().forEach((entry) => {
    customLogger(entry) // fake call to our custom logging solution
  })
})

perfObserver.observe({ entryTypes: ["measure"], buffered: true })


try {
    performance.mark('swapi-start')
    await axios.get('https://swapi.dev/api/people/1/')
} catch(err) {
    console.error(err)
} finally {
    performance.mark('swapi-end')
    performance.measure('https://swapi.dev/api/people/1/', 'swapi-start', 'swapi-end')
}

In a scenario like this, we can imagine a codebase that manipulates and formats the performance data, and sends it off in batches to a monitoring tool or logging solution. Then, over time, it establishes the normal behavior of an API in order to detect anomalies when they occur, kind of like what Bearer does.
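
As a loose sketch of that idea (sendBatch and the 30-second flush interval are hypothetical placeholders, not part of any real SDK):

const { PerformanceObserver } = require("perf_hooks")

const batch = []

const perfObserver = new PerformanceObserver((items) => {
  items.getEntries().forEach((entry) => {
    batch.push({ name: entry.name, duration: entry.duration, startTime: entry.startTime })
  })
})

perfObserver.observe({ entryTypes: ["measure"], buffered: true })

// Hypothetical: flush collected measurements to a monitoring/logging service every 30 seconds
setInterval(() => {
  if (batch.length > 0) {
    sendBatch(batch.splice(0, batch.length)) // sendBatch is a placeholder for your own client
  }
}, 30000)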

Using performance measurement to your advantage

Measuring and parsing the individual performance of all API calls can be cumbersome. That's part of the reason we built a tool at Bearer to monitor APIs using techniques similar to those mentioned here, along with the extra niceties of a full SaaS solution.

A similar addition to Node.js to keep an eye on is the Async Hooks API. It's still experimental, but it can allow you to apply the features of the Performance Measurement APIs to asynchronous functions more easily. There's still room for improvement in our examples above and in similar methods, and we also need to take into account any lag or pauses in the event loop.
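
As a small preview, perf_hooks already ships a histogram-based helper for measuring event loop delay. A minimal sketch, assuming Node.js 11.10 or later:

const { monitorEventLoopDelay } = require("perf_hooks")

const histogram = monitorEventLoopDelay({ resolution: 20 }) // sample every 20ms
histogram.enable()

// ...let the application run for a while, then inspect the numbers...
setTimeout(() => {
  histogram.disable()
  // all values are reported in nanoseconds
  console.log(histogram.mean, histogram.max, histogram.percentile(99))
}, 10000)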

We'll dive into async hooks and ways to use the performance measurement APIs to monitor event loop lag in the future, so subscribe for updates on The Bearer Blog.
