Being a Performance Engineer on the Node.js Platform Team at Netflix means I often have to troubleshoot performance issues in applications my team doesn't own. It's not feasible to expect every application to ship instrumentation code that lets it run V8 performance analysis tools (such as CPU Profiles, Heap Profiles, and Heap Snapshots), so I wrote a CLI that lets users run those performance analysis tools without making any changes to their code: @mmarchini/observe. I realized I never wrote anything about this CLI, so this is a lil blog post about it :)
As mentioned before, sometimes you need to troubleshoot an application that is already running. The Inspector Protocol allows you to call V8 troubleshooting tools without making any changes to your application. It's the same protocol used by Chrome DevTools, and it exposes similar troubleshooting features as the ones available on Chrome DevTools. It is, therefore, a powerful protocol.
Unfortunately, using the protocol by itself can be overwhelming for new users. Existing tools that use it are generally aimed at development environments, with either GUIs or REPLs as their only interfaces, which prevents automation.
There are libraries that make the Inspector Protocol more straightforward to use, but most of the ones I found had to be loaded within the app, requiring a code change and a redeploy, which takes time and hinders performance investigations for time-sensitive issues.
I also wanted something with as few direct and transitive dependencies as possible. Most packages I found had more dependencies than I wanted.
The closest thing I found to what I wanted was chrome-remote-interface, a great package with few dependencies and a programmatic API that lets users connect to running applications and run Inspector Protocol commands on them. It doesn't abstract away the complexities of the Inspector Protocol, though, and I wanted something that could be run as a one-liner most of the time.
Since the tools that Node.js/V8 provide for performance analysis via the Inspector Protocol are well defined, it seemed like a good use case for a CLI. Its execution mode is somewhat inspired by Java Flight Recorder (but without the need to start the application with specific flags), dtrace, bpftrace, and similar tools. The resulting CLI depends only on chrome-remote-interface and commander, which means only two direct and no transitive dependencies. Small install size and lower vulnerability surface area ftw!
@mmarchini/observe can be installed via your package manager of choice or run with npx. I'll be using npx in the examples below, as that's my personal preference, but you should use whatever you have available and feel comfortable with.
A list of available commands can be seen by running:

```shell
npx @mmarchini/observe --help
```
Options for each command can be seen by running those commands with --help:

```shell
npx @mmarchini/observe heap-profile --help
npx @mmarchini/observe heap-snapshot --help
npx @mmarchini/observe cpu-profile --help
```
As of right now, there are three tools available: heap-snapshot, heap-profile, and cpu-profile. Heap Snapshots and Heap Profiles are memory analysis tools, with the former taking a "snapshot" of V8's memory, and the latter sampling stack traces every X allocations. Heap Snapshots are expensive but comprehensive, and good for determining where most of the memory is currently being used.
Heap Profiles (known as "Allocation Samples" in the CDP interface) come from a lightweight, sampling-based profiler that captures stack traces on allocations, and are a good tool for understanding how memory grows over time. CPU Profiles, as the name suggests, come from your typical CPU profiler, which samples stack traces at regular intervals.
In the past I wrote about memleak usage with Node.js applications, and I might write an updated guide on Linux perf in the future.
You should pass the -f <filename> option to save the output from each tool into a file. Those files can then be loaded in Chrome DevTools (go to chrome://inspect and click "Open dedicated DevTools for Node"), or with any other tool that can process these files. Since DevTools doesn't have flame graphs for CPU Profiles, I usually use Speedscope, since it has a nice interface and features, and does no server-side processing. I won't go into how to interpret the results from each tool in this blog post, but I plan on writing individual interpretation guides for each tool.
And that's @mmarchini/observe, a tool intended to be simple and portable, yet powerful. Contributions are welcome; I have some features in mind for future updates but don't know when I'll get to them. As mentioned above, I only showed how to run the tools here — I'll be writing follow-up blog posts on how to interpret the results from each tool.