DEV Community

Caio Borghi

Posted on • Updated on
Node vs Go: API Showdown

Disclaimer and Introduction

This blog post is primarily for fun and educational exploration. The results here should not be the sole basis of your technical decisions, and they do not mean that one language is better than the other; please don't take it too seriously.

In fact, it does not make much sense to compare such different languages.

Cool. With that said, let's have some fun, compare some metrics, and get a better understanding of how both languages handle some key resources (RAM, CPU, open file descriptor count, and OS thread count) when under severe pressure.

How were the metrics gathered?

If you want to know how this benchmark was profiled, expand the section below; otherwise, you can skip directly to the results 🤓.

Behind the scenes

Tech Stack

It was created using the following technologies:

  • 1x EC2 t2.micro (the API's command center)
  • 1x EC2 t2.xlarge (the request-launching gun)
  • A Postgres RDS for data persistence
  • Vegeta for unleashing HTTP load
  • Golang 1.21.4 and Node.js 21.4.0 for the API showdown
  • OpenTofu for automagically spinning up our servers

Important: The API server has only a 1-core processor with 1GB of RAM. This post shows the efficiency of both approaches in a very limited environment. You can check the full code here.

Flow Diagram

How does the communication happen?
Flow Diagram

Both servers are located within the same VPC in AWS, ensuring minimal latency. However, the RDS, although situated in the same AWS Region (sa-east-1), operates in another VPC, introducing a more realistic latency.

This is good because, in a real-world scenario, there will be latency.

Manual RDS Setup

Unfortunately, I wasn't able to set up the Postgres RDS with OpenTofu (cough cough: skill issue), so I had to create it manually on AWS and execute the following script:

CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) NOT NULL,
    password VARCHAR(255) NOT NULL
);
TRUNCATE TABLE users;

Ok, with everything in place, it's showtime!

Environment Initialization

Start the environment with:

tofu apply -auto-approve

Full main.tf file here

What does it do?

  • Launches 1 VPC + 2 Subnets
  • Boots up 2 Ubuntu Servers, executing specific scripts
  • Installs Node + Golang on the API server
  • Sets up Vegeta on the Gun server
  • Deploys the API and load-tester code
  • Generates 2 SSH scripts for connectivity (ssh_connect_api.sh & ssh_connect_gun.sh)

Monitoring Process

With this setup, I can access the API server and initiate either the Node or Go API.

Concurrently, I start monitor_process.sh to snag metrics like RAM, CPU, thread count, and file descriptor count, and save them to a .csv file.

All is done based on the process ID of the running API.

Monitoring flow

Check the script here!

Script Parameters

  • Process ID
  • The number of requests per second (to name the CSV file correctly)

Once the API is running, I get the process ID using console.log(process.pid) on Node or fmt.Printf("ID: %d", os.Getpid()) on Golang.

Then, I can simply run:

./monitor_process.sh 2587 2000

This command monitors our process, updating a .csv file named process_stats_2000.csv every second with fresh data.

Ok, now let's analyze the results, compare both APIs, and see what learnings we can squeeze from them. Let's get started!


2,000 Requests per Second

Alright, for this first step, I ran a Vegeta script that fires 2,000 requests per second over 30s at the API server.

This was done inside the Gun Server by running

./metrics.sh 2000

Which produces the following output:

Starting Vegeta attack for 30s at 2000 requests per second...

Then, I combined the results into some beautiful charts; let's take a look at them:

Latency x Seconds

By looking at the latency chart, we can see that Golang struggled a lot initially, taking ~5s to stabilize.

2,000 reqs/s latency over seconds

This may have been a one-time anomaly, but as I won't redo the test and the metrics are all correct, I'll call this one a lucky shot for Node.

Node kept a consistent latency throughout most of the test, with spikes at 12s and 20s.

Golang, on the other hand, had some trouble stabilizing its latency at the beginning, costing it the pole position. However, it went well after that, keeping the latency around 230ms.

File Descriptors Count

This one is interesting.
FD Count 2,000 reqs/s

In Linux, a new socket and a corresponding File Descriptor (FD) are created for each incoming server connection. These FDs store connection information.

On Ubuntu, the default soft limit for open file descriptors is 1024.

However, both Go and Node effectively bypass the soft limit and operate up to the hard limit (in Go's case, the runtime raises the soft limit to the hard limit at startup since Go 1.19). This can be verified by reading /proc/$PID/limits after the node/go process has started.

You can use the command ulimit -n to see the OS soft limit of open file descriptors of the current shell session.

Node limits

Ok then, this means that the OS soft limit does not constrain the number of open FDs; the runtime manages it.

In this test, Node kept a lower but irregular number of open FDs, while Golang spiked to 8,000, stabilized, and remained consistent until the end.

Threads Count

Wasn't Node.js single-threaded? 🤯
Threads Count 2,000 reqs/s

Well, no.

By default, Node starts a few threads:

  • 1 Main Thread: Executes JavaScript code and handles the event loop.
  • 4 Worker Threads (default libuv thread pool)
    • Handles blocking async I/O such as DNS lookup queries, crypto module, and some file I/O operations.
  • V8 Threads:
    • 1 Compiler Thread: Compiles JavaScript into native machine code.
    • 1 Profiler Thread: Collects performance profiles for optimizations.
    • 2 or more Garbage Collector Threads: Manages memory allocation and garbage collection.
  • Additional Internal Threads: Number varies, for various Node.js and V8 background tasks.

I noticed that at startup the Node process created 11 OS threads, and once requests started arriving, the count jumped to 15 OS threads and stayed there.

Go, on the other hand, kept 4 stable OS threads.

RAM

TL;DR
  • Node.js:
    • Consistently low RAM usage, between 75MB and 120MB.
    • Utilizes an Event-Loop for I/O operations, avoiding new threads.
    • More about Node's Event Loop: Exploring Asynchronous I/O in Node.js.
  • Go:
    • Higher initial RAM usage, stabilizing at 300MB.
    • Spawns a new goroutine for each network request.
    • Goroutines are lighter than OS threads but still impact memory under load.
    • Insight into Go's Runtime scheduler: Go Runtime Scheduler Talk.

RAM Usage 2,000 reqs/s

Explanation
Node kept a lower and more stable RAM usage, between 75MB and 120MB, throughout the test.

Meanwhile, Go's RAM usage increased in the first seconds until it stabilized at 300MB (almost tripling Node's peak).

This difference can be explained due to how both languages deal with asynchronous operations, like I/O database communication.

Node uses an Event-Loop approach, which means it doesn't create new threads per request. In contrast, Go spawns a new goroutine for each request, which increases memory usage. A goroutine is a lightweight thread managed by the Go runtime.

Even though lighter than an OS Thread, it still leaves a memory footprint when under heavy load.

For insights on the Node Event Loop, check this blog post I wrote.

To better understand the Go Runtime scheduler, please watch this phenomenal talk - one of the best I've ever watched.


CPU

Node used less CPU than Go on this one; this may be because the Go runtime is more complex and requires more steps/calculations than libuv's Event Loop.

CPU Usage 2,000 reqs/s

Overall

I must be honest: I was surprised by this result.

Node won this one 🏆.

It showcased:

  • Superior p99 latency, responding in under 1.2s for 99% of requests, compared to Go's 4.31s
  • Faster average latency, clocking in at 147ms versus Go's 459ms, 3.1x faster!
  • Significantly smaller maximum latency, peaking at just 1.5s against Go's 6.4s, which was 4.2x slower. (C'mon, Gopher, you're looking bad!)

Go vs Node 2,000 requests/s

3,000 Requests per Second

Now let's redo the test, send 3,000 requests/s over 30s for each API, and see the results.

Latency x Seconds

While Go was able to keep a really stable latency with only two small spikes, Node was in some deep trouble and showcased a very inconsistent latency throughout the test.
3,000 reqs/s latency over second

File Descriptors Count

Remember I told you that neither Node nor Go respects the soft limit of open file descriptors, and that both languages manage it themselves?

Here's a fun fact:
FDs count 3,000 requests Node vs Go

Golang was able to process, handle, and deliver more requests in a shorter time, using fewer resources, by capping the number of open FDs during each phase of the test (based on some metric that I'm not sure of).

This is super cool!

Look at how Go managed its FDs:

  • 8 FDs: In the first 0-3 seconds
  • 1,590 FDs: Between 4-17 seconds
  • 2,225 FDs: Between 18-31 seconds

Node, on the other hand, didn't interfere with the open file descriptors like Go did. You can see that on the chart.

Empirically, it seems that Go pre-allocates (or pre-opens) file descriptors at some rate and reuses them, instead of creating one for each connection as it arrives.

I'm not sure exactly how they do that, though, feel free to comment if you have some hint 😄


Threads Count

Ok, something worth noticing happened on this test.
Threads Count 3,000 requests Node vs Go

Node: jumped from 11 to 15 OS threads as the requests started arriving. I believe this is due to DNS lookup operations, as briefly mentioned in this issue.

Go: stepped up its game from 4 to 5 OS threads. It's the runtime scheduler orchestrating the show: Go is smart enough to pack multiple goroutines into each OS thread, and when things get congested, it smoothly starts a new one.

This approach is not just efficient; it's a masterclass in resource optimization, squeezing every last bit of performance from the hardware. It is Amazing! 🚀

RAM

By this point, you've probably noticed that the Node.js line lasts longer than the Go line. That's because the Node API took more time to answer all the requests it received.

This also impacted RAM usage. Remember how, in the first test, Node's RAM usage was way below Go's?

That's not the case when you have tons of connections hanging on the server waiting to be processed.
RAM usage 3,000 requests Node vs Go

CPU

This time, Node required much more CPU than Go, which was able to keep its usage below 35% while Node peaked at 64%.
CPU Usage 3,000 requests Node vs Go

Overall

Overall 3,000 req/s Node vs Go

🎉 We have a fight! 🎉

The dispute is open, and Golang was way superior in this one. Let's look at the numbers:

Golang had:

  • Lower Latencies
    • p99: 736.873ms against 30.001s, roughly 40 times lower than Node's.
    • Average: 60.454ms versus 7.079s - 118 times faster
    • Maximum: Go peaked at 1.33s, while Node reached the sky at 30.004s.
  • Perfect Success Rate (100%)
    • Against 91.93% from Node, which had some requests failing.

That was a massacre; it was like comparing a new sports car with a Fusquinha (an old VW Beetle).

Detailed comparison

Node.js Performance Metrics:

  • Total Requests: 86,922 with a rate of 2,897.33 per second.
  • Throughput: 1,449.29 requests per second.
  • Duration:
    • Total: 55.135 seconds.
    • Attack Phase: 30.001 seconds.
    • Wait Time: 25.134 seconds.
  • Latencies:
    • Minimum: 3.458 ms.
    • Mean: 7.079 seconds.
    • Median (50th Percentile): 6.068 seconds.
    • 90th Percentile: 9.563 seconds.
    • 95th Percentile: 26.814 seconds.
    • 99th Percentile: 30.001 seconds.
    • Maximum: 30.004 seconds.
  • Data Transfer:
    • Bytes In: 2,077,556 (average 23.90 bytes/request).
    • Bytes Out: 7,351,352 (average 84.57 bytes/request).
  • Success Ratio: 91.93%.
  • Status Codes: 7016 failures, 79,906 successes (201 code).

Golang Performance Metrics:

  • Total Requests: 90,001 with a rate of 3,000.09 per second.
  • Throughput: 2,999.89 requests per second.
  • Duration:
    • Total: 30.001 seconds.
    • Attack Phase: 29.999 seconds.
    • Wait Time: 2.035 ms.
  • Latencies:
    • Minimum: 1.371 ms.
    • Mean: 60.454 ms.
    • Median (50th Percentile): 4.773 ms.
    • 90th Percentile: 194.115 ms.
    • 95th Percentile: 453.031 ms.
    • 99th Percentile: 736.873 ms.
    • Maximum: 1.33 seconds.
  • Data Transfer:
    • Bytes In: 2,430,027 (average 27.00 bytes/request).
    • Bytes Out: 8,280,092 (average 92.00 bytes/request).
  • Success Ratio: 100%.
  • Status Codes: All 90,001 requests were successful (201 code).

  • Throughput: Go had a higher throughput compared to Node.

  • Latencies: Node exhibited significantly higher latencies, especially in the mean, 95th, and 99th percentiles.

  • Success Rate: Go achieved a 100% success rate, whereas Node had a lower success rate with some failed requests.

5,000 Requests per second

Final round, let's see how both languages deal with severe pressure.

Latency x Seconds

Go was able to keep a very low, stable latency until ~20 seconds in, when it started to struggle, with peaks of 5s, which is very slow.

Node presented problems throughout the entire test, responding with latencies between 5-10s.

It's nice to notice that, even in a very stressful test, Go remained mostly stable over the entire run.
5,000 requests/s latency

File Descriptors Count

Once again we can see how stable Golang's open file descriptor count is, versus how unmanaged and linearly growing Node.js's is.

I believe that this is directly related to the Go Network Poller that reuses (and maybe pre-creates) File Descriptors instead of creating one at the time each request arrives.

I wonder if Node could benefit from such an approach, will definitely check this out 😅
5,000 requests/s Go vs Node FD Count

Threads Count

In this chart we can see that Node started with 11 OS Threads and jumped to 15 whenever connections started arriving, while Golang kept 4 OS threads for the majority of the test, increasing to 5 at the end.

Go's strategy seems to be more stable under heavy loads.
5,000 requests/ Go vs Node Threads Count

RAM

Node.js showed a linear increase in RAM usage, whereas Go's increase was step-like, similar to climbing a ladder.

This pattern in Go is due to its runtime actively managing resources and setting limits for goroutines, OS threads, and open file descriptors.
5,000 requests/s Go vs Node RAM Usage

CPU

The CPU usage pattern is very similar for both languages, suggesting that this may be outside of the language control, being delegated to the OS.
5,000 requests/s Go vs Node CPU Usage

Overall

Go excels again with a higher Success Rate and lower p99, average, min, and max latency.

Given that Go is a compiled language, and Node.js (JavaScript) is interpreted, this outcome is expected.
Compiled languages typically have fewer steps before executing machine code.

Despite its inherent challenges, Node.js managed to successfully process 89.38% of the requests.
Go vs Node 5,000 requests/s overall

Final Considerations

Thank you for taking the time to read this blog post 🙏

It's no surprise that Go, a compiled language focused on concurrency and parallelism by design, came out on top. Still, it was interesting to see how it all played out.

It was cool to see how Go and Node.js handle tasks differently and how that impacts the computer's resources.

I've summed up the key points below.

Open File Descriptor Management

  • Go: Demonstrates a strategy of pre-allocation and reuse for File Descriptors, thanks to its intelligent network poller and resource management. This approach contributes to efficient handling and scalability under heavy network loads.
  • Node.js: Shows a dynamic, maybe unmanaged pattern in File Descriptor usage, reflecting its approach to handling server connections and opening FDs one by one.

Thread Management and Node.js

  • Go: Maintains a stable, low OS thread count, highlighting the efficiency of its runtime scheduler in optimizing thread usage, especially under heavy stress 🤯.
  • Node.js: Contrary to popular belief, Node.js uses multiple threads for tasks like DNS lookups, garbage collection (hi, V8), and blocking async I/O ops; it's not just a single thread.

Top comments (34)

Adesoji1

Node js 😃😃😃😋🥳

Chris Frewin

I don't know, looks more like 🤡🤡🤡🤡 to me.

Don't get me wrong, I love Node for a nice quick script or something, but if you're going to build an API that is gonna have any traffic at all, might as well go with the strongly typed & compiled backends (.NET, go, etc.)

I disagree with comments in here claiming that just because JavaScript is popular it would by definition speed up development. You have to consider bug fixes due to the lack of a type system, or, if you try to avoid those issues with TypeScript, the extra tooling overhead it brings. Other languages are just as easy, if not easier, to get a beta API up and running. I mean, just look at how far .NET has come in making it truly as simple as possible: learn.microsoft.com/en-us/training...

Or a framework like Gin with Golang: gist.github.com/ezaurum/5b4803114d...

These examples are easily equivalent in complexity to any express or native node 'hello world' set of endpoints

Gabriel Hayes • Edited

We use JSDoc at my shop, and only use Typescript for d.ts files (which usually we have generated out by some tool relevant to the task at hand)

Type issues are not the problem. If you're facing type issues solely because a compiler won't stop you, that's indicative of your own ability, not the language.

Biggest issues you run into with JS are null/undefined errors (which you'll face basically in any language if you fail to null check, happens to the best of us).

I have never once been writing code and been like "wow, I have zero clue what this type is. I am at a total loss because a compiler won't tell me"

Usually it's like... *writes code expecting a Number*

*gets a Promise* "Ohhh noooooo"

Just kills me when people falsely exaggerate the usefulness of a type system.

It's more performant. It definitely does not save you time.

I JSDoc function signatures where it's useful, but it's super nice that I don't have to type every arbitrary function, especially when you get to super complex generic types (Promise>>>, yayyyyyy)

Adesoji1

i totally understand

Josep Mir

What about development times, in the long run, at enterprise level software that realistically have to deal with such big loads? Any insights 😄

Caio Borghi

Thank you for this comment!

To me, it depends on the focus of the project. If it's an enterprise focused on performance where milliseconds matter, I would choose Go.

If not, Node will be faster to implement, as JavaScript is way more popular than Golang.

It'll also be easier to hire and scale the team because there are way more JS developers than Go developers.

But that's a tough question, and I may be biased: I've been a JS developer for the past 5 years. I love Go and I really hope it thrives in the long run, but at the moment I would still choose JS over any other language because of the hiring/scaling issue.

Hope this changes though, Go is an awesome language 😁

M
Caio Borghi

Go is awesome 😁

MD Rashid Hussain

You should take measurements multiple times and aggregate the data points from all the runs.
Also, warmup time is crucial. You should not hit the server all of a sudden right after it starts; it should be warmed up first so the threads get going.
Whatever the case, interesting results 😅

wuya666

Both Node and Go are good, that's why I choose Rust :p

Obed Osei Frimpong

🤣🤣🤣🤣🤣

Shivam Patel

nice post!

Coffee

very nice insights!

Caio Borghi

Thank you!

José Pablo Ramírez Vargas

Compare Go with .Net. I bet for the latter, being in the top 10 for last year's benchmarks.

Caio Borghi

I've been a .NET developer for ~4 years and love the framework. Maybe in the future; it's quite laborious to write such a post 😅

Péter Kovács

Nice, I suggest you to test .Net core too. 😉

Your Reward Card

wow

Adaptive Shield Matrix • Edited

Only in small toy examples like this can Node beat Go.

If you write more complex projects, Go will nearly always beat Node.js, because Go has much better primitives, native data types, more efficient error handling, etc. Node and JS/TS have to use inefficient number types and copy/map inefficient JSON/object data types; these things are fine and provide a good developer experience, but they are horrible if you want to optimize for performance.

Consistent performance is key. If your server is only performant some of the time, that's just asking for trouble/headaches/suffering: how do you debug perf issues? How do you know why your CPU usage spiked? Why did your server start choking on responses left and right? Did the garbage collector kick in? Etc.

Caio Borghi

Go read the Disclaimer again.

Adaptive Shield Matrix

The biggest reason to use a JS runtime like Node.js/Bun/Deno is not performance.
It's that you can develop in a language you know, JS or TS.
It's impossible to do complex websites/web apps with server-side-only languages like Go, Rust, or Scala.

Abraham

Unless you call in WebAssembly, HTMX and really any server side generated templates

Dennis

Impossible?

Adaptive Shield Matrix

Not physically impossible, but painful/cumbersome enough that no sane/experienced developer will do it. Why? Because you lose any client state on a live reload that doesn't have HMR (hot module reload).
Having to click 10 times to recreate client state is not a fun endeavor; do that 100 times a day while editing the UI and you'll quickly start to understand why. So, practically speaking, it's the same thing: impossible as of now (January 2024).

Daniel Lo Nigro

Node.js (JavaScript) is interpreted

Node (and the V8 engine in general) is actually a hybrid model. Hot code (code that runs a lot) gets JIT compiled to machine code.

It waits until the code runs a few times before compiling it. This is because for code that only runs once (like the startup code for the app), it's quicker to run it in interpreted mode than to compile it.

Pietro • Edited

This may have been a one-time anomaly, but as I won't redo the test and the metrics are all correct, I'll call this one a lucky shot for Node.

Appreciate the effort, sir, but you should definitely re-do the test multiple times, on multiple machines with different configurations, architectures, and operating systems, to provide any form of credible data on which to base a sensible decision.

Caio Borghi

Hey Pietro, the code is Open Source, feel free to replicate this study and run the test how many times you find reasonable.

Alireza Balouch

In real life you are connected to a database. And none of those speeds matter when db is usually the bottleneck. But nice work.

Caio Borghi • Edited

Please check the Flow Diagram and Tech Stack again, the APIs are connected to a database.

wuya666

well, to be fair, one simple insert is far from real life applications. I think this test is nice to test the "extreme" performance of these two programs (but then I doubt anyone would expect a JS runtime to run faster than compiled native binaries, barring maybe some bugs and/or optimization issues), however for "non-extreme" real world applications with more complex database logic and transactions, I do agree with Alireza that most of the time database is the bottleneck.

For example for the mid-sized company I'm currently working with, operating a mobile app with about a couple millions active users, running multiple backend services in different languages and frameworks including Elixir, PHP, Node, Java, Go and Rust, it's always the database that's under the highest stress during most promotion events (but then they never use a single-core server for any of the backend services... the servers are also quite cheap compared to database and data bandwidth costs anyway)