Brian Neville-O'Neill

Posted on • Originally published at blog.logrocket.com on

Frustrations with Node.js

Written by Kasra Khosravi✏️

Introduction

Just to clarify, I don’t hate Node.js. I actually like Node.js and enjoy being a full-stack JavaScript developer. However, that does not mean I never get frustrated by it. Before I get into some frustrations with Node.js, let me say some of the things Node.js is awesome at:

However, there are some quirks about Node.js you should know:

  • Type checking — Node.js inherits dynamic type checking from JavaScript. But writing Node.js code in a real-life application sometimes makes you wish for stricter type checking to catch bugs sooner. You might have used one of the static type checking tools like Flow or TypeScript, but Flow frustrates a lot of developers with performance, compatibility, and IntelliSense issues, and TypeScript, despite its appeal in the community, tends to be heavy and can cause issues in places you never imagined
  • Debugging — I am not an expert on this, but I have always had issues properly debugging my Node.js applications. I am not saying debugging is not supported or possible, but code inspections and breakpoints tend to be ignored from time to time, and you can get frustrated by the lack of support for this important task compared to other frameworks. I usually end up placing console.log and debugger statements all over my code, which is not ideal

The above pain points are not limited to Node.js by any means. However, in my experience with Node.js so far, two prominent frustrations stand out that I think need to be explained in more detail. Please comment if you have felt similar or additional frustrations with Node.js, and how you manage to cope with them.


Error handling

Frustration

Throwing errors in Node.js is not as straightforward as in other languages (and frameworks). A lot of Node.js code is asynchronous, which requires you to pass the error to your callbacks and promises instead of throwing exceptions or simply using try/catch blocks. Debugging the true nature of an error becomes much more difficult when you have to go a few callbacks deep, or when you cannot figure out how an unhandled exception made your app fail silently. That is when you wish for a smoother error handling process.

Background

Before diving into error handling, we need to define some basics.

Node.js is built on top of JavaScript, which is a single-threaded language. Function calls are placed on a call stack. If any of your function calls takes time to resolve, the whole thread is blocked while waiting for the result, which is not ideal in scenarios like a web application in the browser: the user still wants to interact with the app while we are waiting for some data to come back.
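To make the blocking problem concrete, here is a minimal sketch: a long synchronous loop keeps the single thread busy, so even a zero-delay timer callback has to wait until the loop finishes.

// The timer is registered first, but its callback can only run
// once the call stack is empty, i.e. after the blocking loop ends
setTimeout(() => console.log("timer callback finally runs"), 0);

let total = 0;
for (let i = 0; i < 1e9; i++) {
  total += i; // CPU-bound work keeps the single thread busy
}
console.log("blocking loop done");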

Here is where we get to the concept of asynchronous JavaScript, which helps us handle blocking code. Put simply, it is a mechanism for registering a callback to be performed when your function call resolves. There are a few options to handle this:

  • Using a function callback — the idea is simple. You pass a function, called a callback, to your async function call. When the result of the async call comes back, the callback is triggered. A good example of this is addEventListener, which takes a callback as its second parameter:
function clickHandler() {
  alert('Button is clicked');
}

const btn = document.querySelector('button');
btn.addEventListener('click', clickHandler);
  • Using a promise — when using a promise with an async function, you get back an object representing the state of the operation. We don’t know when the promise will settle with a result or an error, but we have a mechanism to handle either scenario. For example, calling node-fetch generates a promise object, which we can handle with its methods:
const fetch = require("node-fetch");

fetch("https://jsonplaceholder.typicode.com/todos/1")
  .then(res => res.json())
  .then(json => console.log(json))
  .catch(error => console.log("error", error));

// { userId: 1, id: 1, title: 'delectus aut autem', completed: false }

We have other options, like async iterators and generators, or the async/await feature introduced in ES2017, which is just syntactic sugar on top of promises (see the sketch below). But for simplicity, we will stick with the two options above. Let’s see how error handling is maintained for both callbacks and promises.
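For reference only, here is a minimal sketch of the earlier node-fetch request rewritten with async/await; it behaves the same as the promise chain above:

const fetch = require("node-fetch");

// The same request as the promise-chain example, using async/await
const getTodo = async () => {
  try {
    const res = await fetch("https://jsonplaceholder.typicode.com/todos/1");
    const json = await res.json();
    console.log(json);
  } catch (error) {
    console.log("error", error);
  }
};

getTodo();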

Asynchronous error handling

Function callback — error handling with this approach is done using the error-first callback pattern. When the async function comes back with a result, the callback is called with an Error object as its first argument. If there is no error, this argument is set to null. Let’s look at an example:

// setTimeout is faking an async call which returns an error after 0.5 seconds
const asyncFunction = (callback) => {
  setTimeout(() => {
    callback(new Error('I got an error'))
  }, 500)
}

// callback for our async function
const callbackFunction = (err, data) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(data);
}

asyncFunction(callbackFunction);

When we call asyncFunction above, it encounters setTimeout first and cannot handle it synchronously. So it hands the timer off to the runtime (the browser’s web APIs or Node’s timers module) and the program continues. When the result comes back (which in this case is an Error object), the callback is called. Here come the frustrating parts:

  • We cannot use a try/catch in the context of asynchronous function calls to catch errors, so we cannot simply throw an error in our error-first callback approach:

const callbackFunction = (err, data) => {
  if (err) {
    throw err;
  }
  console.log(data);
}

try {
  asyncFunction(callbackFunction);
} catch(err) {
  // we are not catching the error here
  // and the Node.js process will crash
  console.error(err);
}
  • Forgetting to return in our callback function lets the program continue and causes more errors. The main point is that there are many quirks to remember and handle here, and they can push the code into a state that is hard to reason about and debug (see the sketch after this snippet):
if (err) {
  console.error(err);
  return;
}
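Here is a minimal sketch of what happens when that return is omitted, reusing the asyncFunction defined above: both branches run, and the success path logs undefined.

// The same callback, but without the early return after logging the error
const forgetfulCallback = (err, data) => {
  if (err) {
    console.error(err); // logs the error, but execution continues
  }
  console.log(data); // still runs and prints "undefined"
};

asyncFunction(forgetfulCallback);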

Promises are amazing at chaining multiple async functions together and help you avoid the callback hell that the previous method can cause. For error handling, promises use the .catch method in the chain to handle exceptions. However, handling errors with them still comes with some concerns:

  • You might get swallowed errors if you forget to use a .catch method in your promise chain. This causes the error to be categorized as an unhandled rejection, so we need a mechanism in Node.js to deal with promise rejections that are not handled. That mechanism is the unhandledRejection event emitted by the process:
const fetch = require("node-fetch");
const url = "https://wrongAPI.github.com/users/github";

const unhandledRejections = new Map();
process.on("unhandledRejection", (reason, promise) => {
  unhandledRejections.set(promise, reason);
  console.log("unhandledRejections", unhandledRejections);
});

const asyncFunction = () => fetch(url);

asyncFunction()
  .then(res => res.json())
  .then(json => console.log(json))
  • Another issue is the traceability of large async function chains: what were the source, origin, and context of the thrown error? For example, say you have a long chain of async function calls handling an API fetch request, several higher-level components that depend on it, and several children underneath those components. An error thrown in any of them can make tracing the issue difficult

It is not obvious how this should be handled in Node.js, but one common pattern is to add an immediate .catch method to the async task in the higher-level component and re-throw the error inside it. This helps massively in tracing an error that happens in any of its children, since we chain another .catch to the instances that call the higher-level async task. Let’s see this with an example:

const fetch = require("node-fetch");
const url = "https://wrongAPI.github.com/users/github";

// higher level async task
// higher-level async task
const asyncFunction = () => {
  return fetch(url).catch(error => {
    // re-throwing the error
    throw new Error(error);
  });
};

// an error thrown in this instance 1 is much better traceable
// returns: instance 1 error: invalid json response body at https://wrongapi.github.com/users/github reason: Unexpected token < in JSON at position 0
(async () => {
  try {
    return await asyncFunction();
  } catch (error) {
    console.error("instance 1 error:", error.message);
  }
})();

Package manager

Frustration

There are several tools for package management in Node.js like npm, yarn, and pnpm, which help you install tools, packages, and dependencies for your application to make the process of software development faster and easier.

However, as is often the case in the JavaScript community, good and universal standards emerge less often than in other languages and frameworks. Just Googling “JavaScript standards” shows the lack of a standard, as people tend not to agree on how to approach JavaScript, except in a few cases like the Mozilla JS reference, which is very solid. Therefore, it is easy to feel confused about which package manager to pick for your Node.js project.

Additionally, there are complaints about the low quality of packages in the Node.js community, which makes it harder for developers to decide whether they need to re-invent the wheel and build the tooling themselves, or whether they can trust the maintained packages.

Finally, with JavaScript’s rapid changes, it is no surprise that many of the packages our applications depend on change rapidly as well. This demands smooth package version management in Node.js, which can sometimes be troublesome.

This by no means indicates that Node.js is any worse than other frameworks when it comes to packages and package management; it is merely a reflection of some frustrations that come with Node.js package managers. We will discuss a few of these frustrations, like the lack of standards, package quality, and version management, in more detail, but first we need some background on the most popular Node.js package managers.

Background

  • npm — this is the official package manager for Node.js. Through its registry, you can publish, search for, and install packages. In the context of a Node.js project, it also gives you a CLI and a package.json file to manage your project’s dependencies and handle version management for them
  • yarn — consider Yarn an improved version of the npm CLI with the same package installation model. In addition, it has some other advantages:
    • It is more reliable. Unlike npm, it uses dual registries by default (npmjs.com and https://bower.io/search/) to make sure the service is still available if either registry is down
    • It is faster. It can download packages in parallel and caches every installed package, so it can retrieve them much faster the next time they are needed, although npm has also made performance improvements with its own cache
  • pnpm — this is the newest player among the three. pnpm officially describes itself as a “fast, disk space efficient package manager” and seems to work more efficiently than the other two by using symlinks so that your dependencies are stored only once and reused

Dealing with package managers

  • Lack of standards — as we have seen above, there are multiple options when it comes to package managers, and it is common to feel a bit confused about which one to pick when starting a project. They behave the same in 99% of scenarios, but they also have little quirks in the remaining 1% that can cause issues down the road when maintaining the project. Having worked with all of the above options in production applications, I wish there were a bit more consistency in this area
  • Quality of packages — even though you can find a lot of useful packages in Node.js, there is an equivalent number of options that are outdated, poorly tested, or unmaintained. Since publishing packages to the npm registry is not that difficult, it is on us developers to make sure we choose the right packages for our projects. We can vet a package by checking its GitHub repo for its overall status and maintenance: a healthy balance between the number of issues and open pull requests, good communication from maintainers on reported issues, and the overall usage and popularity of the package as reflected in its stars and forks. To make this job even easier, you can type the name of a package into NPMS and get an overall overview of it
  • Version management — package managers use semver to handle the versioning of packages. With this approach, a package version looks like Major.Minor.Patch, for example, 1.0.0. Let’s see an actual package.json with a list of dependencies and their versions in action:
{
  "name": "app",
  "version": "1.0.0",
  "description": "Node.js example",
  "main": "src/index.js",
  "scripts": {
    "start": "nodemon src/index.js"
  },
  "dependencies": {
    "node-fetch": "~2.6.0"
  },
  "devDependencies": {
    "nodemon": "^1.18.4"
  },
}

This is already confusing as we get two different symbols in front of package versions. What do they mean?

~ or tilde defines a range of acceptable patch versions for a package. For example, ~2.6.0 means the app will accept all future patch updates of node-fetch, from 2.6.0 up to (but not including) 2.7.0

^ or caret defines a range of acceptable minor and patch versions for a package. For example, ^1.18.4 means the app will accept all future minor and patch updates of nodemon, from 1.18.4 up to (but not including) 2.0.0
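To see how these ranges resolve in practice, here is a minimal sketch using the semver package (the same library npm uses for range matching); it assumes you have installed semver locally:

const semver = require("semver");

// ~2.6.0 accepts patch updates only
console.log(semver.satisfies("2.6.5", "~2.6.0")); // true
console.log(semver.satisfies("2.7.0", "~2.6.0")); // false

// ^1.18.4 accepts minor and patch updates below the next major version
console.log(semver.satisfies("1.19.0", "^1.18.4")); // true
console.log(semver.satisfies("2.0.0", "^1.18.4")); // false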

This already seems like a lot of hassle for such a simple task. Additionally, we need to consider that a mistake in defining the correct range of dependency versions can break the app at some point. Lock files like package-lock.json and yarn.lock were introduced to help avoid such mistakes by making dependency installs consistent across machines. Still, I wish there were a more standard approach to making sure severe problems do not happen due to flawed version control and management in Node.js.

Conclusion

These are some of the frustrations I have experienced with Node.js. But here are some things to remember:

  • A large portion of Node.js frustrations comes from unfamiliarity with JavaScript as the underlying language. Make yourself more familiar with its basic and advanced topics, and life will be much easier as a Node.js developer
  • Make sure the use case for your Node.js application is valid. For example, a chat application is an awesome candidate for Node.js; an application with CPU-intensive computations, not so much. Familiarize yourself with the common use cases
  • Finally, know that any framework can come with certain pain points. Use this article and similar ones in the reference list to learn about common issues and the best ways to handle them

Resources

https://dev.to/entrptaher/nodejs-frustration-4ckl

http://devangst.com/the-problem-with-nodejs/

https://stackify.com/node-js-error-handling/

https://medium.com/@iroshan.du/exception-handling-in-java-f430027d60bf

https://dev.to/fullstackcafe/nodejs-error-handling-demystified-2nbo

https://blog.insiderattack.net/error-management-in-node-js-applications-e43198b71663

https://stackify.com/async-javascript-approaches/

https://www.ryadel.com/en/yarn-vs-npm-pnpm-2019/

https://medium.com/the-node-js-collection/why-the-hell-would-you-use-node-js-4b053b94ab8e

https://www.peterbe.com/plog/chainable-catches-in-a-promise

https://blog.insiderattack.net/you-really-hate-node-58b1ff72202d

https://hackernoon.com/inconsistency-as-a-feature-f5f1a28356d4

https://hackernoon.com/promises-and-error-handling-4a11af37cb0e

https://blog.geekforbrains.com/after-a-year-of-using-nodejs-in-production-78eecef1f65a




