The Node/JS ecosystem is large (over 1.7 million packages on npm) and evolving, and at Anvil, we work with Node.js quite a bit. We also like to create and contribute to open source Node/JS projects. As a result, we've seen some good, bad, and ugly stuff out there. In this post (and its supporting GitHub repo) I'm going to share with you some of the best practices we've learned along the way while building a very simple web server.
NVM (Node Version Manager)
Even if you're developing JS for the browser, a streamlined development process will probably involve using Node to do some tasks for you. Different projects may require different Node runtimes/versions to be built, and developers are probably working on several different projects on their local machines at a time that may require incompatible Node versions. What do you do if your system's Node version is incompatible with the requirements of the project you're working on? Enter: NVM. NVM allows you to have different versions of Node on your machine, and to easily switch between versions as necessary. Additionally, by setting up shell integration and adding a
.nvmrc file to your project, your shell will automatically change to the Node version required by your project when you
cd into it. This is a must for any Node/JS developer's setup and projects. Note that the
.nvmrc file only specifies the Node version required to develop the project, but not necessarily to use the resulting package (more on that later).
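For example, a project targeting Node 12 might pin its version like this (the version number here is illustrative):

```shell
# create the .nvmrc file — it contains just the version
echo "12" > .nvmrc

# with nvm's shell integration, cd'ing into the project switches
# versions automatically; otherwise run:
nvm install   # installs the version from .nvmrc if missing
nvm use       # activates it for the current shell
```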
Every Node/JS package starts with a
package.json file. I'm not going to cover all the ins and outs of that file (you can do that here), but I'd like to touch on a few important items that may not be intuitive at first, or that can have a big impact on your development:
- main: specifies the path to the module in your package whose exports will be used when your package is required.
- engines: allows you to specify the version(s) of Node that your package will work on.
- config: an object you can place arbitrary key/value data into and use elsewhere in your project. More on that later.
- scripts: an object where you can specify named commands to run via yarn my-command. Keep in mind that some names are special and correspond to "lifecycle" events. Read the docs to learn more.
package.json can also support some more arbitrary entries that other tools you may use are expecting—we'll touch on that a bit more later.
One final thing about
package.json: when adding a dependency, it's important to consciously decide whether it should be added to the
dependencies or the
devDependencies area (and use the appropriate installation command). Packages that are needed for development purposes only, and are not needed by the code that will be run when the package is installed and consumed, should go into
devDependencies (rather than
dependencies). This way they won't be unnecessarily installed on the user's system when they install your package. You may have noticed that this project has several
devDependencies, but zero
dependencies as it uses nothing but core Node modules at runtime. Nice!
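Pulling those pieces together, a minimal package.json for a project like this might look something like the following sketch (the name, versions, and config key are illustrative):

```json
{
  "name": "my-server",
  "version": "1.0.0",
  "main": "dist/index.js",
  "engines": {
    "node": ">=12"
  },
  "config": {
    "babelTarget": "12"
  },
  "scripts": {
    "build": "babel ./src --out-dir ./dist"
  },
  "devDependencies": {
    "@babel/cli": "^7.0.0",
    "@babel/core": "^7.0.0"
  },
  "dependencies": {}
}
```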
In keeping with the 12 Factor App methodology, it's best that your app gets any configuration information it may need from the environment (e.g.
staging). Things that vary depending on the environment as well as sensitive things like API keys and DB credentials are great candidates for being provided via the environment. In Node, environment variables can be accessed via
process.env.<ENV_VAR_NAME_HERE>. This application has a
config.js file that centralizes and simplifies the resolution of these environment variables into more developer-friendly names and then exports them for consumption by the rest of the app. In production environments, there are myriad ways to populate the environment variables, so I will not go into them. However, for local development the usage of a
.env file along with the
dotenv package is very common and easy for developers. This
.env file should NOT be checked into source control (more on this later), but a
.env-example file that contains fake values is a nice thing to provide to developers so they know where to get started. Because it does not contain any sensitive information, the
.env-example can be checked into source control.
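A config.js along these lines is one way to centralize that resolution (the variable names and defaults here are just illustrative):

```javascript
// config.js (sketch): resolve environment variables into
// developer-friendly names in one place.
// In local development, a .env file can be loaded first with:
//   require('dotenv').config()

const config = {
  // non-sensitive values can have sane defaults
  port: parseInt(process.env.PORT, 10) || 8000,
  logLevel: process.env.LOG_LEVEL || 'info',
  // sensitive values get no default; the app should fail loudly
  // if they're missing in environments that need them
  apiKey: process.env.API_KEY,
}

module.exports = config
```

The rest of the app can then do `const { port } = require('./config')` instead of reaching into process.env directly.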
All developers are different, and not all teams will use the same coding styles. In addition, sometimes code can have serious problems (such as syntax errors), minor problems (such as unused variables or unreachable paths) or nits (tabs instead of spaces—oh no, I didn't!) that you don't want getting committed. Keeping code clean and uniform—especially when working with a team—can be difficult, but fortunately tools like Prettier and ESLint can help with all of that. Generally speaking, Prettier is concerned with formatting issues, while ESLint is concerned with errors, inefficiencies, and waste. ESLint is not only quite configurable, but also quite extensible: you can turn rules on or off, write your own rules, include someone else's shared set of rules, and more. Our very simple ESLint configuration is specified in the
.eslintrc.js file. Most IDEs will integrate with these tools and provide feedback to the developers, allowing them to correct the problems immediately. They also can fix many problems they encounter automatically, which is great.
Sometimes you'll want to run some commands before a developer can commit to your repository. Having Prettier and ESLint adjust and fix all JS files that have been staged for commit is a great example. This way, developers don't even have to remember to run the commands to fix and adjust things—it will happen automatically when they try to commit, and git will complain if something goes wrong. A popular way to set this up is by using
lint-staged. Once installed, I modified the
lint-staged entry in
package.json to run Prettier, followed by ESLint (we've found that Prettier sometimes undoes some of the things that ESLint does that we want, so it's important that we run them in that order).
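That entry can be as simple as this sketch, which runs Prettier first and ESLint second on staged JS files (lint-staged itself is typically triggered from a git pre-commit hook, e.g. via a tool like husky):

```json
{
  "lint-staged": {
    "*.js": [
      "prettier --write",
      "eslint --fix"
    ]
  }
}
```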
As I mentioned in the beginning, Node/JS has been evolving rapidly. This quick pace of evolution means there are many Node (and browser) versions still in use that do not support the latest 🔥 hotness🔥 or even some features that have been around for a while. In order to take advantage of the latest language features while ensuring that your code will run on a reasonable range of versions, you'll need to transpile it using Babel. Basically, Babel can rewrite parts of your code so that older runtimes can use them.
How do you know which language features are not supported by the runtimes you want to support? Using the
@babel/preset-env plugin, you just need to tell Babel what "target" runtimes you want to support and it will figure out which parts of your code to rewrite and which to leave alone! 😌 In this example project, I've specified a supported Node version range of
>=12 in the
package.json, so I've put the Babel target of
12 in the
config area of
package.json to keep things near each other and hopefully in sync. I've added a
babel.config.js file that will tell Babel to use the
preset-env plugin, and will grab the "target" from the config area of the package.json file.
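A babel.config.js along these lines does the job (the "babelTarget" key name is an illustrative assumption):

```javascript
// babel.config.js (sketch): use preset-env, pulling the target
// Node version from the config area of package.json so the two
// stay near each other and hopefully in sync.
const { config } = require('./package.json')

module.exports = {
  presets: [
    ['@babel/preset-env', { targets: { node: config.babelTarget } }],
  ],
}
```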
Perhaps by now you've noticed that all the code for this package is in the
src/ directory. We'll keep all the source code there, and we'll use a directory called
dist/ for the output of Babel's transpilation. To tie that all together, I've added a few entries to the scripts area of package.json:
- clean: will delete the dist/ directory
- build: will have Babel transpile everything in the src/ directory to the dist/ directory
- clean:build: will run the clean and then the build scripts
- prepare: this is one of the special "lifecycle" event scripts that will be automatically run before your code is published, and it simply calls the clean:build script
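In package.json, those entries might look roughly like this (the rimraf dev dependency for cross-platform deletion is an assumption; plain rm -rf also works on Unix-like systems):

```json
{
  "scripts": {
    "clean": "rimraf ./dist",
    "build": "babel ./src --out-dir ./dist",
    "clean:build": "yarn clean && yarn build",
    "prepare": "yarn clean:build"
  }
}
```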
Now that we're able to code using proposed, non-finalized ECMA standards, ESLint will get confused by some of the syntax it sees, so I've added
@babel/eslint-parser to our
devDependencies and referenced it as the parser for ESLint to use in the .eslintrc.js file.
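The relevant part of the ESLint configuration is small — a sketch:

```javascript
// .eslintrc.js (sketch): have ESLint parse with Babel so it
// understands syntax that preset-env will transpile away.
module.exports = {
  parser: '@babel/eslint-parser',
  env: {
    node: true,
  },
  extends: ['eslint:recommended'],
}
```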
One last thing about Babel I'd like to discuss is
@babel/node. This package installs a
babel-node command that will transpile the scripts you want to execute on the fly! It's a great tool for executing one-off scripts that you'd like to write using language features that are not compatible with your development environment, but that you don't want transpiled into the
dist/ folder with the rest of your package's code. I've created an example script in
scripts/my-script.js that can be executed using
yarn my-script, but would error if you tried to run it directly in Node. While
babel-node is great for these one-off scenarios, running your code on production using
babel-node is not recommended.
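For instance, a one-off script like the following uses optional chaining and nullish coalescing, which Node 12 cannot parse natively — running it via babel-node transpiles those away on the fly (the script contents are illustrative):

```javascript
// scripts/my-script.js (sketch): relies on syntax newer than the
// Node 12 target, so run it with `yarn babel-node scripts/my-script.js`
// rather than plain `node`.
const user = { profile: { name: 'Ada' } }

// optional chaining (?.) and nullish coalescing (??) arrived in
// Node 14 — Node 12 throws a SyntaxError on this line
const userName = user.profile?.name ?? 'anonymous'
const nickname = user.nickname ?? 'none'

console.log(userName, nickname)
```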
While developing your code, you'll want to verify the changes that you're making and make sure they're working properly. Shutting down and restarting this project's web server each time you make changes would be very time-consuming, but fortunately there's Nodemon. Nodemon allows you to execute a command (like starting your app), but when it detects changes to files or directories you specify, it will restart that command. This way the effect of your changes can quickly and easily be verified. I've added a script entry called
develop that will (1) transpile the source code, (2) start the server, and (3) watch for changes to code that could impact the application. When any such changes occur, those steps will be repeated automatically. Sweet! Additionally, Nodemon is configurable so be sure to check out the documentation.
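The develop entry might be wired up roughly like this (the exact watch paths and entry point are assumptions):

```json
{
  "scripts": {
    "develop": "nodemon --watch ./src --exec \"yarn clean:build && node ./dist/index.js\""
  }
}
```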
Unless your project is doing something extremely trivial and straightforward, you'll probably want to develop a suite of tests to make sure that your code is working as expected, and that it stays that way. I'm not going to get into test frameworks, philosophies, or specifics (perhaps another blog post would be good for that!), but I do have one big tip:
- While you're writing tests, fixing tests, or fixing code that breaks tests, it's great to leverage Nodemon to re-run your tests (or just the specific tests you're working on) with every code change. I've added a script entry to package.json for this purpose.
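One possible shape for such an entry, assuming a test runner invoked via yarn test (the script name and watch paths are illustrative):

```json
{
  "scripts": {
    "test:watch": "nodemon --watch ./src --watch ./test --exec \"yarn test\""
  }
}
```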
Not all the code and files that will end up in your local project directory should be committed to source control. For example, the
node_modules directory should not be committed since that's something that will be built by
npm using the
package.json and lockfiles. Also, in our specific case, the
dist/ folder should not be committed, since it's a byproduct/derivative of transpiling the
src/ directory, where the actual code changes are taking place. Also, the
.env file is very likely to have sensitive stuff and we all know that you should never check-in sensitive information to source control, right? 😉 Patterns of things to be ignored by git can be specified in the
.gitignore file. In general, it's also good practice to review the files that will be added by your commits and give a quick thought as to whether they should be ignored or not.
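A .gitignore covering the cases above might start out like this:

```
# dependencies are rebuilt from package.json + lockfile
node_modules/

# derived output of transpiling src/
dist/

# local environment variables — often contains secrets
.env
```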
Similar to the .gitignore, if you're publishing your package to NPM you can leverage a
.npmignore file to control which files will be included in the tarball that users will download from NPM when using your package. If you don't add a
.npmignore file, the
.gitignore file will be used. This is bad for a few reasons:
- We've told git to ignore the dist/ directory, which actually has the code we want users to run!
- A bunch of files that are irrelevant to the usage of our package will be included: the test/ directory, various development configuration files, etc.

For these reasons, I've found it beneficial to create a .npmignore file that explicitly ignores everything, but then adds exceptions for the dist/ directory and a few other files that I actually want to end up on end-users' installations. While several necessary files (like package.json) are included no matter what you put in your .npmignore, you should still be careful with how you use it.
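One possible shape for such a file — ignore everything, then carve out exceptions — is sketched below; since ignore-negation rules have subtle edge cases, it's worth verifying the resulting package contents with npm pack --dry-run:

```
# ignore everything by default…
*

# …except what users actually need
!dist/**
!README.md
```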
This project now has some great attributes:
- developers should not have Node compatibility issues
- a clean package.json, with as few dependencies as possible and some helpful script entries
- a pattern where configuration is loaded from the Environment at runtime in a straightforward manner
- code that will remain consistently formatted and free of lint
- development can be done using advanced language features, but boiled down to support older runtimes
- the ability to rapidly view or test changes to code while developing
- a clean git repository that does not contain unnecessary or sensitive files
- a clean, minimal package when uploading to NPM
There are certainly more things that could be done (or done differently), but this will hopefully be great food for thought as a starting point for those looking to create (or refactor) their Node/JS projects. Happy coding!