As JavaScript developers (whether back- or front-end), we often rely on npm scripts to automate common tasks like starting a server or building a project, and even to run hooks before or after other scripts (`prebuild`, `postbuild`, etc.).
When those commands are simple, like `node index.js`, keeping them as a single line in our `package.json` isn't a problem at all. The real problem starts when we need an extensive command that sets environment variables and chains several other commands:
(Example extracted from the Material UI `package.json`.)
```json
{
  "scripts": {
    "proptypes": "cross-env BABEL_ENV=development babel-node --extensions \".tsx,.ts,.js\" ./scripts/generateProptypes.ts",
    "deduplicate": "node scripts/deduplicate.js",
    "benchmark:browser": "yarn workspace benchmark browser",
    "build:codesandbox": "lerna run --parallel --scope \"@material-ui/*\" build",
    "release:version": "lerna version --exact --no-changelog --no-push --no-git-tag-version",
    "release:build": "lerna run --parallel --scope \"@material-ui/*\" build",
    "release:changelog": "node scripts/releaseChangelog",
    "release:publish": "lerna publish from-package --dist-tag next --contents build",
    "release:publish:dry-run": "lerna publish from-package --dist-tag next --contents build --registry=\"http://localhost:4873/\"",
    "release:tag": "node scripts/releaseTag",
    "docs:api": "rimraf ./docs/pages/api-docs && yarn docs:api:build",
    "docs:api:build": "cross-env BABEL_ENV=development __NEXT_EXPORT_TRAILING_SLASH=true babel-node --extensions \".tsx,.ts,.js\" ./docs/scripts/buildApi.ts ./docs/pages/api-docs ./packages/material-ui-unstyled/src ./packages/material-ui/src ./packages/material-ui-lab/src --apiPagesManifestPath ./docs/src/pagesApi.js",
    "docs:build": "yarn workspace docs build",
    "docs:build-sw": "yarn workspace docs build-sw",
    "docs:build-color-preview": "babel-node scripts/buildColorTypes",
    "docs:deploy": "yarn workspace docs deploy",
    "docs:dev": "yarn workspace docs dev",
    "docs:export": "yarn workspace docs export",
    "docs:icons": "yarn workspace docs icons",
    "docs:size-why": "cross-env DOCS_STATS_ENABLED=true yarn docs:build",
    "docs:start": "yarn workspace docs start",
    //.....
  }
}
```
But what if I told you that you could extract those commands into separate files and have a `scripts` config like this:
```json
{
  "scripts": {
    "proptypes": "scripty",
    "deduplicate": "scripty",
    "benchmark:browser": "scripty",
    "build:codesandbox": "scripty",
    "release:version": "scripty",
    "release:build": "scripty",
    "release:changelog": "scripty",
    "release:publish": "scripty",
    "release:publish:dry-run": "scripty",
    "release:tag": "scripty",
    "docs:api": "scripty",
    "docs:api:build": "scripty",
    "docs:build": "scripty",
    "docs:build-sw": "scripty",
    "docs:build-color-preview": "scripty",
    "docs:deploy": "scripty",
    "docs:dev": "scripty",
    "docs:export": "scripty",
    "docs:icons": "scripty",
    "docs:size-why": "scripty",
    "docs:start": "scripty"
  }
  //.....
}
```
## Scripty
Scripty is an npm package that lets us run npm scripts from standalone executable files.
The whole idea is to treat those giant script lines as code and keep our `package.json` clean and simple.
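Installation is the usual routine; scripty goes in as a dev dependency (shown here with yarn, since the examples in this post use it):

```bash
# Add scripty as a development dependency
yarn add --dev scripty
```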
Let's say we have this:
```json
{
  "scripts": {
    "lint": "eslint . --cache --report-unused-disable-directives --ext .js,.ts,.tsx --max-warnings 0"
  }
}
```
Using scripty, it'll look like this:
```json
{
  "scripts": {
    "lint": "scripty"
  }
}
```
## The magic behind
Of course, the command we just removed needs to live somewhere. To keep things that simple, scripty pairs `<npm-script-name>` with `<executable-file-name>`.
In other words, if we have an npm script called `lint`, we need an executable file called `lint`, `lint.sh`, or `lint.js`.
The default location is a folder called `scripts` at the root level. So, to complete the previous migration, we would create a file called `lint.sh` under the `scripts` folder, like this:
```bash
#!/usr/bin/env bash
yarn eslint . --cache --report-unused-disable-directives --ext .js,.ts,.tsx --max-warnings 0
```
## Executable Bash or JS
Scripty can only handle executable Bash or JavaScript files.
To qualify as one of those, a file needs to:

- have a shebang at the top of the file (e.g. `#!/bin/bash` or `#!/bin/node`);
- have permission to execute (in the `ls -la` output, it needs the `x` flag).

Quick tip: if you're in a UNIX environment, you can quickly grant this permission by running `chmod u+x <file-path>`.
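For instance, to make the `lint.sh` file from earlier executable (the path and the `ls -la` output below are just illustrative):

```bash
# Give the file's owner permission to execute it
chmod u+x scripts/lint.sh

# Verify: the owner's permission triplet should now include "x"
ls -la scripts/lint.sh
# -rwxr--r-- 1 user group 118 Jan 1 12:00 scripts/lint.sh
```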
Also, file extensions are not necessary. You can write `test.sh`, `test.js`, or just `test`. What defines the syntax highlighting and how the file is executed is the shebang instruction mentioned above.
```js
#!/bin/node
const fs = require('fs');
fs.copyFileSync('static/base.css', 'dist/base.css');
// ...
```
```bash
#!/usr/bin/env bash
NODE_ENV=production
yarn nest build
```
For a JS executable, keep in mind that it'll be executed by `node`, so you can't use syntax your Node version doesn't support (e.g. ES module `import` statements).
## Batching
Another requirement we often have is running a bunch of related scripts. Let's say we have several `test` scripts and we want to run all of them, like `test:*`:
```json
{
  "scripts": {
    "test:unit": "jest",
    "test:e2e": "cypress run --ci",
    "test": "npm-run-all test:*"
  }
}
```
With scripty, we can create a subfolder called `test` and declare those two types of tests there:
```
.
├── package.json
├── scripts
│   └── test
│       ├── e2e
│       └── unit
└── yarn.lock
```
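As a minimal sketch, those two files just wrap the commands from the `package.json` above (remember the shebang and the execute permission):

```bash
#!/usr/bin/env bash
# scripts/test/unit
jest
```

```bash
#!/usr/bin/env bash
# scripts/test/e2e
cypress run --ci
```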
With those files in place, you can change your `package.json` to:
```json
{
  "scripts": {
    "test:unit": "scripty",
    "test:e2e": "scripty",
    "test": "scripty"
  }
}
```
Note that the `test` script alone would be sufficient for this case. We only keep `test:unit` and `test:e2e` in case we want to run one of those commands in isolation.
When you run `test`, scripty understands that you have a folder called `test` with a bunch of scripts, and it runs all of them.
Keep in mind that they run concurrently, so you should not rely on the execution order.
## Controlling the batching sequence
If you need them executed in a certain order, with the same `package.json` as before, all you need to do is create a script called `index` in the `scripts/test` folder, which will be responsible for executing the other scripts in the sequence we want:
```
.
├── package.json
├── scripts
│   └── test
│       ├── e2e
│       ├── index
│       └── unit
└── yarn.lock
```
```bash
#!/bin/bash
# scripts/test/index: runs the suites sequentially
scripts/test/unit
scripts/test/e2e
```
Keep in mind that the current working directory (CWD) will always be the one the script is invoked from; in this case, our root folder.
## Parallel watch
Another common scenario is having scripts that need to stay in watch mode; in other words, they hold the terminal session and keep listening for file changes so they can react to them.
```json
{
  "scripts": {
    "watch:css": "sass src/scss/main.scss public/css/main.css -s compressed",
    "watch:js": "webpack --config webpack.config.js --watch --mode=development"
  }
}
```
One way of booting both commands would be to open two tabs and run each command in its own tab. But that's tedious. What if we could run all the `watch` scripts at the same time in a single terminal tab?
To do that with scripty, all we have to do is create a folder called `watch` inside `scripts`, pretty much as we did before for `test`:

```
.
├── package.json
├── scripts
│   └── watch
│       ├── css
│       └── js
└── yarn.lock
```
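Again as a sketch, each file simply wraps the corresponding command from the `package.json` above:

```bash
#!/usr/bin/env bash
# scripts/watch/css
sass src/scss/main.scss public/css/main.css -s compressed
```

```bash
#!/usr/bin/env bash
# scripts/watch/js
webpack --config webpack.config.js --watch --mode=development
```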
But instead of only passing the word `scripty` to our npm script, we have to set an environment variable called `SCRIPTY_PARALLEL` to `true`:
```json
{
  "scripts": {
    "watch": "SCRIPTY_PARALLEL=true scripty"
  }
}
```
Now both will keep running.
## Caveats
The biggest caveat here is for Windows users.
If you're one of them, or you maintain a project that may run on a Windows machine, you'll need some special treatment, and I suggest you take a look at the scripty docs for those instructions.
## Conclusion
Scripty allows us to treat our npm scripts as code, with a dedicated file containing all the instructions for a given task.
It also makes it easier to roll back an incorrect script change and gives us a clean, isolated git history.
So be creative.
## Top comments (2)
Great tool if you know what you are doing :) However, there is a threat of overusing it. I think only complex scripts need to go into a separate file; otherwise, there will be many unnecessary files, which will create a mess for someone not very familiar with the project structure. Moreover, there's the matter of finding the script that does what you intend to do. In some repositories the starting script may be named `start`, in others it may be called `dev` or `development`, or there may be a third-party lib starting up the project; so, if a person is new to the project, they need to scan over the scripts to understand which one they need. This separation may be an additional difficulty for them.

I think this will work best for small teams or projects with good documentation; otherwise it may become a problem itself.
I often pick this tool for my personal projects just for the convenience.

But I do agree. I had a monorepo with a common script that runs a shared `test` command, and inside the folders where I don't need to pass any special flag, I just navigate back and invoke that bash script. I don't have this workflow documented because they're my personal projects and I don't believe anybody else will ever touch them, but I strongly believe in good, well-maintained documentation explaining such workflows.

At Warner, for example, we work in a very complex monorepo with more than 30 projects across 10 different teams. The workflow, how to set things up, and the whys of the tooling we use are all written in `.md` files, and that helps everyone understand the decisions taken. But I've also worked in places where we had some scripts here and there, with no docs or reference explaining why, and I did indeed need to scan them to understand.

Maybe it's more about how we organize ourselves and think of the team than about the tool we're using.

But you brought up a valid point... thanks for that :D