
Hugo Di Francesco

Originally published at codewithhugo.com

An enterprise-style Node.js REST API setup with Docker Compose, Express and Postgres

The why and how of an enterprise-style Node.js application: a setup that’s easy to test and extend, using battle-hardened technologies like Express.js and Postgres, with Docker Compose to run it locally.

Dive into the code on GitHub directly: github.com/HugoDF/express-postgres-starter.

A single command to bootstrap the whole application stack

There is only one prerequisite to run and develop the application on a new machine: Docker Desktop installed and running.

Run docker-compose up in the root of the project to bring up Postgres and the Express application server in development mode.

Based on docker-compose.yml, the application server is bound to localhost:3000. The port that’s bound on the host machine (ie. the dev machine, not the Docker containers) can be re-mapped by changing the first 3000 in the 3000:3000 of services.app.ports (eg. 3001:3000 would serve the app at localhost:3001). The second 3000 in that line is the port the app container should be listening on (ie. what we configure our Express app to listen on). The Express application is configured to listen on whatever PORT is defined in the environment; in this case, we’re looking at PORT: 3000 in services.app.environment.

Postgres is exposed on the host (dev machine, not Docker containers) port 35432. The connection string is postgres://user:pass@localhost:35432/db (username, password and database name are defined in the services.postgres.environment block of docker-compose.yml). Internally it’s accessible at postgres:5432 (<name-of-the-service>:<port>), hence why we set services.app.environment.DATABASE_URL to postgres://user:pass@postgres:5432/db.
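For reference, a minimal sketch of a db module that builds a pg (node-postgres) connection pool from DATABASE_URL might look like the following. The module path and shape here are assumptions for illustration; the repo’s actual persistence code may differ.

// Hypothetical sketch of a src/persistence/db.js module.
const {Pool} = require('pg');

// pg accepts the full connection string, eg. postgres://user:pass@postgres:5432/db.
const pool = new Pool({connectionString: process.env.DATABASE_URL});

module.exports = {
  // Delegate to pool.query so callers can run db.query(...).
  query: (...args) => pool.query(...args)
};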

The start command for the app service is npm start, as defined in the Dockerfile, but docker-compose.yml overrides its CMD with npm run dev, which runs the application using nodemon (auto-restart on file change).

Another point of interest is the services.app.volumes entries. - .:/app/ syncs the local directory to /app, which is the WORKDIR defined in the Dockerfile. - /app/node_modules makes sure that the local node_modules directory (outside of Docker) doesn’t get synced to the container; it’s there as an exception to the .:/app/ volume mount.

The docker-compose.yml and Dockerfile for the app are as follows:

# docker-compose.yml
version: "3"
services:
  app:
    build: .
    depends_on:
      - postgres
    environment:
      DATABASE_URL: postgres://user:pass@postgres:5432/db
      NODE_ENV: development
      PORT: 3000
    ports:
      - "3000:3000"
    command: npm run dev
    volumes:
      - .:/app/
      - /app/node_modules

  postgres:
    image: postgres:10.4
    ports:
      - "35432:5432"
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: db

Dockerfile:

FROM node:10

WORKDIR /app

COPY ./package.json .
COPY ./package-lock.json .

RUN npm install

COPY . .

EXPOSE 3000

CMD npm start

As mentioned, the CMD is overridden by docker-compose.yml with npm run dev. We’ll look at the different scripts in the following section.

Accessing the application

The application is available at http://localhost:3000.

curl http://localhost:3000
OK

Connecting directly to Postgres

You can connect to Postgres using the psql client:

psql postgres://user:pass@localhost:35432/db

Application lifecycle and scripts

Here are the relevant fields in package.json:

{
  "name": "express-postgres-starter",
  "scripts": {
    "test": "xo",
    "lint": "xo",
    "format": "xo --fix",
    "start": "node ./bin/start.js",
    "dev": "nodemon ./bin/start.js",
    "migrate": "node ./bin/migrate.js",
    "migrate:create": "migrate create --migrations-dir='./src/migrations'"
  },
  "dependencies": {
    "bcrypt": "^3.0.6",
    "client-sessions": "^0.8.0",
    "express": "^4.16.4",
    "helmet": "^3.16.0",
    "morgan": "^1.9.1",
    "pg": "^7.9.0",
    "sql-template-strings": "^2.2.2",
    "uuid": "^3.3.2"
  },
  "devDependencies": {
    "nodemon": "^1.18.11",
    "xo": "^0.24.0"
  },
  "xo": {
    "prettier": true,
    "space": true
  }
}

npm start vs npm run dev

npm start runs node ./bin/start.js.

The start.js script only contains glue code: it reads PORT from the environment and calls Server.start with that value.

const Server = require('../server');

Server.start(process.env.PORT);

npm run dev runs the same script with nodemon ./bin/start.js, which means the server restarts whenever a JavaScript file changes.

Linter setup

This project uses xo, the “JavaScript happiness style linter”. It’s set up with prettier and spaces instead of tabs.

npm run format will run xo --fix, which leans on prettier to format all the code.

npm run lint will run just xo which is a lint run without overwriting any of the code.

Ideally one could also use husky and/or lint-staged to run the linter/formatter on commit or push.
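For illustration, wiring husky and lint-staged up might look like the following package.json additions (a hypothetical sketch using the husky 1.x-style configuration; this isn’t part of the repo):

{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "lint-staged": {
    "*.js": ["xo --fix", "git add"]
  }
}

With this in place, every commit would run xo --fix on the staged JavaScript files and re-stage them before the commit completes.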

Database setup & management

Since we’re using Postgres with the Node Postgres (pg on npm) driver (instead of an ORM like Sequelize), we need to set up a system to get our relational database’s schema in order.

To this end we use node-migrate, an “abstract migration framework for node”, with a custom “state storage” module at src/persistence/postgres-state-storage.js (see postgres-state-storage on GitHub). It’s lifted and slightly adapted from the node-migrate documentation.

We also use a custom ./bin/migrate.js (see migrate.js on GitHub) which can be called with up or down as arguments.

It’s all glued together using npm scripts in the package.json:

  • npm run migrate up will run the migrations.
  • npm run migrate down will roll back the migrations.
  • npm run migrate:create <migration-name> will create a new migration file in the src/migrations folder (see the sketch after this list).
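For illustration, a generated migration might be filled in like this (a hypothetical sketch, not the repo’s actual migration code: node-migrate calls up and down with a callback, and the db module path is an assumption):

// Hypothetical example of a file under src/migrations.
const db = require('../persistence/db');

module.exports.up = next => {
  // Apply the schema change this migration introduces.
  db.query('CREATE TABLE IF NOT EXISTS examples (id UUID PRIMARY KEY);')
    .then(() => next())
    .catch(next);
};

module.exports.down = next => {
  // Undo the change so the migration can be rolled back.
  db.query('DROP TABLE IF EXISTS examples;')
    .then(() => next())
    .catch(next);
};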

To run the migrations inside of Docker Compose, start a bash instance inside the app container:

docker-compose run app bash

Followed by:

npm run migrate up
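Alternatively, docker-compose run app npm run migrate up runs the migrations in a one-off container in a single step.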

Express API setup

The Express API is located in src/api.

Application routes for resources are defined in src/api/index.js, see src/api/index.js on GitHub.
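A rough sketch of what that index module might contain follows; the mount paths are an assumption based on the /api/session endpoints mentioned later, and the real file on GitHub may differ:

// Hypothetical sketch of src/api/index.js.
const express = require('express');

const user = require('./user');
const session = require('./session');

const router = express.Router();

// Mount each resource's router on its own path.
router.use('/api/user', user);
router.use('/api/session', session);

module.exports = router;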

Application entrypoint

The application entry point is server.js. It handles global concerns.

server.js exposes a module with start and stop functions.

It defines an application with / and /health routes that send a 200 status code.

That includes the morgan request logger, Helmet (which sets sane defaults for application security), the JSON body-parsing middleware built into Express (express.json), and Mozilla’s client-sessions for encrypted client sessions stored in cookies.

It also mounts the API routes we’ll define in our src/api folder.

server.js looks like the following:

const express = require('express');

const morgan = require('morgan');
const clientSession = require('client-sessions');
const helmet = require('helmet');

const {SESSION_SECRET} = require('./config');

const app = express();
const api = require('./src/api');

app.get('/', (req, res) => res.sendStatus(200));
app.get('/health', (req, res) => res.sendStatus(200));

app.use(morgan('short'));
app.use(express.json());
app.use(
  clientSession({
    cookieName: 'session',
    secret: SESSION_SECRET,
    duration: 24 * 60 * 60 * 1000
  })
);
app.use(helmet());

app.use(api);

let server;
module.exports = {
  start(port) {
    server = app.listen(port, () => {
      console.log(`App started on port ${port}`);
    });
    return app;
  },
  stop() {
    server.close();
  }
};

API Architecture: Presentation Domain Data Layering

This application loosely follows Presentation Domain Data Layering:

  • Presentation is dealt with in the ./src/api folder
  • Domain is dealt with in the ./src/modules folder. It’s currently non-existent since we’ve only got generic user and session resources.
  • Data is dealt with in the ./src/persistence folder

Architecture example: User + Session management

Session management is done through a custom sessions table and /api/session endpoints (see ./src/api/session.js), leveraging client-sessions.

Presentation: an HTTP service with Express

The “user create” action is a good example of what falls into the HTTP presentation layer.

Request payload validation

This section of ./src/api/user.js is HTTP body content validation, which is one of the things one might expect the presentation layer to do (see the code in context on GitHub src/api/user.js#L8-L13):

const {email, password} = req.body;
if (!email || !password) {
  return res
    .status(400)
    .json({message: 'email and password must be provided'});
}

Response based on domain function outputs

Based on whether the domain or data layer returns a user or not, the presentation module will respond with 400 (can’t create the user again) or 200 (created the user) (see the code in context on GitHub src/api/user.js#L16-L20).

if (!user) {
  return res.status(400).json({message: 'User already exists'});
}

return res.status(200).json(user);

Domain: orchestration

In the case of the above “user create” endpoint, the only bit of domain logic is the call to User.create (hence why it’s inline in the handler instead of a separate module):

const user = await User.create(email, password);

Refactoring tightly coupled presentation and domain

Another feature worth examining is the “session create”.

The following is the bulk of the endpoint (omitting error handling). It takes email and password from the request body, attempts to find a matching user, responds with a 403 if the user doesn’t exist or the passwords don’t match, and otherwise creates a session and responds with a 201.

const {email, password} = req.body;
const user = await User.find(email);
if (!user || !(await bcrypt.compare(password, user.password))) {
  return res.status(403).json({});
}

const sessionId = await Session.create(user.id);
req.session.id = sessionId;
res.status(201).json();

One way to re-write this following presentation/domain/data layering would be:

// probably should go into ./src/modules/user.js
async function findUser(email, password) {
  const user = await User.find(email);
  if (!user || !(await bcrypt.compare(password, user.password))) {
    return null;
  }
  return user;
}

// in ./src/modules/session.js
function createSession(userId) {
  return Session.create(userId);
}

// in the HTTP handler
const {email, password} = req.body;
const user = await findUser(email, password);
if (!user) {
  return res.status(403).json({});
}
req.session.id = await createSession(user.id);
res.status(201).json();

Note how the presentation layer doesn’t know about the data layer any more; it only talks to the domain layer.

Data: raw Postgres with sql-template-strings

One of the huge downsides of writing Postgres queries yourself is the risk of allowing SQL injection. To mitigate this, we should use Postgres prepared statements.

The issue with prepared statements is that they take a bit of brainpower to parse, and it’s easy to introduce off-by-one errors (how many placeholders do you have, which order are the values in, etc.):

await db.query(
  'INSERT INTO users (id, email, password) VALUES ($1, $2, $3) RETURNING id, email;',
  [uuid(), email, hashedPassword]
);

To get the ergonomics of interpolation (easier to read) with the benefits of prepared statements (smaller attack surface), we use the sql-template-strings package, which allows us to write the above as:

const sql = require('sql-template-strings');
await db.query(sql`
  INSERT INTO users (id, email, password)
  VALUES (${uuid()}, ${email}, ${hashedPassword})
    RETURNING id, email;
`);
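Under the hood, the sql tag produces a statement object whose text property contains numbered placeholders and whose values property holds the interpolated values, which pg then executes as a parameterized query:

const sql = require('sql-template-strings');

const email = 'user@example.com';
const statement = sql`SELECT id FROM users WHERE email = ${email}`;

console.log(statement.text); // SELECT id FROM users WHERE email = $1
console.log(statement.values); // [ 'user@example.com' ]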

Separating domain from data

Let’s have a look at how one of the session methods is implemented:

module.exports = {
  async create(userId) {
    const id = uuid();
    await db.query(sql`
    INSERT INTO sessions (id, user_id)
      VALUES (${id}, ${userId});
    `);
    return id;
  },
};

There’s an argument to be made that the uuid generation is a database concern, since it’s also enforced at the schema level (see the migration that creates the sessions table on GitHub at src/migrations/1550969025172-authentication.js#L13-L16).
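Taking that argument to its conclusion, Postgres could own id generation entirely. The following is a hypothetical migration sketch, not the repo’s actual migration; gen_random_uuid assumes the pgcrypto extension is available (it ships with Postgres 10):

// Hypothetical: let Postgres generate session ids at the schema level.
const db = require('../persistence/db');

module.exports.up = next => {
  db.query(`
    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    CREATE TABLE sessions (
      id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
      user_id UUID NOT NULL REFERENCES users (id)
    );
  `)
    .then(() => next())
    .catch(next);
};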

It could also be implemented in src/modules/session.js:

const uuid = require('uuid/v4');

const session = require('../persistence/session');

async function createSession(userId) {
  const sessionId = uuid();
  await session.create(sessionId, userId);
  return sessionId;
}

With a matching updated data implementation in ./src/persistence/session.js:

module.exports = {
  async create(sessionId, userId) {
    await db.query(sql`
    INSERT INTO sessions (id, user_id)
      VALUES (${sessionId}, ${userId});
    `);
  },
};

That’s the basics of an enterprise-style REST API with Node.js and Postgres following Presentation/Domain/Data layering.


Top comments (6)

Tim Bachmann

Is there a specific reason you use npm install as opposed to npm ci? The ci command has the advantage that it installs the exact versions that are specified in package-lock.json so you get repeatable builds.

Hugo Di Francesco

Definite oversight on my part 🙂 thanks for pointing it out

António Avelar

Hi Hugo,
the way you specified your queries (with template strings), doesn’t that make you vulnerable to db attacks like SQL injection?

I also noticed that in order to specify the query with template strings you used an external lib. That lib just translates the template string into a prepared statement, right?

Hugo Di Francesco

Yes it does: the ergonomics of template strings with the safety of prepared statements.

Theofanis Despoudis

It’s best if you use TypeScript and pg-promise as they are more enterprise-like choices.

Hugo Di Francesco

Re TypeScript, I didn’t want to add a build step in.

Re pg-promise, I just picked any old Postgres client; they’re all pretty solid.