Deep Dive into nextpalestine: setup overview
A Feature-Rich Open-Source Blogging Platform Built with Next.js and Nest.js
Want to build a beautiful blogging platform with ease? Check out nextpalestine, our open-source web application: an effortless blogging platform with a powerful editor, user management, and more, available on GitHub. This post will delve into the setup of nextpalestine.
The root application folder:
nextpalestine is built on a monorepo structure:
.
├── frontend
│ └── package.json
├── backend
│ └── package.json
└── package.json
The root package.json:
{
"name": "nextpalestine-monorepo",
"version": "0.1.0",
"private": true,
"license": "GPL",
"description": "Blogging platform",
"repository": {
"type": "git",
"url": "<https://github.com/adelpro/nextpalestine.git>"
},
"author": "Adel Benyahia <adelpro@gmail.com>",
"authors": ["Adel Benyahia <adelpro@gmail.com>"],
"engines": {
"node": ">=18"
},
"devDependencies": {
"husky": "^8.0.0",
"npm-run-all": "^4.1.5"
},
"scripts": {
"backend": "npm run start:dev -w backend",
"frontend": "npm run dev -w frontend",
"frontend:prod": "npm run build -w frontend && npm run start:prod -w frontend",
"backend:prod": "npm run build -w backend && npm run start:prod -w backend",
"dev": "npm-run-all --parallel backend frontend",
"start": "npm-run-all --parallel backend:prod frontend:prod",
"docker:build": "docker compose down && docker compose up -d --build",
"prepare": "husky install"
},
"workspaces": ["backend", "frontend"],
"lint-staged": {
"**/*.{js,jsx,ts,tsx}": ["npx prettier --write", "npx eslint --fix"]
}
}
We are using the npm-run-all package to run multiple commands concurrently. For example, the dev script runs the backend and frontend commands in parallel, and start does the same with backend:prod and frontend:prod.
Husky is used to run pre-commit hooks. If we check the .husky folder in the root folder:
.
├── .husky
│ └── _
│ └── pre-commit
├── frontend
│ └── package.json
├── backend
│ └── package.json
└── package.json
The .husky folder:
We are using husky to lint and fix our code before committing it. To make this process faster, we use a second package, lint-staged, which lints only the staged files (newly added or modified code):
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"
# Exit immediately if any command exits with a non-zero status.
set -e
echo 'Linting project before committing'
npx lint-staged
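Because of set -e, a lint failure aborts the hook and Git blocks the commit. A minimal sketch of that behavior, with false standing in for a failing npx lint-staged run:

```shell
# `false` stands in for a failing `npx lint-staged` run (hypothetical).
# Under `set -e`, the first failing command aborts the hook, so the
# "commit allowed" line is never reached and the commit is blocked.
sh -c 'set -e; echo "Linting project before committing"; false; echo "commit allowed"' \
  || status=$?
echo "hook exited with status ${status:-0}"
```

A non-zero exit status from the hook is all Git needs to refuse the commit.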
The compose.yaml
This file is used to build a self-hosted, dockerized application that we can deploy to any system that runs Docker.
To deploy our application with Docker we have to:
1- Clone the repo: git clone https://github.com/adelpro/nextpalestine.git
2- Create a .env.production file in the frontend folder, based on the .env.example in the same folder.
3- Create a .env.production file in the backend folder, based on the .env.example in the same folder.
4- Run: docker compose up -d
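The four steps above can be collected into a small helper; this is only a sketch, and the copied .env.production files still need to be edited with real values before the stack will work:

```shell
# Sketch of the deploy steps above. The .env.example files are only
# templates: edit the copied .env.production files with real secrets
# before starting the stack.
deploy_nextpalestine() {
  git clone https://github.com/adelpro/nextpalestine.git
  cd nextpalestine || return 1
  cp frontend/.env.example frontend/.env.production
  cp backend/.env.example backend/.env.production
  docker compose up -d
}
```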
We will now explain the compose.yaml file:
services:
# Frontend: Next.js
frontend:
env_file:
- ./frontend/.env.production
container_name: nextpalestine-frontend
image: nextpalestine-frontend
build:
context: ./frontend
dockerfile: Dockerfile
args:
- DOCKER_BUILDKIT=1
ports:
- 3540:3540
restart: unless-stopped
depends_on:
backend:
condition: service_healthy
volumes:
- /app/node_modules
# For live reload if the source or env changes
- ./frontend/src:/app/src
networks:
- app-network
# Backend: NestJS
backend:
container_name: nextpalestine-backend
image: nextpalestine-backend
env_file:
- ./backend/.env.production
build:
context: ./backend
dockerfile: Dockerfile
args:
- DOCKER_BUILDKIT=1
ports:
- 3500:3500
restart: unless-stopped
depends_on:
mongodb:
condition: service_healthy
volumes:
- backend_v_logs:/app/logs
- backend_v_public:/app/public
- /app/node_modules
# For live reload if the source or env changes
- ./backend/src:/app/src
healthcheck:
test: ["CMD-SHELL", "curl -f http://backend:3500/health || exit 1"]
interval: 5s
timeout: 5s
retries: 5
start_period: 20s
networks:
- app-network
# Database: Mongodb
mongodb:
container_name: mongodb
image: mongo:latest
restart: unless-stopped
ports:
- 27018:27017
env_file:
- ./backend/.env.production
networks:
- app-network
volumes:
- mongodb_data:/data/db
- /etc/timezone:/etc/timezone:ro
#- type: bind
# source: ./mongo-entrypoint
# target: /docker-entrypoint-initdb.d/
healthcheck:
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
interval: 5s
timeout: 5s
retries: 5
start_period: 20s
# Database UI: Mongo Express
mongo-express:
image: mongo-express:1.0.2-20-alpine3.19
container_name: mongo-express
restart: always
ports:
- 8081:8081
env_file:
- ./backend/.env.production
depends_on:
- mongodb
networks:
- app-network
volumes:
backend_v_logs:
name: nextpalestine_v_backend_logs
backend_v_public:
name: nextpalestine_v_backend_public
mongodb_data:
name: nextpalestine_v_mongodb_data
driver: local
networks:
app-network:
driver: bridge
As you can see, we have four services:
1- mongo-express:
mongo-express is a web-based MongoDB admin interface
2- mongo:
The official mongo Docker image, to which we have added a health check; we will need it later for the backend service.
healthcheck:
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
interval: 5s
timeout: 5s
retries: 5
start_period: 20s
We have also loaded the .env file from the backend folder
env_file:
- ./backend/.env.production
We will need these .env variables:
MONGO_INITDB_ROOT_USERNAME=root
MONGO_INITDB_ROOT_PASSWORD=password
MONGO_DATABASE_NAME=database
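The MONGO_INITDB_* variables are read by the official mongo image to create the root user on first startup. The backend then reaches the database over the compose network; a sketch of how a connection URI could be assembled from these variables (the exact variable name the backend reads, and the authSource=admin option, are assumptions here):

```shell
# Hypothetical: assemble the URI the backend would use to reach the
# `mongodb` service on the compose network (internal port 27017).
MONGO_INITDB_ROOT_USERNAME=root
MONGO_INITDB_ROOT_PASSWORD=password
MONGO_DATABASE_NAME=database

MONGO_URI="mongodb://${MONGO_INITDB_ROOT_USERNAME}:${MONGO_INITDB_ROOT_PASSWORD}@mongodb:27017/${MONGO_DATABASE_NAME}?authSource=admin"
echo "$MONGO_URI"
```

Inside the compose network the hostname is the service name, mongodb, and the port is the container port 27017, not the remapped host port.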
We are also creating a persisted (named) volume to keep data between different builds. The second line is a hack to sync the time zone between the Docker container and the host that runs it: it ensures that the timestamps in your database match the host system's timezone.
volumes:
- mongodb_data:/data/db
- /etc/timezone:/etc/timezone:ro
We have also changed the exposed host port to 27018 (27018:27017). This prevents any conflict with a MongoDB database installed on the host system using the default port (27017).
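From the host, you therefore connect through the remapped port. A hypothetical helper, using the credentials from your .env.production (the database name and authSource are assumptions):

```shell
# Hypothetical: open a mongosh session against the containerized MongoDB
# from the host, via the remapped port 27018 rather than the default 27017.
connect_mongo() {
  mongosh "mongodb://root:password@localhost:27018/database?authSource=admin"
}
```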
3- backend (Nest.js)
backend:
container_name: nextpalestine-backend
image: nextpalestine-backend
env_file:
- ./backend/.env.production
build:
context: ./backend
dockerfile: Dockerfile
args:
- DOCKER_BUILDKIT=1
ports:
- 3500:3500
restart: unless-stopped
depends_on:
mongodb:
condition: service_healthy
volumes:
- backend_v_logs:/app/logs
- backend_v_public:/app/public
- /app/node_modules
# For live reload if the source or env changes
- ./backend/src:/app/src
healthcheck:
test: ["CMD-SHELL", "curl -f http://backend:3500/health || exit 1"]
interval: 5s
timeout: 5s
retries: 5
start_period: 20s
networks:
- app-network
This image is built from ./backend/Dockerfile with the DOCKER_BUILDKIT=1 argument, which improves build speed and caching.
- Environment variables are loaded from ./backend/.env.production.
- It depends on the mongodb service: the backend container will not start until MongoDB is fully started and healthy.
- The backend service has its own health check, which we will use in the frontend (Next.js) service.
- We are persisting the logs and public folders using named volumes.
- The volume /app/node_modules is used to persist node_modules.
- The bind mount ./backend/src:/app/src hot-reloads the container whenever the code changes.
- Port 3500:3500 is used for the backend.
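To verify the backend from the host, you can poll the same /health endpoint the compose healthcheck uses; a sketch mirroring its retry settings:

```shell
# Sketch: poll the backend /health endpoint from the host until it answers,
# mirroring the compose healthcheck (5 retries, mapped port 3500).
wait_for_backend() {
  url="${1:-http://localhost:3500/health}"
  for _ in 1 2 3 4 5; do
    if curl -fs "$url" >/dev/null 2>&1; then
      echo "backend is healthy"
      return 0
    fi
    sleep 2
  done
  echo "backend did not become healthy"
  return 1
}
```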
We will now explain the backend's Dockerfile (in the backend folder):
ARG NODE=node:21-alpine3.19
# Stage 1: builder
FROM ${NODE} AS builder
# Combine commands to reduce layers
RUN apk add --no-cache libc6-compat \
&& apk add --no-cache curl \
&& addgroup --system --gid 1001 nodejs \
&& adduser --system --uid 1001 nestjs
WORKDIR /app
COPY --chown=nestjs:nodejs package*.json ./
RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
yarn install --frozen-lockfile
COPY --chown=nestjs:nodejs . .
ENV NODE_ENV production
# Generate the production build. The build script runs "nest build" to compile the application.
RUN yarn build
# Install only the production dependencies and clean cache to optimize image size.
RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
yarn install --production --frozen-lockfile && yarn cache clean
USER nestjs
# Stage 2: runner
FROM ${NODE} AS runner
RUN apk add --no-cache libc6-compat \
&& apk add --no-cache curl \
&& addgroup --system --gid 1001 nodejs \
&& adduser --system --uid 1001 nestjs
WORKDIR /app
# Set to production environment
ENV NODE_ENV production
# Copy only the necessary files
COPY --chown=nestjs:nodejs --from=builder /app/dist ./dist
COPY --chown=nestjs:nodejs --from=builder /app/logs ./logs
COPY --chown=nestjs:nodejs --from=builder /app/public ./public
COPY --chown=nestjs:nodejs --from=builder /app/node_modules ./node_modules
COPY --chown=nestjs:nodejs --from=builder /app/package*.json ./
# Set Docker as non-root user
USER nestjs
EXPOSE 3500
ENV HOSTNAME "0.0.0.0"
CMD ["node", "dist/main.js"]
This is a multi-stage Dockerfile built on Alpine Linux. We first install an extra package, libc6-compat, then create a separate user for our application as an extra security layer. We also install curl, which we will use to check whether the container is healthy. Then we copy the needed folders with the right permissions, and finally run our application using node dist/main.js.
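The same image can also be built and run by hand, outside compose; a sketch, with BuildKit enabled and the env file supplied manually:

```shell
# Sketch: build and run the backend image manually, outside docker compose.
run_backend_image() {
  DOCKER_BUILDKIT=1 docker build -t nextpalestine-backend ./backend
  docker run --rm -p 3500:3500 \
    --env-file ./backend/.env.production \
    nextpalestine-backend
}
```

Note that outside compose the container cannot resolve the mongodb hostname, so this is mainly useful for debugging the build itself.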
4- frontend (Next.js)
# Args
ARG NODE=node:21-alpine3.19
# Stage 1: builder
FROM ${NODE} AS builder
RUN apk add --no-cache libc6-compat \
&& addgroup --system --gid 1001 nodejs \
&& adduser --system --uid 1001 nextjs
WORKDIR /app
COPY --chown=nextjs:nodejs package*.json ./
RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
yarn install --frozen-lockfile
COPY --chown=nextjs:nodejs . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# The following line disables telemetry during the build.
ENV NEXT_TELEMETRY_DISABLED 1
ENV NEXT_PRIVATE_STANDALONE true
ENV NODE_ENV production
# Generate the production build
RUN yarn build
# Install only the production dependencies and clean cache
RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
yarn install --frozen-lockfile --production && yarn cache clean
USER nextjs
# Stage 2: runner
FROM ${NODE} AS runner
RUN apk add --no-cache libc6-compat \
&& addgroup --system --gid 1001 nodejs \
&& adduser --system --uid 1001 nextjs
WORKDIR /app
ENV NODE_ENV production
# The following line disables telemetry during runtime.
ENV NEXT_TELEMETRY_DISABLED 1
COPY --from=builder /app/public ./public
# Set the correct permission for prerender cache
RUN mkdir .next
RUN chown nextjs:nodejs .next
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
# Copy next.config.js only if it's not the default
COPY --from=builder --chown=nextjs:nodejs /app/next.config.js ./
COPY --from=builder --chown=nextjs:nodejs /app/package*.json ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
# Set Docker as non-root user
USER nextjs
EXPOSE 3540
ENV PORT 3540
ENV HOSTNAME "0.0.0.0"
# server.js is created by next build from the standalone output
# https://nextjs.org/docs/pages/api-reference/next-config-js/output
CMD ["node", "server.js"]
In this multi-stage Dockerfile:
- We are using a slim Linux image based on Alpine
- We are creating a new user as an extra security layer
- We are using a special technique to re-use the yarn cache from previous builds (DOCKER_BUILDKIT must be enabled):
--mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
- We can (optionally) disable Next.js telemetry with this line:
ENV NEXT_TELEMETRY_DISABLED 1
- Then we build our Next.js app as a standalone application and copy the necessary folders
- We set a custom port: 3540
ENV PORT 3540
- And starting the app using:
node server.js
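The standalone output is what makes the small runner stage possible: next build emits a self-contained server.js plus the static assets the Dockerfile copies. A sketch for checking that output after a local build (run from the frontend folder):

```shell
# Sketch: verify the standalone output that the runner stage copies.
# `server.js` under .next/standalone is the entrypoint started by CMD.
check_standalone() {
  if [ -f .next/standalone/server.js ] && [ -d .next/static ]; then
    echo "standalone output ready"
  else
    echo "standalone output missing"
    return 1
  fi
}
```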