Table of Contents
- Project Overview
- Prerequisites
- Phase 1: Infrastructure Setup
- Phase 2: Application Setup and Containerization
- Phase 3: Automating with Jenkins CI/CD
- Phase 4: Securing Secrets with HashiCorp Vault
- Phase 5: Kubernetes Deployment
- Phase 6: Security Hardening in Kubernetes
- Conclusion
Project Overview
In this DevSecOps project, you will deploy a secure full-stack Node.js web application using Jenkins, Docker, Kubernetes, and HashiCorp Vault for secrets management. The project will focus on automating the entire deployment process with security integration at every stage. We'll scan Docker images for vulnerabilities, manage sensitive information securely, and deploy the application in a Kubernetes cluster, ensuring that security is embedded throughout.
Prerequisites
- AWS Account (for hosting the EKS Kubernetes cluster).
- Local or cloud-based Jenkins server.
- Docker and Kubernetes CLI (kubectl) installed on your local machine.
- Terraform installed for infrastructure provisioning.
- HashiCorp Vault installed for secrets management.
- Basic understanding of Node.js, Docker, Kubernetes, and Jenkins.
Phase 1: Infrastructure Setup
1.1 Provision Kubernetes Cluster (EKS)
We will use Terraform to provision an EKS (Elastic Kubernetes Service) cluster on AWS.
a. Install Terraform:
# Install Terraform (Linux x86_64; on macOS use Homebrew or the darwin build instead)
wget https://releases.hashicorp.com/terraform/1.0.11/terraform_1.0.11_linux_amd64.zip
unzip terraform_1.0.11_linux_amd64.zip
sudo mv terraform /usr/local/bin/
terraform --version
b. Define the Terraform configuration:
Create a file eks-cluster.tf for provisioning an EKS cluster.
provider "aws" {
region = "us-west-2"
}
resource "aws_eks_cluster" "my_cluster" {
name = "devsecops-cluster"
role_arn = aws_iam_role.eks_role.arn
vpc_config {
subnet_ids = [aws_subnet.subnet1.id, aws_subnet.subnet2.id]
}
}
resource "aws_iam_role" "eks_role" {
name = "eks-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
c. Apply the Terraform configuration:
terraform init
terraform apply
Once the cluster is created, configure kubectl to point to your new cluster:
aws eks --region us-west-2 update-kubeconfig --name devsecops-cluster
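To confirm that kubectl now targets the new cluster, run a quick sanity check (the exact output will vary with your account and region):
kubectl config current-context
kubectl cluster-info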
1.2 Set Up Jenkins Server
Install Jenkins on a local machine or a cloud server. Jenkins will automate the building, testing, and deployment of our Node.js application.
a. Install Jenkins:
# Install Java (required for Jenkins)
sudo apt update
sudo apt install openjdk-11-jdk
# Add Jenkins repository and install Jenkins
wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt update
sudo apt install jenkins
# Start Jenkins
sudo systemctl start jenkins
sudo systemctl enable jenkins
Once installed, configure Jenkins plugins for Kubernetes, Docker, and Vault integration.
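If you prefer the command line over the plugin manager UI, the Jenkins CLI can install them in one shot. A minimal sketch, assuming Jenkins runs at localhost:8080 and that you substitute your own admin API token (the plugin IDs shown are the Kubernetes, Docker Pipeline, and HashiCorp Vault plugins; adjust the list to your setup):
# Download the Jenkins CLI from your running controller
wget http://localhost:8080/jnlpJars/jenkins-cli.jar
# Install the plugins without waiting for a restart
java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:YOUR_API_TOKEN \
    install-plugin kubernetes docker-workflow hashicorp-vault-plugin -deploy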
Phase 2: Application Setup and Containerization
2.1 Node.js Web Application (Source Code)
For this project, we'll use a simple Node.js web application. Create the following files:
app.js:
const express = require('express');
const app = express();
const port = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.send('Hello from DevSecOps Node.js App!');
});

app.listen(port, () => {
  console.log(`App listening at http://localhost:${port}`);
});
package.json:
{
  "name": "devsecops-nodejs-app",
  "version": "1.0.0",
  "description": "A simple Node.js app for DevSecOps project",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}
Run npm install to install the dependencies:
npm install
2.2 Create Dockerfile
Create a Dockerfile for containerizing the Node.js app.
# Use an official Node.js runtime as a parent image
FROM node:14
# Set the working directory
WORKDIR /usr/src/app
# Copy the package.json and install dependencies
COPY package*.json ./
RUN npm install
# Bundle app source code
COPY . .
# Expose the application port
EXPOSE 3000
# Run the application
CMD ["npm", "start"]
2.3 Build and Test Docker Image
a. Build the Docker image:
docker build -t devsecops-nodejs-app .
b. Run the Docker container locally:
docker run -p 3000:3000 devsecops-nodejs-app
Visit http://localhost:3000 to see the running app.
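You can also hit the endpoint from a terminal (assuming curl is installed):
curl http://localhost:3000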
Phase 3: Automating with Jenkins CI/CD
3.1 Jenkins Pipeline Setup
Set up a Jenkins pipeline to automate building the Docker image, scanning for vulnerabilities, pushing the image to DockerHub, and deploying to Kubernetes.
Create a Jenkinsfile in your project repository:
pipeline {
    agent any

    stages {
        stage('Clone Repository') {
            steps {
                git 'https://github.com/your-repo.git'
            }
        }
        stage('Build Docker Image') {
            steps {
                script {
                    // Tag with your DockerHub namespace so the push stage can publish it
                    docker.build("your-dockerhub-username/devsecops-nodejs-app:${env.BUILD_ID}")
                }
            }
        }
        stage('Security Scan') {
            steps {
                // Use Trivy to scan the Docker image for vulnerabilities
                // (double quotes so Groovy interpolates ${env.BUILD_ID})
                sh "docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy:latest image your-dockerhub-username/devsecops-nodejs-app:${env.BUILD_ID}"
            }
        }
        stage('Push Image to DockerHub') {
            steps {
                script {
                    docker.withRegistry('https://index.docker.io/v1/', 'dockerhub-credentials') {
                        docker.image("your-dockerhub-username/devsecops-nodejs-app:${env.BUILD_ID}").push()
                    }
                }
            }
        }
        stage('Deploy to Kubernetes') {
            steps {
                kubernetesDeploy(kubeconfigId: 'kubeconfig', configs: 'k8s/deployment.yaml', enableConfigSubstitution: true)
            }
        }
    }
}
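As written, the Security Scan stage only reports findings. If you want it to gate the build, Trivy can return a non-zero exit code on serious vulnerabilities; a sketch of the standalone command (the severity threshold is a choice, not a Trivy requirement):
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    aquasec/trivy:latest image --exit-code 1 --severity HIGH,CRITICAL devsecops-nodejs-app
Used inside the pipeline's sh step, a non-zero exit code fails the stage and stops the image from being pushed.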
Phase 4: Securing Secrets with HashiCorp Vault
Integrate HashiCorp Vault to securely manage sensitive data like database credentials, API keys, and tokens.
4.1 Install HashiCorp Vault:
# Install Vault
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install vault
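If you just want to experiment locally before wiring Vault into the cluster, you can start a dev-mode server (in-memory and automatically unsealed, so for testing only; the address below is the dev-mode default):
# Start a throwaway dev server; it prints a root token on startup
vault server -dev &
export VAULT_ADDR='http://127.0.0.1:8200'
vault status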
4.2 Configure Vault for Kubernetes:
vault auth enable kubernetes
# Run this from an environment that can read the service account token (e.g. inside the Vault pod);
# note the cluster server value already includes https://, so don't prefix it again
vault write auth/kubernetes/config \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_host="$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.server}')" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
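With the Kubernetes auth method configured, Vault still needs the actual secret, a policy that allows reading it, and a role that binds the policy to a Kubernetes service account. A minimal sketch, assuming the KV v2 engine mounted at secret/ and the default service account in the default namespace (all names here are illustrative):
# Store the application secret (skip the enable step if KV is already mounted, e.g. in dev mode)
vault secrets enable -path=secret kv-v2
vault kv put secret/devsecops-app secret-message="This is a secret from Vault!"
# Policy that allows reading the secret
vault policy write devsecops-app - <<EOF
path "secret/data/devsecops-app" {
  capabilities = ["read"]
}
EOF
# Role that maps the policy to a Kubernetes service account
vault write auth/kubernetes/role/devsecops-app \
    bound_service_account_names=default \
    bound_service_account_namespaces=default \
    policies=devsecops-app \
    ttl=24h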
Phase 5: Kubernetes Deployment
In this phase, we'll deploy the Node.js application to the Kubernetes cluster using Kubernetes manifests. The deployment process involves creating resources such as Deployment and Service objects, which define how our application is run within the cluster.
5.1 Create Kubernetes Deployment File (deployment.yaml)
The Deployment resource ensures that the desired number of application instances (pods) are running. If a pod fails, Kubernetes will automatically recreate it to meet the specified replica count.
Create a file named deployment.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devsecops-nodejs-app
  labels:
    app: devsecops-nodejs-app
spec:
  replicas: 3 # We will run 3 instances (pods) of our app
  selector:
    matchLabels:
      app: devsecops-nodejs-app
  template:
    metadata:
      labels:
        app: devsecops-nodejs-app
    spec:
      containers:
        - name: devsecops-nodejs-container
          image: your-dockerhub-username/devsecops-nodejs-app:latest # Use your DockerHub repo
          ports:
            - containerPort: 3000
          env:
            - name: SECRET_MESSAGE
              valueFrom:
                secretKeyRef:
                  name: app-secret
                  key: secret-message # Will get secret value from Vault through Kubernetes Secrets
---
apiVersion: v1
kind: Service
metadata:
  name: devsecops-nodejs-service
spec:
  selector:
    app: devsecops-nodejs-app
  ports:
    - protocol: TCP
      port: 80 # Expose the service on port 80
      targetPort: 3000 # Internally forward traffic to container port 3000
  type: LoadBalancer # Exposes the app to the internet
5.2 Deploy the Application
Now, use kubectl to deploy the application to the Kubernetes cluster.
a. Apply the Deployment and Service:
kubectl apply -f deployment.yaml
b. Verify the Deployment:
kubectl get deployments
c. Check the Pods:
kubectl get pods
d. Get the External IP of the Service:
kubectl get services
Once the service is running, you can access the Node.js application using the external IP address provided by the LoadBalancer.
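On EKS, the LoadBalancer Service usually publishes a DNS hostname rather than a raw IP; once the load balancer is provisioned, you can read it directly from the Service:
kubectl get service devsecops-nodejs-service \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'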
5.3 Handling Secrets with Kubernetes
In the Deployment manifest, we reference a Kubernetes Secret object whose value will ultimately come from HashiCorp Vault. We will integrate Vault with Kubernetes so that sensitive data such as API keys, passwords, and database credentials can be securely injected into pods at runtime.
To create a secret in Kubernetes, run:
kubectl create secret generic app-secret \
--from-literal=secret-message="This is a secret from Vault!"
This secret will be passed to the application container via the environment variable SECRET_MESSAGE, and your application can read it securely.
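To confirm the value actually reached the container, you can print the variable from one of the running pods (fine for this demo value, but avoid echoing real secrets in shared terminals):
kubectl exec deploy/devsecops-nodejs-app -- printenv SECRET_MESSAGE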
Phase 6: Security Hardening in Kubernetes
In this phase, we’ll enhance the security of the Kubernetes deployment by implementing PodSecurityPolicies, RBAC, and Network Policies.
6.1 PodSecurityPolicies (PSP)
PodSecurityPolicies control the security-sensitive aspects of a pod, such as privileged container execution, volume types, and host networking. For example, you can create a policy that prevents containers from running as the root user. Note that PSPs were deprecated in Kubernetes 1.21 and removed in 1.25, where Pod Security Admission replaces them; the example below applies to clusters that still support PSPs.
Example PodSecurityPolicy (pod-security-policy.yaml):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-psp
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  fsGroup:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  volumes:
    - configMap
    - secret
Apply the policy to the Kubernetes cluster:
kubectl apply -f pod-security-policy.yaml
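A PodSecurityPolicy only takes effect for pods whose creating service account is authorized to use it, so it is worth checking that authorization exists. A quick check, assuming the workload runs under the default service account in the default namespace:
kubectl auth can-i use podsecuritypolicy/restricted-psp \
    --as=system:serviceaccount:default:default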
6.2 Role-Based Access Control (RBAC)
RBAC allows fine-grained access control in Kubernetes. Define roles that grant specific permissions to users or services.
Example RBAC Configuration (rbac.yaml):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: app-deployer
rules:
  - apiGroups: ["apps", "extensions"]
    resources: ["deployments"]
    verbs: ["create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-deployer-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-deployer
subjects:
  - kind: User
    name: jenkins # The Jenkins identity that deploys the app (use kind: ServiceAccount if Jenkins runs in-cluster)
    apiGroup: rbac.authorization.k8s.io
Apply the RBAC policy:
kubectl apply -f rbac.yaml
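You can verify the binding by impersonating the jenkins user (impersonation requires that your own account is allowed to impersonate users):
kubectl auth can-i create deployments --as=jenkins -n default
# Something the role does not grant; expected answer: no
kubectl auth can-i delete pods --as=jenkins -n default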
6.3 Network Policies
Network Policies define the traffic rules that allow or deny communication between pods. Use them to secure inter-pod communication and restrict access to your application.
Example Network Policy (network-policy.yaml):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-internal-traffic
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: devsecops-nodejs-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: internal-service
This policy restricts ingress traffic to the devsecops-nodejs-app pods, allowing communication only from pods labeled app: internal-service.
Apply the Network Policy:
kubectl apply -f network-policy.yaml
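To see the policy in action, try reaching the Service from a throwaway pod that does not carry the internal-service label; the request should time out, provided your CNI actually enforces NetworkPolicies (the default EKS VPC CNI needs its network policy support enabled, or use a CNI such as Calico):
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl -m 5 http://devsecops-nodejs-service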
Conclusion
This project demonstrates a full end-to-end secure deployment of a Node.js application using Jenkins, Docker, Kubernetes, and HashiCorp Vault. By integrating security practices such as vulnerability scanning and secret management, we ensure the application is securely deployed in a production-grade environment.
Top comments (12)
This was what I got when I tried step 1.1(b):
╷
│ Error: Reference to undeclared resource
│
│ on eks-cluster.tf line 10, in resource "aws_eks_cluster" "my_cluster":
│ 10: subnet_ids = [aws_subnet.subnet1.id, aws_subnet.subnet2.id]
│
│ A managed resource "aws_subnet" "subnet1" has not been declared in the root module.
╵
╷
│ Error: Reference to undeclared resource
│
│ on eks-cluster.tf line 10, in resource "aws_eks_cluster" "my_cluster":
│ 10: subnet_ids = [aws_subnet.subnet1.id, aws_subnet.subnet2.id]
│
│ A managed resource "aws_subnet" "subnet2" has not been declared in the root module.
Thanks for pointing this out, @bepoadewale! The error means the configuration references subnet resources that were never declared in the module.
To fix this, declare the VPC and subnet resources in your Terraform code before referencing them in the aws_eks_cluster resource. Once aws_subnet.subnet1 and aws_subnet.subnet2 are defined, the subnet_ids reference will resolve.
This should resolve the error. Let me know if you run into any more issues! 👍
And can you please add a section on how to handle monitoring and vulnerabilities after deployment? And on storing the deployment files too.
Thanks for the feedback! 😊 @sai_chowdary
Monitoring and handling vulnerabilities post-deployment are crucial parts of maintaining a secure and reliable application. I’ll consider adding a section on integrating tools like Prometheus and Grafana for monitoring, as well as Trivy or Aqua Security for vulnerability scanning.
As for storing the deployment files, I can definitely expand on best practices for organizing and storing them in version control (Git) for better traceability and collaboration. Stay tuned for updates! 👍
How do I become as good as you 😅.
Thanks anyways, I learned a lot reading this
Haha, you’re too kind! @sherif_san 😅
Honestly, it’s all about continuous learning and staying curious. Dive into hands-on projects, keep experimenting, and don’t be afraid to make mistakes — that’s where the real learning happens! I'm really glad you found the article helpful, and if you ever have any questions or need guidance, feel free to reach out! 🙌 Keep up the great work! 💪
Good write up! How about integrating SonarQube or Snyk to do static code analysis for a shift-left approach? :)
Thank you! @venky_soma 😊 I'm glad you liked the write-up!
Integrating SonarQube or Snyk for static code analysis is an excellent idea to enhance security and follow the shift-left approach. These tools can help catch vulnerabilities early in the development cycle. I’ll definitely consider adding a section on how to integrate SonarQube or Snyk into the pipeline for automated security checks. Thanks for the suggestion! 👍
Good info. Handling a DevSecOps project and educating others is a great job. All the best for your future endeavours!
Thanks a lot mate 😊 @sai_chowdary
Excellent writeup. Thanks for sharing.
Thanks 😊👍 @shiful_islam_27265c68c14b