DEV Community


End-to-End DevOps Project: Building, Deploying, and Monitoring a Full-Stack Application

Table of Contents

  1. Introduction
  2. Project Overview
  3. Prerequisites
  4. Step 1: Infrastructure Setup on AWS
  5. Step 2: Installing and Configuring Jenkins
  6. Step 3: Containerizing the Application with Docker
  7. Step 4: Deploying to Kubernetes (Amazon EKS)
  8. Step 5: Implementing Continuous Monitoring with Prometheus and Grafana
  9. Step 6: Securing the CI/CD Pipeline
  10. Step 7: Automating Infrastructure with Terraform
  11. Step 8: Implementing Blue-Green Deployments
  12. Conclusion
  13. Further Reading and Resources

Introduction

DevOps is about automating processes, improving collaboration between development and operations teams, and deploying software more quickly and reliably. This project guides you through the creation of a comprehensive CI/CD pipeline using industry-standard tools. You will deploy a full-stack application on AWS using Jenkins, Docker, Kubernetes (Amazon EKS), Prometheus, Grafana, Trivy, SonarQube, and Terraform. This hands-on experience will help you master key DevOps concepts and tools.

Project Diagram

  +------------------------+
  |  Developer Workstation |
  |                        |
  |  - Code Repository     |
  |  - Local Build & Test  |
  +-----------+------------+
              |
              v
  +------------------------+
  |        Jenkins         |
  |                        |
  |  - CI/CD Pipeline      |
  |  - Build & Test        |
  |  - Docker Build        |
  |  - Push Docker Image   |
  +-----------+------------+
              |
              v
  +------------------------+          +------------------------+
  |       Docker Hub       |          |        AWS EKS         |
  |                        |          |                        |
  |  - Docker Image        |          |  - Kubernetes Cluster  |
  |                        |          |                        |
  +-----------+------------+          +-----------+------------+
              |                                   |
              v                                   v
  +------------------------+          +------------------------+
  |  Kubernetes Deployment |          |  Prometheus & Grafana  |
  |                        |          |                        |
  |  - Deployment          |          |  - Monitoring          |
  |  - Service             |          |  - Dashboards          |
  |                        |          |                        |
  +------------------------+          +------------------------+
              |
              v
  +------------------------+
  |       Amazon RDS       |
  |                        |
  |  - MySQL Database      |
  |                        |
  +------------------------+

Project Overview

Objectives

  • Infrastructure Setup: Provision AWS resources including VPC, EC2 instances, and RDS databases.
  • CI/CD Pipeline: Automate the build, test, and deployment processes with Jenkins.
  • Containerization: Containerize the application using Docker.
  • Kubernetes Deployment: Deploy the application on Amazon EKS.
  • Monitoring: Implement continuous monitoring using Prometheus and Grafana.
  • Security: Secure the pipeline with Trivy and SonarQube.
  • Infrastructure as Code: Automate infrastructure management with Terraform.
  • Blue-Green Deployment: Implement blue-green deployment strategies.

Tools and Technologies

  • AWS: EC2, VPC, RDS, EKS.
  • Jenkins: CI/CD automation.
  • Docker: Containerization.
  • Kubernetes: Container orchestration.
  • Prometheus & Grafana: Monitoring and visualization.
  • Trivy & SonarQube: Security and code quality checks.
  • Terraform: Infrastructure as code.

Prerequisites

  • AWS Account: Required for cloud resource provisioning.
  • Basic Linux Knowledge: For managing EC2 instances.
  • Docker and Kubernetes Knowledge: For containerization and orchestration.
  • Familiarity with CI/CD: Understanding basic CI/CD concepts.
  • GitHub Account: For version control and Jenkins integration.

Step 1: Infrastructure Setup on AWS

1.1 Setting Up the VPC and Networking

  1. Create a VPC:
   aws ec2 create-vpc --cidr-block 10.0.0.0/16
  • Configure subnets:

     aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
    
  • Set up an Internet Gateway:

     aws ec2 create-internet-gateway
     aws ec2 attach-internet-gateway --vpc-id <vpc-id> --internet-gateway-id <igw-id>
    
  • Create route tables and associate with subnets:

     aws ec2 create-route-table --vpc-id <vpc-id>
     aws ec2 create-route --route-table-id <rtb-id> --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>
     aws ec2 associate-route-table --subnet-id <subnet-id> --route-table-id <rtb-id>
    
  2. Set Up Security Groups:

    • Create a security group:
     aws ec2 create-security-group --group-name MySecurityGroup --description "Security group for my app" --vpc-id <vpc-id>
    
  • Allow SSH, HTTP, and HTTPS (in production, restrict SSH to your own IP range rather than 0.0.0.0/0):

     aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 22 --cidr 0.0.0.0/0
     aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 80 --cidr 0.0.0.0/0
     aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 443 --cidr 0.0.0.0/0
    
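The three ingress rules above differ only in the port number. A small helper can generate them so the security-group ID lives in one place; this is a sketch, not official AWS tooling — `open_ingress` is a hypothetical function and the group ID shown is a placeholder:

```shell
# Hypothetical helper: print one authorize-security-group-ingress command
# per port. Echoing the commands first lets you review them before running.
open_ingress() {
  sg_id=$1; shift
  for port in "$@"; do
    echo aws ec2 authorize-security-group-ingress \
      --group-id "$sg_id" --protocol tcp --port "$port" --cidr 0.0.0.0/0
  done
}

# Review the output, then pipe it to `sh` to actually apply the rules:
open_ingress sg-0123456789abcdef0 22 80 443
```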

1.2 Provisioning EC2 Instances

  1. Launch EC2 Instances:

    • Use the AWS Management Console or CLI:
     aws ec2 run-instances --image-id ami-0abcdef1234567890 --count 1 --instance-type t2.micro --key-name MyKeyPair --security-group-ids <sg-id> --subnet-id <subnet-id>
    
  • Install Docker and Jenkins on the EC2 instance:

     sudo yum update -y
     sudo yum install docker -y
     sudo service docker start
     sudo usermod -a -G docker ec2-user
    
      # Jenkins (recent Jenkins releases require Java 11+; the repo and key
      # files are root-owned, so the download and import need sudo)
      sudo yum install java-11-openjdk -y
      sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
      sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
      sudo yum install jenkins -y
      sudo systemctl start jenkins
      sudo systemctl enable jenkins
    
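Jenkins can take a minute or two to answer HTTP after `systemctl start`. Rather than guessing, a generic retry loop (a sketch, not part of the official setup) can poll until the login page responds:

```shell
# Generic retry helper: run a command up to N times, one second apart,
# returning success as soon as the command succeeds.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: wait up to ~2 minutes for the Jenkins UI to come up.
# retry 120 curl -sf http://localhost:8080/login > /dev/null && echo "Jenkins is up"
```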

1.3 Setting Up an RDS Database

  1. Provision an RDS Instance:

    • Create a MySQL instance (replace the weak example password with a strong one, or manage it with AWS Secrets Manager):
     aws rds create-db-instance --db-instance-identifier mydbinstance --db-instance-class db.t2.micro --engine mysql --master-username admin --master-user-password password --allocated-storage 20 --vpc-security-group-ids <sg-id>
    
  2. Connect the Application:

    • Update application configuration with the RDS endpoint:
     jdbc:mysql://<rds-endpoint>:3306/mydatabase
    
  • Ensure connectivity by testing with the MySQL client:

     mysql -h <rds-endpoint> -u admin -p
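If the client cannot connect, it helps to separate networking problems (security groups, routing) from credential problems. This check uses bash's /dev/tcp feature (a bash-only sketch; `<rds-endpoint>` stays a placeholder) to test raw TCP reachability of port 3306 before involving MySQL at all:

```shell
# Returns success if a TCP connection to host:port can be opened.
# Relies on bash's /dev/tcp virtual paths, so it needs bash, not plain sh.
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# port_open <rds-endpoint> 3306 && echo "reachable" || echo "blocked: check the security group"
```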

Step 2: Installing and Configuring Jenkins

2.1 Jenkins Installation

  1. Install Jenkins:

    • Already covered under EC2 provisioning. Access Jenkins at http://<ec2-public-ip>:8080.
  2. Unlock Jenkins:

    • Retrieve the initial admin password:
     sudo cat /var/lib/jenkins/secrets/initialAdminPassword
    
  • Complete the setup wizard.

2.2 Configuring Jenkins for GitHub Integration

  1. Install GitHub Plugin:

    • Navigate to Manage Jenkins -> Manage Plugins.
    • Search for "GitHub" and install it.
  2. Generate a GitHub Token:

    • Generate a personal access token from GitHub and add it to Jenkins:
      • Manage Jenkins -> Manage Credentials -> Add Credentials.
  3. Create a New Job:

    • Set up a new pipeline job and link it to your GitHub repository.

2.3 Setting Up Jenkins Pipelines

  1. Define a Jenkinsfile:

    • Create a Jenkinsfile in your repository:
     pipeline {
       agent any
       stages {
         stage('Build') {
           steps {
             sh 'mvn clean install'
           }
         }
         stage('Test') {
           steps {
             sh 'mvn test'
           }
         }
         stage('Deploy') {
           steps {
             sh 'docker build -t myrepo/myapp:latest .'
             sh 'docker push myrepo/myapp:latest'
           }
         }
       }
     }
    
  2. Trigger the Pipeline:

    • Commit and push the Jenkinsfile to your repository.
    • Jenkins will automatically trigger the build.
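The Deploy stage also assumes the Jenkins node is already authenticated to Docker Hub. A sketch of the usual fix, using the Credentials Binding plugin with a hypothetical credentials ID `docker-hub-creds` (create it under Manage Jenkins -> Manage Credentials):

```groovy
stage('Deploy') {
  steps {
    // 'docker-hub-creds' is a placeholder ID for a username/password
    // credential stored in Jenkins; --password-stdin keeps the secret
    // out of the process list.
    withCredentials([usernamePassword(credentialsId: 'docker-hub-creds',
                                      usernameVariable: 'DOCKER_USER',
                                      passwordVariable: 'DOCKER_PASS')]) {
      sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
      sh 'docker build -t myrepo/myapp:latest .'
      sh 'docker push myrepo/myapp:latest'
    }
  }
}
```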

Step 3: Containerizing the Application with Docker

3.1 Writing a Dockerfile

  1. Create a Dockerfile:

    • In your application directory:
     FROM openjdk:8-jdk-alpine
     VOLUME /tmp
     ARG JAR_FILE=target/*.jar
     COPY ${JAR_FILE} app.jar
     ENTRYPOINT ["java","-jar","/app.jar"]
    
  2. Build the Docker Image:

    • Run the following commands:
     docker build -t myapp:latest .
    

3.2 Building and Pushing Docker Images

  1. Tag and Push Image:

    • Tag the image with the appropriate version:
     docker tag myapp:latest myrepo/myapp:v1.0.0
    
  • Push the image to Docker Hub:

     docker push myrepo/myapp:v1.0.0
    
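Hand-maintained tags like v1.0.0 go stale quickly in CI. A common alternative is to derive the tag from the current commit; the sketch below reuses the myrepo/myapp name from above and falls back to `dev` outside a git checkout:

```shell
# Build an image reference tagged with the short git SHA of HEAD.
image_ref() {
  printf 'myrepo/myapp:%s' "$(git rev-parse --short HEAD 2>/dev/null || echo dev)"
}

# Usage: docker build -t "$(image_ref)" . && docker push "$(image_ref)"
echo "$(image_ref)"
```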

3.3 Docker Compose for Local Development

  1. Create a docker-compose.yml File:

    • Define your multi-container application:
     version: '3'
     services:
       app:
         image: myrepo/myapp:v1.0.0
         ports:
           - "8080:8080"
       db:
         image: mysql:5.7
         environment:
           MYSQL_ROOT_PASSWORD: password
           MYSQL_DATABASE: mydatabase
         ports:
           - "3306:3306"
    
  2. Run Docker Compose:

    • Start the application locally:
     docker-compose up
    
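As written, the app container can start before MySQL is ready to accept connections. One mitigation is a healthcheck on the db service plus a conditional depends_on — a sketch in the Compose file format; note the `condition` form needs a recent Docker Compose (v2), and the app should still retry its own DB connection:

```yaml
services:
  app:
    image: myrepo/myapp:v1.0.0
    ports:
      - "8080:8080"
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck below
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: mydatabase
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-ppassword"]
      interval: 5s
      timeout: 3s
      retries: 10
```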

Step 4: Deploying to Kubernetes (Amazon EKS)

4.1 Setting Up the EKS Cluster

  1. Install kubectl and eksctl:

    • Install kubectl:
     curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
     chmod +x kubectl
     sudo mv kubectl /usr/local/bin/
    
  • Install eksctl:

     curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
     sudo mv /tmp/eksctl /usr/local/bin
    
  2. Create an EKS Cluster (pick a Kubernetes version that EKS currently supports; 1.21 shown here has since reached end of support):
   eksctl create cluster --name my-cluster --version 1.21 --region us-east-1 --nodegroup-name my-nodes --node-type t3.medium --nodes 3
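Once eksctl returns, merge the cluster credentials into your kubeconfig and confirm the worker nodes registered. A sketch using the cluster name and region from the command above, guarded so it is a no-op on a machine without the AWS CLI:

```shell
if command -v aws >/dev/null 2>&1 && command -v kubectl >/dev/null 2>&1; then
  # eksctl usually writes kubeconfig itself; running this explicitly makes
  # the step repeatable on other machines.
  aws eks update-kubeconfig --name my-cluster --region us-east-1
  kubectl get nodes -o wide   # expect 3 nodes in the Ready state
fi
```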

4.2 Creating Kubernetes Manifests

  1. Write Deployment Manifests:

    • Create a deployment.yaml:
     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: myapp-deployment
     spec:
       replicas: 3
       selector:
         matchLabels:
           app: myapp
       template:
         metadata:
           labels:
             app: myapp
         spec:
           containers:
           - name: myapp
             image: myrepo/myapp:v1.0.0
             ports:
             - containerPort: 8080
    

4.3 Deploying the Application on EKS

  1. Apply the Manifests:

    • Deploy the application to EKS:
     kubectl apply -f deployment.yaml
    
  • Monitor the deployment:

     kubectl get pods
    
  2. Expose the Application:

    • Create a service to expose the application:
     apiVersion: v1
     kind: Service
     metadata:
       name: myapp-service
     spec:
       type: LoadBalancer
       selector:
         app: myapp
       ports:
         - protocol: TCP
           port: 80
           targetPort: 8080
    
  • Apply the service:

     kubectl apply -f service.yaml
    
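After applying both manifests, verify the rollout actually finished and grab the load balancer address Kubernetes assigned. A sketch using the deployment and service names from the manifests above, guarded so it is a no-op where kubectl is absent:

```shell
if command -v kubectl >/dev/null 2>&1; then
  # Block until all replicas are updated and available (or time out).
  kubectl rollout status deployment/myapp-deployment --timeout=120s
  # Print the ELB hostname provisioned for the LoadBalancer service.
  kubectl get service myapp-service \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
fi
```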

Step 5: Implementing Continuous Monitoring with Prometheus and Grafana

5.1 Installing Prometheus

  1. Deploy Prometheus:

    • Use Helm to install Prometheus:
     helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
     helm repo update
     helm install prometheus prometheus-community/prometheus
    
  2. Configure Prometheus:

    • Add a scrape job for your application (with this chart, custom jobs belong under the extraScrapeConfigs value in values.yaml; the service created in Step 4 listens on port 80):
     scrape_configs:
       - job_name: 'myapp'
         static_configs:
           - targets: ['myapp-service:80']
    

5.2 Configuring Grafana Dashboards

  1. Deploy Grafana:

    • Install Grafana via Helm (add the chart repository first):
     helm repo add grafana https://grafana.github.io/helm-charts
     helm repo update
     helm install grafana grafana/grafana
    
  2. Access Grafana:

    • Retrieve the admin password:
     kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
    
  • Forward port to access Grafana:

     kubectl port-forward svc/grafana 3000:80
    
  3. Add Prometheus as a Data Source:
    • Log in to Grafana and add Prometheus as a data source; inside the cluster its URL is http://prometheus-server (the service the Prometheus chart creates).

5.3 Setting Up Alerts

  1. Define Alerting Rules:

    • Create alerting rules in Prometheus for critical metrics:
     groups:
       - name: example
         rules:
         - alert: HighMemoryUsage
           expr: node_memory_Active_bytes > 1e+09
           for: 1m
           labels:
             severity: critical
           annotations:
             summary: "Instance {{ $labels.instance }} high memory usage"
    
  2. Set Up Alertmanager:

    • Configure Alertmanager for notifications (a complete config also needs SMTP settings under global; the route below directs alerts to the receiver):
     route:
       receiver: 'email'
     receivers:
       - name: 'email'
         email_configs:
           - to: 'your-email@example.com'
    

Step 6: Securing the CI/CD Pipeline

6.1 Scanning for Vulnerabilities with Trivy

  1. Install Trivy:

    • Install Trivy on the Jenkins server (the commands below target Debian/Ubuntu; on Amazon Linux, use Trivy's RPM repository instead):
     sudo apt-get install wget apt-transport-https gnupg lsb-release
     wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
     echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
     sudo apt-get update
     sudo apt-get install trivy
    
  2. Integrate Trivy with Jenkins:

    • Add Trivy to the Jenkins pipeline:
     stage('Security Scan') {
       steps {
         sh 'trivy image myrepo/myapp:v1.0.0'
       }
     }
    
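As written, the scan only reports findings: `trivy image` exits 0 even when vulnerabilities are found, so the Jenkins stage never fails. Trivy's --exit-code and --severity flags turn the scan into a real gate; a sketch, guarded so it is a no-op where Trivy is not installed:

```shell
if command -v trivy >/dev/null 2>&1; then
  # Exit non-zero (failing the Jenkins stage) only when HIGH or CRITICAL
  # vulnerabilities are found in the image.
  trivy image --exit-code 1 --severity HIGH,CRITICAL myrepo/myapp:v1.0.0
fi
```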

6.2 Integrating SonarQube for Code Quality

  1. Install SonarQube:

    • Install SonarQube on the EC2 instance (SonarQube refuses to run as root, hence the dedicated sonar user):
     sudo yum install java-11-openjdk-devel -y
     wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-8.9.6.50800.zip
     unzip sonarqube-*.zip
     sudo useradd sonar
     sudo mv sonarqube-8.9.6.50800 /opt/sonarqube
     sudo chown -R sonar: /opt/sonarqube

  2. Configure SonarQube:

    • Modify the sonar.properties file for database integration:
     sonar.jdbc.username=sonar
     sonar.jdbc.password=sonar
     sonar.jdbc.url=jdbc:postgresql://localhost/sonarqube
    
  3. Integrate SonarQube with Jenkins:

    • Add SonarQube analysis in Jenkins:
     stage('SonarQube Analysis') {
       steps {
         withSonarQubeEnv('My SonarQube Server') {
           sh 'mvn sonar:sonar'
         }
       }
     }
    

Conclusion

This project guide provides an in-depth walkthrough of setting up an end-to-end DevOps pipeline with CI/CD, containerization, Kubernetes deployment, monitoring, and security. By following this guide, you’ll not only gain practical experience but also create a production-ready pipeline. Remember, the key to mastering DevOps is consistent practice and staying updated with the latest tools and methodologies.

Final Thoughts

Feel free to customize the steps and integrate more tools as per your project requirements. DevOps is a vast field, and this guide is just the beginning of your journey towards becoming a proficient DevOps engineer. Happy coding and happy deploying!


👤 Author


Join Our Telegram Community || Follow me on GitHub for more DevOps content!

Top comments (21)

H A R S H H A A

Updated Project Integration

Repository: Vitual-Browser


Step-by-Step Integration with Vitual-Browser

1. Clone the Repository

Start by cloning the Vitual-Browser repository to your local environment:

git clone https://github.com/jaiswaladi246/Vitual-Browser.git
cd Vitual-Browser

2. Build and Test Locally

2.1 Build the Application

Since Vitual-Browser appears to be a Node.js application, you need to build it. Ensure you have Node.js and npm installed.

# Install dependencies
npm install

# Build the project (if applicable)
npm run build

2.2 Run Tests

npm test

3. Containerize the Application

3.1 Create a Dockerfile

In the root of your Vitual-Browser project, create a Dockerfile:

# Use official Node.js image as base
FROM node:14

# Create and set working directory
WORKDIR /usr/src/app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application
COPY . .

# Build the application (if applicable)
RUN npm run build

# Expose port (adjust if necessary)
EXPOSE 3000

# Start the application
CMD ["npm", "start"]

3.2 Build and Push Docker Image

# Build the Docker image
docker build -t myrepo/vitual-browser:latest .

# Push the Docker image to Docker Hub
docker tag myrepo/vitual-browser:latest myrepo/vitual-browser:v1.0.0
docker push myrepo/vitual-browser:v1.0.0

4. Deploy to Kubernetes (Amazon EKS)

4.1 Create Kubernetes Manifests

Deployment Manifest (deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vitual-browser-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: vitual-browser
  template:
    metadata:
      labels:
        app: vitual-browser
    spec:
      containers:
      - name: vitual-browser
        image: myrepo/vitual-browser:v1.0.0
        ports:
        - containerPort: 3000

Service Manifest (service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: vitual-browser-service
spec:
  type: LoadBalancer
  selector:
    app: vitual-browser
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000

4.2 Apply the Manifests

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

5. CI/CD Integration with Jenkins

5.1 Update Jenkinsfile

In your Jenkinsfile, add stages to build, test, and deploy the application:

pipeline {
  agent any
  stages {
    stage('Checkout') {
      steps {
        git 'https://github.com/jaiswaladi246/Vitual-Browser.git'
      }
    }
    stage('Build') {
      steps {
        sh 'npm install'
        sh 'npm run build'
      }
    }
    stage('Test') {
      steps {
        sh 'npm test'
      }
    }
    stage('Docker Build and Push') {
      steps {
        sh 'docker build -t myrepo/vitual-browser:latest .'
        sh 'docker push myrepo/vitual-browser:latest'
      }
    }
    stage('Deploy') {
      steps {
        sh 'kubectl apply -f deployment.yaml'
        sh 'kubectl apply -f service.yaml'
      }
    }
  }
}

6. Monitoring and Security

  • Prometheus and Grafana: Ensure Prometheus is configured to scrape metrics from the Vitual-Browser application if it exposes any.
  • Trivy and SonarQube: Integrate Trivy in the Jenkins pipeline for scanning Docker images and SonarQube for code quality.

Conclusion

This integration ensures a smooth CI/CD pipeline for the Vitual-Browser application. By following these steps, you’ll have a robust, automated setup for building, testing, deploying, and monitoring your application.

Hamza Boughanim

Hi HARSHHAA,
Top, very nice and helpful !
Thanks for sharing.
geminitwins

H A R S H H A A

Thanks @geminitwins for your feedback ☺️

Sushant Nair • Edited

Hi. Thanks for the article. I had four questions:

  1. From start to end is it completely free?
  2. It seems this was done in Linux. But I am a Windows User. Any help?
  3. I didn't quite get your featured comment Updated Project Integration. Like, is it a particular application of a general procedure you delineated in the article, applying the process to deploy a nodejs application? Also, in case there is a django application to deploy, how to do it?
  4. Finally, how would the pushed website be accessible? Like, what would be the URL?
H A R S H H A A

Thanks 😊@sushantnair

Sushant Nair

Thanks? I thought I asked a question for you to answer.

Serhiy

Great article! Thanks!

H A R S H H A A

Thanks @serhiyandryeyev for your feedback ☺️

João Angelo

Hi HARSHHAA,
Top, very nice and helpful !
Thanks for sharing.

H A R S H H A A

Thanks @jangelodev for your feedback ☺️

Badmus Faoziyat

Great job!
I'll really appreciate it you can make a video on this.

H A R S H H A A

Thanks @fawzee for your feedback ☺️

Héctor Serrano

Thaks you, great job!

H A R S H H A A

Thanks @hectorlaris for your feedback ☺️

Ricardo Esteves • Edited

Nice, Great article. Really interesting! 💯

H A R S H H A A

Thanks @ricardogesteves for your feedback ☺️

Ibraheem101

Great article!!!

H A R S H H A A

Thanks @ibraheem101 for your feedback ☺️

ashwani kumar • Edited

Powerful simple explanation, liked your stuff.

H A R S H H A A

Thanks @ashwani_kumar_5bf02e1998e for your feedback ☺️

PN

Where can I find content for remaining steps 7-13?