Madhesh Waran

Internship Notes

🌟 CareerByteCode Cloud DevOps Challenge 🌟

1. Deploy Virtual Machines in AWS & Google Cloud:

Using Terraform, deploy virtual machines (VMs) in both AWS and Google Cloud and ensure they are properly configured.

AWS:

Step 1: Download and install Terraform. Also install the AWS CLI and configure it with your credentials using 'aws configure'.

Step 2: Write a main.tf configuration file that tells Terraform to deploy an EC2 instance (a minimal sketch follows below).

This documentation is very helpful: https://registry.terraform.io/providers/hashicorp/aws/latest/docs
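For reference, a minimal main.tf might look like the sketch below (the region, AMI ID, and instance type are placeholder assumptions to adjust for your account, not the exact file I used):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder region
}

resource "aws_instance" "vm" {
  ami           = "ami-0abcdef1234567890" # placeholder: use a current AMI for your region
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-demo"
  }
}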

Step 3: Initialize Terraform using 'terraform init', preview the changes using 'terraform plan', and create the resources using 'terraform apply'.


Step 4: Use 'terraform destroy' to tear down the resources that you created with Terraform.

GCP:
(should create a GCP account and do this ASAP after finishing AWS stuff.)
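In the meantime, a minimal GCP config along the same lines would look roughly like this (project ID, zone, and image are assumptions; it also presumes credentials are set up, e.g. via 'gcloud auth application-default login'):

provider "google" {
  project = "my-project-id" # placeholder project ID
  region  = "us-central1"
}

resource "google_compute_instance" "vm" {
  name         = "terraform-demo"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12" # public Debian image family
    }
  }

  network_interface {
    network = "default"
    access_config {} # assigns an ephemeral external IP
  }
}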

2. Kubernetes Microservices Deployment

Deploy a microservices-based application using Kubernetes and ensure it scales automatically.

Step 1: First you need to containerize the application using Docker. To do this, write a Dockerfile that builds an image of your application. I wrote a simple Node.js app that displays 'Hello World' and containerized it with the Dockerfile below.

Simple Dockerfile:
# Base image with Node.js
FROM node:14
# Working directory inside the container
WORKDIR /app
# Copy the app source and install its dependencies
COPY . .
RUN npm install
# The app listens on port 3000
EXPOSE 3000
CMD ["node", "app.js"]
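The app itself can be anything that listens on port 3000; a minimal app.js for such a hello-world server (my sketch using only Node's built-in http module, not necessarily the exact file I wrote) would be:

const http = require('http');

// Minimal hello-world server listening on the port the Dockerfile exposes
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World');
});

server.listen(3000);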

Step 2: After saving the Dockerfile and the simple app.js, type 'docker build -t nodeapp .' to build the Docker image. Using the image, create a container and check it with 'docker run -p 3000:3000 nodeapp'.
If you did everything right, visiting localhost:3000 in your browser will show the 'Hello World' response. Push the image to Docker Hub.


Step 3: Deploy the image from your Docker Hub repo to Kubernetes using a deployment manifest, which specifies the number of replicas, the container image to use, and so on. Type 'kubectl apply -f deployment.yaml' in the command line to do it. Before doing this, you should download and install minikube on your local PC.

Simple deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
  labels:
    app: nodeapp
spec:
  selector:
    matchLabels:
      app: nodeapp
  replicas: 2
  template:
    metadata:
      labels:
        app: nodeapp
    spec:
      containers:
        - name: nodeapp-container01
          image: [yourdockerhubrepo]/nodeapp:latest
          ports:
            - containerPort: 3000

Simple Service manifest file:
apiVersion: v1
kind: Service
metadata:
  name: nodeapp-service
spec:
  selector:
    app: nodeapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: ClusterIP

Simple Ingress manifest file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodeapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  rules:
    - host: localhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nodeapp-service
                port:
                  number: 80

Step 4: The above service manifest exposes the app internally to the Kubernetes cluster using ClusterIP. To expose the app externally, we use an Ingress, which is deployed using the Ingress manifest file.

Step 5: To automatically scale the containers based on the load, we can use a Horizontal Pod Autoscaler (HPA), which is deployed using the following HPA manifest file. To get metrics, we need to enable the metrics-server addon on minikube ('minikube addons enable metrics-server').

Simple HPA manifest file:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: deployment
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
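Once the HPA is applied, you can check that it is reading CPU metrics and adjusting the replica count (these are standard kubectl commands, nothing specific to this setup):

kubectl get hpa -w
kubectl get pods -l app=nodeapp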

Step 6: With the HPA in place, the deployment scales up and down automatically based on the load.

3. Azure Infrastructure Automation

Use Terraform to automate the provisioning of an infrastructure setup in Azure.

(Do after GCP)

4. AWS Lambda with EC2 Tag Management

Using AWS Lambda, write a Python program to start and stop EC2 instances based on tags.

Step 1: Go to Lambda in the AWS console and create a new function with an execution role that gives the Lambda EC2 access (mandatory permissions: ec2:DescribeInstances, ec2:StartInstances, ec2:StopInstances).


Step 2: After creating the Lambda function, populate the code section with a script that starts or stops EC2 instances based on their tags.

This document is very helpful for writing the Python code: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/ec2-example-managing-instances.html

My Code:
import json
import boto3

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    response = ec2.describe_instances()

    tag_name = 'Environment'
    tag_value = 'Test'
    action_performed = 'no action taken on'  # default if no instance matches the tag

    for reservation in response['Reservations']:
        instance = reservation['Instances'][0]
        instance_id = instance['InstanceId']
        # Some instances have no tags at all, so default to an empty list
        for tag in instance.get('Tags', []):
            if tag['Key'] == tag_name and tag['Value'] == tag_value:
                if instance['State']['Name'] == 'stopped':
                    print(ec2.start_instances(InstanceIds=[instance_id]))
                    action_performed = 'starting'
                else:
                    print(ec2.stop_instances(InstanceIds=[instance_id]))
                    action_performed = 'stopping'

    return {
        'statusCode': 200,
        'body': json.dumps(f'{action_performed} the instances with the tag {tag_name}:{tag_value}')
    }


Logic:
If the instances with the tag are stopped, running this script starts them again; if they are already running, it stops them.
We can combine this script with API Gateway or a CloudWatch Events schedule to run it automatically in response to an event.

7. Monitoring & Logging Setup

Set up monitoring and logging for a cloud infrastructure using Prometheus and Grafana.

# Install Helm, then deploy Prometheus and Grafana via the kube-prometheus-stack chart
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace
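Once the chart is up, Grafana can be reached with a port-forward. With the release name 'prometheus' used above, the chart names the Grafana service 'prometheus-grafana' by default (worth verifying with 'kubectl get svc -n monitoring'):

kubectl --namespace monitoring port-forward svc/prometheus-grafana 3000:80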

(need to add deleted screenshots soon)

8. Secure Cloud Network Configuration

Configure a VPC in AWS with proper security groups, NACLs, and public/private subnets.

Step 1: Click 'Create VPC'. Choose 'VPC only' and provide the name tag and an IPv4 CIDR block (I chose 10.0.0.0/16, which has a range of 65,536 IP addresses from 10.0.0.0 to 10.0.255.255).

Step 2: Click 'Create subnet' in the Subnets dashboard. When it asks for VPC details, select the VPC you just created. For the public subnet, choose the IPv4 CIDR block 10.0.1.0/24 and any availability zone (e.g., us-east-1a). Create another subnet for the private subnet, in the same availability zone as the public subnet, with the IPv4 CIDR block 10.0.2.0/24.

Step 3: Create an internet gateway and attach it to the VPC. Create a NAT gateway, choosing the public subnet as its subnet, and allocate an Elastic IP. (The NAT gateway incurs some costs, so be careful and delete it as soon as possible.)

Step 4: Create a route table with the name 'Public Route Table' and the VPC as the one you just created. In the route table, click 'Edit routes' and add the following route: Destination: 0.0.0.0/0, Target: the internet gateway you just created. In the 'Subnet associations' tab, add the public subnet to the table.

Step 5: Create a route table with the name 'Private Route Table' and the VPC as the one you just created. In the route table, click 'Edit routes' and add the following route: Destination: 0.0.0.0/0, Target: the NAT gateway you just created. In the 'Subnet associations' tab, add the private subnet to the table.


Step 6: Click on 'Security Groups' in the VPC dashboard. Create a security group and select the VPC you just created. Click 'Edit inbound rules' and allow SSH (port 22), HTTP (port 80), and HTTPS (port 443) from Source: 0.0.0.0/0 (anywhere). Create another security group for the private instances and edit its inbound rules to allow all traffic from the CIDR range 10.0.0.0/16.

Step 7: Select 'Network ACLs' on the VPC dashboard. Create a public network ACL with the inbound rules:
Rule 100: Allow HTTP (80) from 0.0.0.0/0
Rule 110: Allow HTTPS (443) from 0.0.0.0/0
Rule 120: Allow SSH (22) from 0.0.0.0/0
and the outbound rule:
Rule 100: Allow all traffic to 0.0.0.0/0
Associate this NACL with the public subnet.
Create another, private NACL with the inbound rule:
Rule 100: Allow all traffic from the VPC CIDR block (10.0.0.0/16)
and the outbound rule:
Rule 100: Allow all traffic to 0.0.0.0/0
Associate it with the private subnet.


If you have any other doubts, refer to this article of mine: https://dev.to/madhesh_waran_63/deploying-wordpress-on-a-private-subnet-in-aws-ec2-using-a-linux-server-4a65

9. Database Backup and Restore

Automate the backup and restore of a MySQL database in AWS RDS.

Step 1: Create a MySQL database in RDS or use an existing one for this workshop. Make sure automated backups are enabled on the database under the 'Maintenance & backups' tab. Click 'Modify' to change the retention period, backup window, or duration, and save the changes.


Step 2: Click on 'Automated backups' in the left dashboard. Select the database backup that you would like to restore, click the 'Actions' button, and choose 'Restore to point in time'. Configure the database as you like and launch it to create a restored database from the backup.
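The same restore can also be scripted with the AWS CLI if you prefer; this is a generic sketch where the instance identifiers are placeholders:

aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier mydb \
    --target-db-instance-identifier mydb-restored \
    --use-latest-restorable-time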


10. 3-Node Kubernetes Cluster Setup

Build a Kubernetes cluster with 1 master node and 2 worker nodes using Ubuntu OS VMs.
Step 1: Create 3 EC2 instances with Ubuntu as their AMI, with one of them being the master node and the other 2 being worker nodes. Create security groups that allow the nodes to communicate with each other (the Kubernetes API server uses port 6443).


Step 2: SSH into each node (the master and both workers all need Docker and the Kubernetes components) and run the following commands.

Install Docker:
sudo apt update
sudo apt install docker.io -y
sudo systemctl start docker
sudo systemctl enable docker

Install Kubernetes components:
sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update
sudo apt install -y kubelet kubeadm kubectl socat
sudo apt-mark hold kubeadm kubelet kubectl

Step 3: Initialize the master node and install a pod network using the following commands.
Initialize the master node and set up kubeconfig:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Set up the pod network (Flannel, whose default CIDR matches the one passed to kubeadm above):
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml


Step 4: After you set up the master node, kubeadm prints a join command containing a token for the worker nodes. Run it on each worker to join them to the master. Now, if you check the nodes with 'kubectl get nodes', you will see the 2 worker nodes and the master node forming the Kubernetes cluster.
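The join command has this general shape (the IP, token, and hash below are placeholders; copy the real values from your kubeadm init output):

sudo kubeadm join <master-private-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>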


11. Elastic Load Balancing

Configure AWS Elastic Load Balancer (ELB) for an auto-scaling web application.

Step 1: Launch EC2 instances and make sure the instances' security group allows HTTP (port 80).
Step 2: Create an Auto Scaling launch template by clicking on 'Launch Templates' in the EC2 console. Configure it based on your needs. Here, we will deploy a web app using the Apache server and the simple user data shown below.

Simple user data:

#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<h1>Hello World from $(hostname -f)</h1>" > /var/www/html/index.html

Step 3: Click on 'Create Auto Scaling group' in the ASG console and select the launch template that you just created. Set Min: 0, Max: 3, Desired: 2. Set scaling policies to auto-scale based on CPU utilization or memory usage.


Step 4: Click 'Create Load Balancer' in the EC2 console and choose Application Load Balancer (ALB).
Select the VPC and subnets, and configure a listener (port 80 for HTTP).

Step 5: Under 'Target Groups', create a new target group to register your instances. Choose the 'Instances' target type and select HTTP/HTTPS as the protocol.
Configure the health check for your application (e.g., HTTP with the path /healthcheck). Add the EC2 instances as targets in your target group.

Step 6: Edit your Auto Scaling group and, under the 'Load balancing' section, select 'Attach to an existing load balancer'; attach the ELB you just created and select the target group created for it.


Step 7: If you type the DNS name of your ELB into your browser, responses will alternate between the two instances deployed by the ASG. This shows that our load is balanced across multiple EC2 instances.
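An easy way to see the alternation without a browser is to hit the endpoint a few times from a shell (the DNS name below is a placeholder for your ELB's):

for i in {1..6}; do curl -s http://<your-elb-dns-name>/; done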


13. AWS IAM Role Setup

Create custom IAM roles for secure access control to specific AWS resources.

Step 1: Go to IAM and click the 'Create role' button.
Choose the AWS service that will assume this role (Lambda). We will create the role that I used for the Python automation with Lambda in 'AWS Lambda with EC2 Tag Management'.

Step 2: Select the existing EC2 full-access managed policy, or create a custom policy. To create a custom policy, go to the Policies section, choose 'Create policy', and write the JSON document containing only the permissions we need (ec2:DescribeInstances, ec2:StartInstances, ec2:StopInstances). Give it a name, tags, and the other optional details, and create the role.
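A custom policy scoped to just those three actions might look like this (Resource is left as "*" for simplicity; in a real setup the start/stop actions could be restricted to specific instance ARNs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*"
    }
  ]
}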


Step 3: We can now use this role to control our EC2 instances from Lambda.

14. DNS Setup with Route 53

Configure Amazon Route 53 to route traffic to multiple endpoints based on geolocation.

Step 1: Buy a domain or use one you already own, and create a hosted zone for it: click 'Create hosted zone' and enter the name of your domain.


Step 2: In the hosted zone, create records that will route traffic based on geolocation.
Click 'Create record', select an 'A' record, and enter a valid IP address or AWS resource for the record (e.g., 172.217.12.100).

Step 3: In the routing policy section, select 'Geolocation'. Select the location this record applies to, and repeat for different locations like North America, Europe, etc. Also create a default record for DNS queries from locations not matched by any of the hosted zone's records.


Step 4: If you query this domain from different regions, it will direct you to different endpoints.


15. Cloud Migration Plan

Create a detailed migration plan to move an on-premise application to AWS. Include architecture diagrams, tools, and risk mitigation strategies.

