Shell Scripting in DevOps: A Complete Guide

Shell scripting is a vital skill for DevOps professionals, offering the ability to automate tasks, improve efficiency, and ensure consistency across environments. By creating and executing shell scripts, DevOps engineers can deploy applications, interact with APIs, monitor systems, and handle a variety of routine tasks. This article provides a detailed look at the role of shell scripting in DevOps, along with practical examples and use cases.


Benefits of Shell Scripting in DevOps

  1. Automation: Shell scripts automate repetitive tasks, saving time and reducing the risk of human error.
  2. Efficiency: Scripts enable quick execution of complex tasks, improving operational efficiency.
  3. Consistency: Automated scripts ensure consistent execution across multiple environments, reducing variability.
  4. Flexibility: Shell scripts can interact with a wide range of tools and services, providing versatility.
  5. Scalability: Automated tasks scale easily, making shell scripts valuable for managing large infrastructures.

Common DevOps Use Cases for Shell Scripting

1. Automating Deployment on Kubernetes

Shell scripts can simplify deploying applications to a Kubernetes cluster by automating the creation of necessary resources, such as namespaces, deployments, services, and config maps.

Example: Deploying a Web Application to Kubernetes

#!/bin/bash
set -euo pipefail  # stop on errors, unset variables, and failed pipelines

# Define variables
NAMESPACE="myapp-namespace"
DEPLOYMENT_NAME="myapp-deployment"
IMAGE="myapp-image:latest"

# Create Kubernetes namespace (piping through apply keeps re-runs from failing)
echo "Creating Kubernetes namespace..."
kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -

# Apply ConfigMap
echo "Creating ConfigMap..."
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: $NAMESPACE
data:
  APP_ENV: "production"
EOF

# Apply Deployment
echo "Creating Deployment..."
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $DEPLOYMENT_NAME
  namespace: $NAMESPACE
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: $IMAGE
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: myapp-config
EOF

# Apply Service
echo "Creating Service..."
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: $NAMESPACE
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
EOF

echo "Deployment completed successfully!"

Output:

Creating Kubernetes namespace...
namespace/myapp-namespace created
Creating ConfigMap...
configmap/myapp-config created
Creating Deployment...
deployment.apps/myapp-deployment created
Creating Service...
service/myapp-service created
Deployment completed successfully!
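
In a CI/CD pipeline it is usually worth verifying that the rollout actually finished before declaring success. A minimal follow-up sketch, reusing the NAMESPACE and DEPLOYMENT_NAME variables (and the myapp-service name) from the script above:

# Wait for the Deployment to finish rolling out (non-zero exit if it times out)
kubectl rollout status deployment/"$DEPLOYMENT_NAME" -n "$NAMESPACE" --timeout=120s

# Show the Service so the LoadBalancer address can be checked
kubectl get service myapp-service -n "$NAMESPACE"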

2. Interacting with APIs (GitHub Example)

Shell scripts can automate interactions with APIs, such as creating GitHub issues directly from the command line.

Example: Creating GitHub Issues

#!/bin/bash

# Define GitHub credentials and repository details
GITHUB_USER="your-username"
GITHUB_REPO="your-repo"
GITHUB_TOKEN="your-token"
ISSUE_TITLE="New issue title"
ISSUE_BODY="Description of the new issue"

# Create the issue using the GitHub API
curl -u "$GITHUB_USER:$GITHUB_TOKEN" -X POST -H "Content-Type: application/json" \
  -d '{
    "title": "'"$ISSUE_TITLE"'",
    "body": "'"$ISSUE_BODY"'"
  }' \
  "https://api.github.com/repos/$GITHUB_USER/$GITHUB_REPO/issues"

Output:

{
  "id": 123456789,
  "number": 1,
  "title": "New issue title",
  "state": "open",
  "body": "Description of the new issue",
  "user": {
    "login": "your-username",
    ...
  }
}
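
When this call runs inside automation, it is safer to check the HTTP status code than to assume the issue was created. A minimal sketch using curl's standard %{http_code} write-out variable and the same variables as above (the GitHub API returns 201 on successful creation):

# Capture only the HTTP status code of the request
STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
  -u "$GITHUB_USER:$GITHUB_TOKEN" -X POST -H "Content-Type: application/json" \
  -d '{"title": "'"$ISSUE_TITLE"'", "body": "'"$ISSUE_BODY"'"}' \
  "https://api.github.com/repos/$GITHUB_USER/$GITHUB_REPO/issues")

if [ "$STATUS" -ne 201 ]; then
  echo "Failed to create GitHub issue (HTTP $STATUS)" >&2
  exit 1
fi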

3. Monitoring with AWS CloudWatch

Shell scripts can interact with AWS CLI to set up and manage CloudWatch alarms, enabling automated monitoring and alerting based on predefined metrics.

Example: Monitoring CPU Utilization of an EC2 Instance

#!/bin/bash

# Define variables
INSTANCE_ID="i-1234567890abcdef0"
ALARM_NAME="HighCPUUtilization"
ALARM_THRESHOLD=80

# Create CloudWatch alarm
aws cloudwatch put-metric-alarm \
  --alarm-name "$ALARM_NAME" \
  --metric-name CPUUtilization \
  --namespace AWS/EC2 \
  --statistic Average \
  --period 300 \
  --threshold "$ALARM_THRESHOLD" \
  --comparison-operator GreaterThanThreshold \
  --dimensions Name=InstanceId,Value="$INSTANCE_ID" \
  --evaluation-periods 2 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-sns-topic \
  --unit Percent

echo "CloudWatch alarm created successfully!"

Output:

CloudWatch alarm created successfully!
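
To confirm the alarm was registered correctly (or to inspect its state later), it can be looked up by name. A short sketch reusing the ALARM_NAME variable from the script above:

# Verify the alarm exists and show its name, state, and threshold
aws cloudwatch describe-alarms --alarm-names "$ALARM_NAME" \
  --query 'MetricAlarms[0].{Name:AlarmName,State:StateValue,Threshold:Threshold}' \
  --output table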

4. Monitoring and Alerting for Disk Usage

Shell scripts can also help monitor system metrics like disk usage, sending alerts if usage exceeds a specified threshold.

Example: Disk Usage Monitoring Script

#!/bin/bash

# Define threshold (in percentage)
THRESHOLD=80

# Get the current disk usage of the root filesystem (-P keeps each entry on one line)
DISK_USAGE=$(df -P / | awk 'NR==2 { print $5 }' | sed 's/%//')

# Check if the disk usage exceeds the threshold
if [ "$DISK_USAGE" -gt "$THRESHOLD" ]; then
  # Send an alert (e.g., email or logging)
  echo "Disk usage is at ${DISK_USAGE}%, which is above the threshold of ${THRESHOLD}%!" | mail -s "Disk Usage Alert" admin@example.com
fi

Output:

Disk usage is at 85%, which is above the threshold of 80%!
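
A check like this only adds value if it runs regularly. One common approach is to schedule it with cron; the entry below assumes the script has been saved to /usr/local/bin/disk_usage_check.sh (a hypothetical path) and made executable:

# Run the disk usage check every 15 minutes (add via 'crontab -e')
*/15 * * * * /usr/local/bin/disk_usage_check.sh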

5. Interacting with APIs (Jira Example)

Shell scripts can automate tasks like creating issues in project management tools such as Jira.

Example: Creating Jira Issues

#!/bin/bash

# Define Jira credentials and project details
JIRA_URL="https://your-jira-instance.atlassian.net"
JIRA_USER="your-email@example.com"
JIRA_API_TOKEN="your-api-token"
JIRA_PROJECT="PROJ"
ISSUE_SUMMARY="New issue summary"
ISSUE_DESCRIPTION="Description of the new issue"

# Create the issue using the Jira API
curl -u "$JIRA_USER:$JIRA_API_TOKEN" -X POST -H "Content-Type: application/json" \
  --data '{
    "fields": {
       "project": {
          "key": "'"$JIRA_PROJECT"'"
       },
       "summary": "'"$ISSUE_SUMMARY"'",
       "description": "'"$ISSUE_DESCRIPTION"'",
       "issuetype": {
          "name": "Task"
       }
    }
  }' "$JIRA_URL/rest/api/2/issue/"

Output:

{
  "id": "10001",
  "key": "PROJ-123",
  "self": "https://your-jira-instance.atlassian.net/rest/api/2/issue/10001"
}
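
In a pipeline it is often useful to capture the key of the issue that was just created, for example to reference it in a commit message or a log entry. A minimal sketch, assuming jq is installed and reusing the variables defined above:

# Create the issue and extract the generated key (e.g. PROJ-123) with jq
ISSUE_KEY=$(curl -s -u "$JIRA_USER:$JIRA_API_TOKEN" -X POST -H "Content-Type: application/json" \
  --data '{"fields": {"project": {"key": "'"$JIRA_PROJECT"'"}, "summary": "'"$ISSUE_SUMMARY"'", "description": "'"$ISSUE_DESCRIPTION"'", "issuetype": {"name": "Task"}}}' \
  "$JIRA_URL/rest/api/2/issue/" | jq -r '.key')

echo "Created Jira issue: $ISSUE_KEY"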

6. Executing Tasks Based on Conditions

Scripts can conditionally execute actions, such as checking the status of a service and restarting it if it's down.

Example: Service Status Monitoring

#!/bin/bash

# Define the service name
SERVICE_NAME="apache2"

# Check the service status
SERVICE_STATUS=$(systemctl is-active $SERVICE_NAME)

# Perform actions based on the service status
if [ "$SERVICE_STATUS" != "active" ]; then
  echo "Service $SERVICE_NAME is not running. Restarting the service..."
  sudo systemctl restart $SERVICE_NAME
  echo "Service $SERVICE_NAME restarted."
else
  echo "Service $SERVICE_NAME is running."
fi

Output:

Service apache2 is not running. Restarting the service...
Service apache2 restarted.
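
The same pattern extends naturally to several services at once. A minimal sketch that loops over a list of service names (the list here is only an example; adjust it to your environment):

#!/bin/bash

# Services to watch
SERVICES="apache2 nginx cron"

for SERVICE_NAME in $SERVICES; do
  if [ "$(systemctl is-active "$SERVICE_NAME")" != "active" ]; then
    echo "Service $SERVICE_NAME is not running. Restarting the service..."
    sudo systemctl restart "$SERVICE_NAME"
  else
    echo "Service $SERVICE_NAME is running."
  fi
done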

Conclusion

Shell scripting is an indispensable tool in DevOps, enabling automation, enhancing monitoring, and facilitating interaction with various APIs and services. By implementing shell scripts, DevOps teams can streamline operations, boost efficiency, and ensure reliability across environments. Each of these examples demonstrates how shell scripting can be leveraged to address common DevOps tasks, from application deployment and API interaction to system monitoring and alerting. Embracing shell scripting can empower DevOps professionals to drive automation and innovation in their workflows, creating more resilient and scalable infrastructures.
