Manoj Swami

Automated Deployment to Google Cloud with Backup to Google Cloud Storage

Deploying applications to the cloud and maintaining regular backups are crucial aspects of modern software development. In this guide, we'll walk through creating a robust deployment script that not only updates your application on Google Cloud but also manages backups in Google Cloud Storage. We'll do this without relying on the Google Cloud SDK (gcloud or gsutil) on your local machine, making it easier for team members to deploy without additional setup.

Prerequisites

Before we begin, ensure you have the following:

  1. A Google Cloud account with a running Compute Engine instance
  2. A Google Cloud Storage bucket for backups
  3. SSH access to your Google Cloud instance
  4. curl, jq, openssl, rsync, and ssh installed on your local machine (a quick check follows this list)
  5. A service account with necessary permissions for Google Cloud Storage
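A quick way to confirm the local tooling is in place before your first deploy:

```bash
# Sanity check: report any tool the script needs that is missing locally
for tool in curl jq openssl rsync ssh; do
    command -v "$tool" >/dev/null || echo "Missing: $tool"
done
```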

Step 1: Setting Up Your Google Cloud Environment

Creating a Service Account

  1. Go to the Google Cloud Console (https://console.cloud.google.com/)
  2. Navigate to "IAM & Admin" > "Service Accounts"
  3. Click "CREATE SERVICE ACCOUNT"
  4. Name your service account (e.g., "deployment-account")
  5. Grant the "Storage Object Admin" role (it already includes the create, view, and delete permissions this workflow needs, so the separate Storage Object Creator and Viewer roles would be redundant)
  6. Open the new service account's "KEYS" tab, click "ADD KEY" > "Create new key", and select JSON as the key type
  7. Save the downloaded JSON key file securely
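The downloaded key is a JSON document. The script below only reads two of its fields, client_email and private_key; trimmed to the relevant parts, it looks roughly like this (values are placeholders):

```json
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "deployment-account@your-project-id.iam.gserviceaccount.com",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```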

Creating a Google Cloud Storage Bucket

  1. In the Google Cloud Console, go to "Cloud Storage" > "Buckets"
  2. Click "CREATE BUCKET"
  3. Choose a globally unique name for your bucket
  4. Select your desired location type
  5. Choose a storage class (Standard is usually fine for frequent access)
  6. Click "CREATE"
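If you'd rather skip the console here too, the same bucket can be created through the GCS JSON API. A minimal sketch, assuming ACCESS_TOKEN already holds a valid token (see the get_access_token function later in this post) and that the names are placeholders; note that creating buckets needs a bucket-level permission such as Storage Admin, which the object-level role above does not include:

```bash
# Hypothetical bucket creation via the JSON API; names are placeholders
curl -X POST -H "Authorization: Bearer $ACCESS_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"name": "your-backup-bucket-name", "location": "US", "storageClass": "STANDARD"}' \
    "https://storage.googleapis.com/storage/v1/b?project=your-project-id"
```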

Step 2: Preparing Your Deployment Script

Now, let's create our deployment script. We'll call it deploy.sh. This script will handle deploying your application and managing backups.
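The script assumes a layout along these lines, with the frontend and backend as sibling directories and the .env file at the project root (adjust the names to match your repository):

```
your-project/
├── deploy.sh
├── .env
├── frontend/
└── backend/
```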

```bash
#!/bin/bash

set -e

# Configuration
PROJECT_NAME="your-project-name"
INSTANCE_IP="your-instance-ip"
SSH_USER="your-ssh-username"
SSH_KEY_PATH="/path/to/your/ssh/private/key"
REMOTE_PROJECT_DIR="/home/$SSH_USER/$PROJECT_NAME"
BUCKET_NAME="your-backup-bucket-name"
FRONTEND_DIR="frontend"
BACKEND_DIR="backend"
SERVICE_ACCOUNT_KEY_PATH="/path/to/your/service-account-key.json"

# Function to get an OAuth2 access token from the service account key.
# It builds an RS256-signed JWT by hand and exchanges it for a token,
# so no gcloud is needed locally.
get_access_token() {
    local now exp header claims signing_input signature jwt
    now=$(date +%s)
    exp=$((now + 3600))

    # Base64url-encode without padding, as the JWT spec requires
    b64url() { openssl base64 -e -A | tr '+/' '-_' | tr -d '='; }

    header=$(printf '%s' '{"alg":"RS256","typ":"JWT"}' | b64url)
    claims=$(printf '{"iss":"%s","scope":"https://www.googleapis.com/auth/devstorage.full_control","aud":"https://oauth2.googleapis.com/token","exp":%s,"iat":%s}' \
        "$(jq -r '.client_email' "$SERVICE_ACCOUNT_KEY_PATH")" "$exp" "$now" | b64url)
    signing_input="${header}.${claims}"

    # Sign the header.claims string with the service account's private key
    signature=$(printf '%s' "$signing_input" | \
        openssl dgst -sha256 -sign <(jq -r '.private_key' "$SERVICE_ACCOUNT_KEY_PATH") | b64url)
    jwt="${signing_input}.${signature}"

    curl -s -X POST https://oauth2.googleapis.com/token \
        -d "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=$jwt" \
        | jq -r '.access_token'
}

# Function to upload a file to the bucket with a progress bar
upload_to_bucket() {
    local file=$1
    local access_token
    access_token=$(get_access_token)

    echo "Uploading $file to bucket..."
    # --progress-bar prints progress on stderr; the JSON response is discarded
    curl -X POST -H "Authorization: Bearer $access_token" \
        -H "Content-Type: application/octet-stream" \
        --data-binary @"$file" \
        --progress-bar -o /dev/null \
        "https://storage.googleapis.com/upload/storage/v1/b/$BUCKET_NAME/o?uploadType=media&name=$(basename "$file")"

    echo "Upload complete!"
}

# Function to list files in bucket
list_bucket_files() {
    local prefix=$1
    local access_token=$(get_access_token)

    # '.items[]?' keeps jq quiet when the bucket has no matching objects yet
    curl -s -H "Authorization: Bearer $access_token" \
        "https://storage.googleapis.com/storage/v1/b/$BUCKET_NAME/o?prefix=$prefix" \
        | jq -r '.items[]?.name'
}

# Function to delete file from bucket
delete_from_bucket() {
    local file=$1
    local access_token=$(get_access_token)

    curl -X DELETE -H "Authorization: Bearer $access_token" \
        "https://storage.googleapis.com/storage/v1/b/$BUCKET_NAME/o/$file"
}

# Function to create a backup and upload to Google Cloud Storage
create_backup() {
    local dir=$1
    local timestamp=$(date +%Y%m%d_%H%M%S)
    local backup_file="${PROJECT_NAME}_${dir}_${timestamp}.tar.gz"

    echo "Creating backup of $dir..."
    tar -czf "$backup_file" "$dir"

    upload_to_bucket "$backup_file"

    echo "Removing local backup file..."
    rm "$backup_file"

    echo "Managing old backups..."
    # GCS lists objects lexicographically, so with this timestamp format the
    # oldest backups come first. Keep only the 3 most recent.
    local files=($(list_bucket_files "${PROJECT_NAME}_${dir}_"))
    local count=${#files[@]}
    if [ "$count" -gt 3 ]; then
        for file in "${files[@]:0:count-3}"; do
            echo "Deleting old backup: $file"
            delete_from_bucket "$file"
        done
    fi
}

# Function to deploy changes
deploy() {
    local dir=$1
    local pm2_process=$2

    create_backup $dir

    echo "Deploying changes to $dir..."
    rsync -avz -e "ssh -i $SSH_KEY_PATH" $dir/* "$SSH_USER@$INSTANCE_IP:$REMOTE_PROJECT_DIR/$dir/"

    echo "Updating application on remote instance..."
    ssh -i "$SSH_KEY_PATH" "$SSH_USER@$INSTANCE_IP" "
        cd $REMOTE_PROJECT_DIR/$dir
        git pull
        npm install
        npm run build
        pm2 restart $pm2_process
    "
}

# Deploy frontend
deploy_frontend() {
    deploy $FRONTEND_DIR "frontend-process"
}

# Deploy backend
deploy_backend() {
    deploy $BACKEND_DIR "backend-process"
}

# Deploy .env file
deploy_env() {
    echo "Deploying .env file..."
    scp -i "$SSH_KEY_PATH" .env "$SSH_USER@$INSTANCE_IP:$REMOTE_PROJECT_DIR/"
}

# Main execution
case "$1" in
    frontend)
        deploy_frontend
        ;;
    backend)
        deploy_backend
        ;;
    env)
        deploy_env
        ;;
    all)
        deploy_frontend
        deploy_backend
        deploy_env
        ;;
    *)
        echo "Usage: $0 {frontend|backend|env|all}"
        exit 1
        ;;
esac

echo "Deployment completed successfully!"
```

Step 3: Understanding the Script

Let's break down the key components of this script:

  1. Configuration: At the top of the script, we define variables for our project setup. You'll need to replace these with your specific values.

  2. Authentication: The get_access_token function builds a JWT, signs it with the service account's private key, and exchanges it for an OAuth2 access token. This lets us authenticate with Google Cloud Storage without using gcloud (see the smoke test after this list).

  3. Bucket Operations:
    • upload_to_bucket: Uploads a file to the specified Google Cloud Storage bucket with a progress bar.
    • list_bucket_files: Lists files in the bucket with a given prefix.
    • delete_from_bucket: Deletes a file from the bucket.

  4. Backup Creation: The create_backup function creates a tar.gz archive of the specified directory, uploads it to the bucket, and prunes older backups so only the 3 most recent are kept.

  5. Deployment: The deploy function handles rsync-ing files to the remote instance, pulling the latest changes, installing dependencies, building the project, and restarting the PM2 process.

  6. Main Execution: The script accepts arguments to deploy the frontend, backend, .env file, or all components.
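To check the authentication pieces in isolation before a full deploy, you can paste the get_access_token function (and the SERVICE_ACCOUNT_KEY_PATH variable) into an interactive shell and run a quick smoke test against the bucket. This is just a sketch; the bucket name is a placeholder:

```bash
# Fetch a token and list everything currently in the bucket
TOKEN=$(get_access_token)
curl -s -H "Authorization: Bearer $TOKEN" \
    "https://storage.googleapis.com/storage/v1/b/your-backup-bucket-name/o" \
    | jq -r '.items[]?.name'
```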

Step 4: Using the Deployment Script

  1. Save the script as deploy.sh in your project root.
  2. Make it executable:
```bash
chmod +x deploy.sh
```
  3. Update the configuration variables at the top of the script with your specific values.
  4. Run the script:
    • To deploy frontend: ./deploy.sh frontend
    • To deploy backend: ./deploy.sh backend
    • To deploy .env file: ./deploy.sh env
    • To deploy everything: ./deploy.sh all

Step 5: Troubleshooting and Tips

  1. SSH Key Issues: Ensure your SSH key has the correct permissions (usually 600) and is added to the SSH agent (ssh-add /path/to/your/key).

  2. Permissions: Make sure your service account has the necessary permissions for Google Cloud Storage operations.

  3. Firewall Rules: Check that your Google Cloud instance allows incoming SSH connections.

  4. Bucket Naming: Remember that Google Cloud Storage bucket names must be globally unique.

  5. Error Handling: The script uses set -e to exit on any error. You might want to add more specific error handling for production use.

  6. Security: Never commit your service account key or .env file to version control; a sample .gitignore follows this list. Consider using environment variables or a secure secret management system.

  7. Logging: For production use, consider adding more robust logging to track deployments and backups.
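For the security tip above, a couple of .gitignore entries go a long way; adjust the key filename to whatever you saved it as:

```
# .gitignore
service-account-key.json
.env
```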

Conclusion

This deployment script provides a powerful, flexible way to manage your application deployments and backups on Google Cloud. By using the Google Cloud Storage JSON API directly, we've eliminated the need for gcloud or gsutil on local machines, making it easier for team members to deploy.

Remember to regularly review and update your deployment process as your application grows and changes. Security should always be a top priority, so ensure that access to the deployment script and associated credentials is tightly controlled.

Happy deploying!
