Deploying applications to the cloud and maintaining regular backups are crucial aspects of modern software development. In this guide, we'll walk through creating a robust deployment script that not only updates your application on Google Cloud but also manages backups in Google Cloud Storage. We'll do this without relying on the Google Cloud SDK (`gcloud` or `gsutil`) on your local machine, making it easier for team members to deploy without additional setup.
Prerequisites
Before we begin, ensure you have the following:
- A Google Cloud account with a running Compute Engine instance
- A Google Cloud Storage bucket for backups
- SSH access to your Google Cloud instance
- `curl`, `jq`, and `openssl` installed on your local machine (the script below also uses `ssh`, `scp`, `rsync`, and `tar`)
- A service account with the necessary permissions for Google Cloud Storage
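If you want to double-check your environment first, a one-liner like this (a quick sketch; extend the tool list as needed) will flag anything missing:

```bash
# Report any required tool that isn't on the PATH
for tool in curl jq openssl ssh scp rsync tar; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done
```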
Step 1: Setting Up Your Google Cloud Environment
Creating a Service Account
- Go to the Google Cloud Console (https://console.cloud.google.com/)
- Navigate to "IAM & Admin" > "Service Accounts"
- Click "CREATE SERVICE ACCOUNT"
- Name your service account (e.g., "deployment-account")
- Grant the following roles:
  - Storage Object Creator
  - Storage Object Viewer
  - Storage Object Admin (this role alone is sufficient for the script's create, list, and delete operations; it subsumes the other two)
- Click "CREATE KEY" and select JSON as the key type
- Save the downloaded JSON key file securely
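Before wiring the key into the script, it's worth a quick sanity check that the file contains what we expect (assuming you saved it as `service-account-key.json`):

```bash
# Print the account type and email from the downloaded key as a sanity check
jq -r '.type, .client_email' service-account-key.json
```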
Creating a Google Cloud Storage Bucket
- In the Google Cloud Console, go to "Cloud Storage" > "Buckets"
- Click "CREATE BUCKET"
- Choose a globally unique name for your bucket
- Select your desired location type
- Choose a storage class (Standard is usually fine for frequent access)
- Click "CREATE"
Step 2: Preparing Your Deployment Script
Now, let's create our deployment script. We'll call it `deploy.sh`. This script will handle deploying your application and managing backups.
```bash
#!/bin/bash
set -e
# Configuration
PROJECT_NAME="your-project-name"
INSTANCE_IP="your-instance-ip"
SSH_USER="your-ssh-username"
SSH_KEY_PATH="/path/to/your/ssh/private/key"
REMOTE_PROJECT_DIR="/home/$SSH_USER/$PROJECT_NAME"
BUCKET_NAME="your-backup-bucket-name"
FRONTEND_DIR="frontend"
BACKEND_DIR="backend"
SERVICE_ACCOUNT_KEY_PATH="/path/to/your/service-account-key.json"
# Function to get an OAuth 2.0 access token from the service account key.
# We build a JWT by hand (header.payload.signature, each base64url-encoded),
# sign it with the service account's private key, and exchange it for a token.
get_access_token() {
    b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }
    local now exp header payload unsigned signature
    now=$(date +%s)
    exp=$((now + 3600))
    header=$(printf '{"alg":"RS256","typ":"JWT"}' | b64url)
    payload=$(printf '{"iss":"%s","scope":"https://www.googleapis.com/auth/devstorage.full_control","aud":"https://oauth2.googleapis.com/token","exp":%d,"iat":%d}' \
        "$(jq -r '.client_email' "$SERVICE_ACCOUNT_KEY_PATH")" "$exp" "$now" | b64url)
    unsigned="${header}.${payload}"
    # RS256 = RSA signature over a SHA-256 digest of "header.payload"
    signature=$(printf '%s' "$unsigned" \
        | openssl dgst -sha256 -sign <(jq -r '.private_key' "$SERVICE_ACCOUNT_KEY_PATH") \
        | b64url)
    curl -s -X POST https://oauth2.googleapis.com/token \
        --data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer" \
        --data-urlencode "assertion=${unsigned}.${signature}" \
        | jq -r '.access_token'
}
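# Optional sanity check: mint a token and inspect it via Google's tokeninfo
# endpoint (prints the granted scope and remaining lifetime), e.g.
#   TOKEN=$(get_access_token)
#   curl -s "https://oauth2.googleapis.com/tokeninfo?access_token=$TOKEN" | jq .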
# Function to upload a file to the bucket with a progress bar.
# Note: this is the "simple" media upload, which sends the whole file in a
# single request; for very large archives, Google recommends the resumable
# upload protocol (uploadType=resumable) instead.
upload_to_bucket() {
    local file=$1
    local access_token=$(get_access_token)
    echo "Uploading $file to bucket..."
    # -o /dev/null discards the JSON response so the progress bar stays visible
    curl -X POST -H "Authorization: Bearer $access_token" \
        -H "Content-Type: application/octet-stream" \
        --data-binary @"$file" \
        --progress-bar \
        -o /dev/null \
        "https://storage.googleapis.com/upload/storage/v1/b/$BUCKET_NAME/o?uploadType=media&name=$(basename "$file")"
    echo "Upload complete!"
}

# Function to list files in the bucket with a given prefix.
# Results come back in lexicographic order; objects.list paginates very large
# result sets, but a handful of backups fits comfortably in one page.
list_bucket_files() {
    local prefix=$1
    local access_token=$(get_access_token)
    curl -s -H "Authorization: Bearer $access_token" \
        "https://storage.googleapis.com/storage/v1/b/$BUCKET_NAME/o?prefix=$prefix" \
        | jq -r '.items[]?.name'
}

# Function to delete a file from the bucket.
# Object names with special characters would need URL-encoding here; our
# timestamped backup names use only safe characters.
delete_from_bucket() {
    local file=$1
    local access_token=$(get_access_token)
    curl -s -X DELETE -H "Authorization: Bearer $access_token" \
        "https://storage.googleapis.com/storage/v1/b/$BUCKET_NAME/o/$file"
}

# Function to create a backup and upload it to Google Cloud Storage
create_backup() {
    local dir=$1
    local timestamp=$(date +%Y%m%d_%H%M%S)
    local backup_file="${PROJECT_NAME}_${dir}_${timestamp}.tar.gz"
    echo "Creating backup of $dir..."
    tar -czf "$backup_file" "$dir"
    upload_to_bucket "$backup_file"
    echo "Removing local backup file..."
    rm "$backup_file"
    echo "Managing old backups..."
    # Keep only the 3 most recent backups: a reverse sort puts the newest
    # names first (the timestamps sort chronologically), and everything from
    # the fourth element on is deleted.
    local files=($(list_bucket_files "${PROJECT_NAME}_${dir}_" | sort -r))
    for file in "${files[@]:3}"; do
        echo "Deleting old backup: $file"
        delete_from_bucket "$file"
    done
}
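# Restoring is the reverse (a sketch, not wired into the CLI below): download
# the object with ?alt=media and unpack it, e.g.
#   TOKEN=$(get_access_token)
#   curl -s -H "Authorization: Bearer $TOKEN" -o restore.tar.gz \
#     "https://storage.googleapis.com/storage/v1/b/$BUCKET_NAME/o/OBJECT_NAME?alt=media"
#   tar -xzf restore.tar.gz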
# Function to deploy changes
deploy() {
    local dir=$1
    local pm2_process=$2
    create_backup "$dir"
    echo "Deploying changes to $dir..."
    # Trailing slash copies the directory's contents, including dotfiles
    rsync -avz -e "ssh -i $SSH_KEY_PATH" "$dir/" "$SSH_USER@$INSTANCE_IP:$REMOTE_PROJECT_DIR/$dir/"
    echo "Updating application on remote instance..."
    ssh -i "$SSH_KEY_PATH" "$SSH_USER@$INSTANCE_IP" "
        cd $REMOTE_PROJECT_DIR/$dir
        git pull
        npm install
        npm run build
        pm2 restart $pm2_process
    "
}
# Deploy frontend
deploy_frontend() {
    deploy "$FRONTEND_DIR" "frontend-process"
}

# Deploy backend
deploy_backend() {
    deploy "$BACKEND_DIR" "backend-process"
}

# Deploy .env file
deploy_env() {
    echo "Deploying .env file..."
    scp -i "$SSH_KEY_PATH" .env "$SSH_USER@$INSTANCE_IP:$REMOTE_PROJECT_DIR/"
}

# Main execution
case "$1" in
    frontend)
        deploy_frontend
        ;;
    backend)
        deploy_backend
        ;;
    env)
        deploy_env
        ;;
    all)
        deploy_frontend
        deploy_backend
        deploy_env
        ;;
    *)
        echo "Usage: $0 {frontend|backend|env|all}"
        exit 1
        ;;
esac
echo "Deployment completed successfully!"
Step 3: Understanding the Script
Let's break down the key components of this script:
- Configuration: At the top of the script, we define variables for our project setup. You'll need to replace these with your specific values.
- Authentication: The `get_access_token` function builds a JWT, signs it with the service account's private key, and exchanges it for an OAuth access token. This lets us authenticate with Google Cloud Storage without using `gcloud`.
- Bucket Operations:
  - `upload_to_bucket`: Uploads a file to the specified Google Cloud Storage bucket with a progress bar.
  - `list_bucket_files`: Lists files in the bucket with a given prefix.
  - `delete_from_bucket`: Deletes a file from the bucket.
- Backup Creation: The `create_backup` function creates a tar.gz archive of the specified directory, uploads it to the bucket, and prunes old backups, keeping only the three most recent (the slicing trick behind this is illustrated just after this list).
- Deployment: The `deploy` function handles rsync-ing files to the remote instance, pulling the latest changes, installing dependencies, building the project, and restarting the PM2 process.
- Main Execution: The script accepts an argument to deploy the frontend, backend, .env file, or all components.
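The pruning logic deserves a closer look. It relies on plain bash array slicing: after a reverse sort, the newest backups sit at the front of the array, and `"${files[@]:3}"` selects everything from the fourth element on. A minimal standalone sketch (the names here are made up for illustration):

```bash
#!/bin/bash
# Illustrative only: four fake backup names, newest-first after the sort.
files=(app_20240103 app_20240101 app_20240104 app_20240102)
mapfile -t sorted < <(printf '%s\n' "${files[@]}" | sort -r)
echo "keep:   ${sorted[@]:0:3}"   # app_20240104 app_20240103 app_20240102
echo "delete: ${sorted[@]:3}"     # app_20240101
```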
Step 4: Using the Deployment Script
- Save the script as `deploy.sh` in your project root.
- Make it executable: `chmod +x deploy.sh`
- Update the configuration variables at the top of the script with your specific values.
- Run the script:
  - To deploy the frontend: `./deploy.sh frontend`
  - To deploy the backend: `./deploy.sh backend`
  - To deploy the .env file: `./deploy.sh env`
  - To deploy everything: `./deploy.sh all`
Step 5: Troubleshooting and Tips
- SSH Key Issues: Ensure your SSH key has the correct permissions (usually 600) and is added to the SSH agent (`ssh-add /path/to/your/key`).
- Permissions: Make sure your service account has the necessary permissions for Google Cloud Storage operations.
- Firewall Rules: Check that your Google Cloud instance allows incoming SSH connections.
- Bucket Naming: Remember that Google Cloud Storage bucket names must be globally unique.
- Error Handling: The script uses `set -e` to exit on any error. You might want to add more specific error handling for production use; a small sketch follows this list.
- Security: Never commit your service account key or .env file to version control. Consider using environment variables or a secure secret management system.
- Logging: For production use, consider adding more robust logging to track deployments and backups.
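For instance, a stricter prologue (a sketch; adjust to taste) reports the failing line before exiting and also catches unset variables and failures anywhere in a pipeline:

```bash
#!/bin/bash
# Stricter error handling than plain `set -e`: -E ensures the ERR trap fires
# inside functions, -u rejects unset variables, and pipefail surfaces
# failures from any stage of a pipeline.
set -Eeuo pipefail
trap 'echo "deploy.sh: command failed on line $LINENO" >&2' ERR
```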
Conclusion
This deployment script provides a powerful, flexible way to manage your application deployments and backups on Google Cloud. By using the Google Cloud Storage JSON API directly, we've eliminated the need for `gcloud` or `gsutil` on local machines, making it easier for team members to deploy.
Remember to regularly review and update your deployment process as your application grows and changes. Security should always be a top priority, so ensure that access to the deployment script and associated credentials is tightly controlled.
Happy deploying!