Tarlan Huseynov

GitOps for CloudFront and S3

Front-end deployments in GitOps style

This strategy uses ArgoCD, EKS, and AWS services like CloudFront and S3 to streamline deployments, improve performance, and maintain best practices.

Table of Contents

  1. Introduction
  2. Building and Syncing to S3
  3. Deploying with Helm Chart and Kubernetes Job
  4. Conclusion

In the world of modern web development, deploying front-end applications efficiently and reliably is a key challenge. As teams adopt GitOps strategies to streamline and automate deployments, certain complexities arise, particularly when integrating with AWS services like CloudFront and S3.

So let's assume that, ideally, all our workloads are containerized and run on a Kubernetes (EKS) platform, that security checks, automations, tests, and pipelines are in place, and that ArgoCD and supplementary tools already handle our deployments.

Now one common dilemma is deciding how to manage front-end deployments consistently in GitOps style while maintaining the benefits of using CloudFront for caching and performance optimization. Some teams consider moving front-end assets to containers for consistency, but this can introduce unnecessary complexity and deviate from best practices.

When employing a centralized GitOps strategy, it’s crucial to keep the deployment process consistent and manageable. However, front-end applications often require specific considerations:

  • Caching and Performance: CloudFront provides a robust solution for caching and delivering static assets, ensuring high performance and low latency.
  • Artifact Management: Synchronizing build artifacts to the correct S3 paths while managing different versions can be challenging.
  • Deployment Automation: Automating the deployment process while ensuring the correct paths and versions are updated in CloudFront.
  • Consistency and Reproducibility: Maintaining consistent and reproducible deployments across environments.
  • Easy and rapid rollbacks — if possible, of course 😌

Introduction

In this article, I will share a solution I implemented to address these challenges. This approach leverages ArgoCD, EKS, AWS CloudFront, and S3, integrating them seamlessly into a GitOps workflow. By using a Kubernetes job with AWS CLI, we can manage CloudFront paths dynamically, ensuring our front-end application is always up-to-date and efficiently delivered.

At a high level, the flow looks like this:

  1. The release branch is merged to main
  2. A new release tag is created from main
  3. GitHub Actions (GHA) is triggered on the release to test and build the code
  4. The generated artifacts are tagged and synced to S3 under the corresponding path
  5. A developer creates a pull request to pass the new version to the GitOps repo
  6. The PR is merged and the values file is updated with the new version (see the sketch after this list)
  7. ArgoCD picks up the change, triggered via webhook or by polling
  8. The values diff causes ArgoCD to create a new Job
  9. The Kubernetes Job calls the CloudFront API to swap the origin path to the new version
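
Steps 5 and 6 boil down to a one-line change in the GitOps repo: a pull request that bumps app.version in the Helm values file shown later in this article. A minimal sketch (version numbers are illustrative):

app:
  name: "frontend-app"
  version: v1.0.10   # bumped from v1.0.9 by the release PR

Rolling back is the same operation in reverse: revert the PR (or set the version back to a previous tag) and ArgoCD re-runs the Job against the older artifacts that are still sitting in S3.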

Building and Syncing to S3

To deploy our front-end application, we use GitHub Actions to handle the build and deployment process. The workflow triggers on new version tags, checks out the repository, sets up the build environment, and configures AWS credentials. It retrieves secrets, installs dependencies, runs tests, builds the application, and syncs the output to an S3 bucket. We can of course run multiple parallel workflows, one per environment, each syncing to its own S3 bucket under a path that reflects the release tag, or sync everything to a single bucket under a path that combines the target environment and the release tag (I prefer the first option for total segregation).

For instance, if we’re deploying a version tagged v1.0.0 to the production environment, the path in S3 would be s3://frontend-production-artifacts/production/v1.0.0.
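
After a few releases, the artifacts bucket simply accumulates one prefix per version, which is what makes rollbacks cheap. An illustrative listing, assuming the bucket and tags above:

aws s3 ls s3://frontend-production-artifacts/production/
# Example output: one prefix per released version
#   PRE v1.0.0/
#   PRE v1.0.9/
#   PRE v1.0.10/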



name: Publish Release Artifact Version
run-name: "Production Release ${{ github.ref_name }} | Publishing Artifact Version | triggered by @${{ github.actor }}"

on:
  push:
    tags:
      - 'v[0-9]+.[0-9]+.[0-9]+'

env:
  ARTIFACTS_BUCKET: frontend-production-artifacts
  AWS_REGION: us-east-2
  ENVIRONMENT: production

jobs:
  build-and-publish:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key:  ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Get Secrets
        uses: aws-actions/aws-secretsmanager-get-secrets@v2
        with:
          secret-ids: frontend-secrets-${{ env.ENVIRONMENT }}
          parse-json-secrets: true

      - name: Install dependencies
        run: yarn install

      - name: Run tests
        run: yarn test

      - name: Build
        run: yarn build

      - name: Sync files to Artifacts bucket
        run: aws s3 sync build/ s3://${{ env.ARTIFACTS_BUCKET }}/${{ env.ENVIRONMENT }}/${{ github.ref_name }} --delete



Deploying with Helm Chart and Kubernetes Job

To automate the deployment process further, we can use a Helm chart that defines a Kubernetes job. This job handles updating the CloudFront origin path for the new version of our application using a Docker image with AWS CLI installed.
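
For context, the ArgoCD side of this is just a regular Application that points at the chart in the GitOps repo, so every values change gets picked up and synced. A minimal sketch, with the repository URL, chart path, and namespace as placeholder assumptions:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: frontend-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops-repo.git   # placeholder
    path: charts/frontend-app                                 # placeholder
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: production                                     # placeholder
  syncPolicy:
    automated:
      prune: true
      selfHeal: true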

We have a values file that provides parameters like the application name, version, Docker image, S3 bucket name, and AWS region:



global:
  region: "us-east-2"

app:
  name: "frontend-app"
  version: v1.0.10
  backOffLimit: "4"
  jobImage: "amazon/aws-cli:2.16.1"
  originS3: "frontend-production-artifacts"



The Kubernetes job uses these values to dynamically set its configuration. It includes the job name, the container image, and environment variables for the S3 bucket, origin path, and AWS region.

When the job runs, it installs jq for JSON processing, retrieves the CloudFront distribution ID based on the S3 bucket name, fetches the current CloudFront configuration, updates the origin path to the new version, and invalidates the CloudFront cache so that the latest version is served to users. Of course, you can always build your own lightweight Docker image with the dependencies (aws-cli and jq) preinstalled, or even roll your own solution using the AWS SDK directly.



apiVersion: batch/v1
kind: Job
metadata:
  name: swap-cf-origin-path-{{ .Values.app.name }}-{{ .Values.app.version }}
spec:
  template:
    spec:
      serviceAccountName: {{ .Values.app.name }}
      containers:
        - name: aws-cli
          image: {{ .Values.app.jobImage }}
          env:
            - name: S3_BUCKET_NAME
              value: {{ .Values.app.originS3 }}
            - name: ORIGIN_PATH
              value: /{{ .Release.Namespace }}/{{ .Values.app.version }}
            - name: AWS_REGION
              value: {{ .Values.global.region }}
          command: ["/bin/sh","-c"]
          args:
            - |
              set -e
              yum install jq -y

              CF_DIST_ID=$(aws cloudfront list-distributions --query "DistributionList.Items[?contains(Origins.Items[].DomainName, '${S3_BUCKET_NAME}.s3.${AWS_REGION}.amazonaws.com')].Id | [0]" --output text)

              OUTPUT=$(aws cloudfront get-distribution-config --id $CF_DIST_ID)
              ETAG=$(echo "$OUTPUT" | jq -r '.ETag')
              DIST_CONFIG=$(echo "$OUTPUT" | jq '.DistributionConfig')

              UPDATED_CONFIG=$(echo "$DIST_CONFIG" | jq --arg path "${ORIGIN_PATH}" '.Origins.Items[0].OriginPath = $path')

              aws cloudfront update-distribution --id $CF_DIST_ID --if-match $ETAG --distribution-config "$UPDATED_CONFIG"

              aws cloudfront create-invalidation --distribution-id $CF_DIST_ID --paths "/*"
      restartPolicy: Never
  backoffLimit: {{ .Values.app.backOffLimit }}



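Once the Job has completed, a quick way to confirm the swap is to reuse the same CLI calls, assuming CF_DIST_ID is resolved the same way the Job does and the release namespace is production:

aws cloudfront get-distribution-config --id "$CF_DIST_ID" \
  | jq -r '.DistributionConfig.Origins.Items[0].OriginPath'
# Expected output after a successful run: /production/v1.0.10

# Pending or completed invalidations for the distribution
aws cloudfront list-invalidations --distribution-id "$CF_DIST_ID" \
  --query 'InvalidationList.Items[].{Id:Id,Status:Status}'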

To allow our Kubernetes job to interact with AWS services like CloudFront and S3, we need to grant the necessary permissions to the job's service account. We can achieve this with IAM Roles for Service Accounts (IRSA) or EKS Pod Identities. Here's how you can configure the IRSA option using Terraform. This setup allows the Kubernetes job to securely perform the actions required to update the CloudFront origin path and invalidate the cache.



data "aws_iam_policy_document" "service_account_assume_role" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.oidc_provider_sts.url, "https://", "")}:aud"
      values   = ["sts.amazonaws.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.oidc_provider_sts.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:${var.namespace}:frontend-app"]
    }

    principals {
      identifiers = [aws_iam_openid_connect_provider.oidc_provider_sts.arn]
      type        = "Federated"
    }
  }
}

resource "aws_iam_role" "service_account_role" {
  assume_role_policy = data.aws_iam_policy_document.service_account_assume_role.json
  name               = "frontend-app-sa-role-${var.namespace}"
  tags               = local.default_tags

  lifecycle {
    create_before_destroy = false
  }
}

resource "aws_iam_policy" "frontend_app_swap_origin_policy" {
  name        = "frontend-app-policy-${var.namespace}"
  path        = "/"
  description = "IAM policy for frontend-app job service account"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action   = [
          "cloudfront:GetDistribution",
          "cloudfront:UpdateDistribution",
          "cloudfront:CreateInvalidation",
          "s3:ListBucket",
          "s3:GetObject"
        ]
        Effect   = "Allow"
        Resource = "*"
      }
    ]
  })
}

# Attach policies to the service account role
resource "aws_iam_role_policy_attachment" "service_account_role" {
  depends_on = [
    aws_iam_role.service_account_role,
    aws_iam_policy.frontend_app_swap_origin_policy
  ]

  role       = aws_iam_role.service_account_role.name
  policy_arn = aws_iam_policy.frontend_app_swap_origin_policy.arn
}


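The Terraform above only creates the role and the policy; the last piece is the Kubernetes service account that the Job references through serviceAccountName. A minimal Helm template for it might look like this, assuming IRSA and that the Terraform namespace variable matches the release namespace (the account ID is a placeholder):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Values.app.name }}
  annotations:
    # IRSA: EKS injects temporary credentials for this IAM role into the Job's pod
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/frontend-app-sa-role-{{ .Release.Namespace }}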

Conclusion


At this point, all we need to do is push the new version after building and syncing it to S3. The job will handle updating the CloudFront origin path and invalidating the cache, ensuring that users always get the latest version of our front-end application. For an even more cosmetically satisfying approach, we could implement an additional Continuous Deployment (CD) solution on top of ArgoCD, such as OctopusDeploy. However, that is a topic for another day and another discussion. 😉 Farewell, folks! 😊

