GitHub Actions is a CI/CD platform from GitHub, tightly integrated with the rest of GitHub. As a SaaS product, it frees the development team from having to set up and manage a separate CI/CD application (a self-hosted model is also available if you prefer that). If your code is on GitHub, this product makes a lot of sense to add to your toolkit.
Using GitHub Actions has the following advantages:
- Event-driven - You can trigger your CI/CD workflows based on events on your GitHub repo.
- Distributed Execution - GitHub manages CI/CD pipeline execution on its own fleet of virtual machines (Runners). You can use Linux, Windows, macOS, ARM, and even containers for this purpose.
- Visualization - See and manage your pipeline live on the GitHub Actions console right inside GitHub and share status quickly.
- Support - GitHub Actions is supported by an active developer community and all major vendors.
- Matrix Build - Test your code on many combinations of target systems (see the sketch just below this list).
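As a quick illustration, here is a hedged sketch of a matrix job (the job name, Node versions, and test command are assumptions for the example, not from the demo repo):

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]   # one job per combination
        node-version: [16, 18, 20]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm test   # runs nine times, once per os x node-version pair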
For public repositories, GitHub Actions is free to run. For private repositories, each plan includes a monthly allotment of free minutes, beyond which Runner usage is billed per minute.
In this article, we will look at how to use GitHub Actions when your application's target environment is AWS.
Set Up GitHub Actions to Work on AWS
- Enable GitHub Actions from the GitHub console
On the GitHub repository in the web console, do the steps as given in the GitHub docs:
https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository
- Set permissions and access in the GitHub console: set Read and Write access for your workflow.
In order for GitHub Actions to do anything on AWS, it needs to be assigned a role. For this:
Create an OIDC Identity Provider in AWS that points to GitHub.
Create a Role in IAM that has all the permissions required for the GitHub Actions workflow.
Edit the role’s trust relationship so that your GitHub repo's workflow can assume the role.
Full steps for doing this are given here:
https://aws.amazon.com/blogs/security/use-iam-roles-to-connect-github-actions-to-actions-in-aws/
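On the workflow side, note that the job must also be allowed to request an OIDC token from GitHub; without this, the federated login cannot happen. A minimal permissions block for such a job looks like this:

permissions:
  id-token: write   # required to request the OIDC JWT from GitHub
  contents: read    # required by actions/checkout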
Workflow, Jobs, Steps
Workflow - the topmost component, which represents a CI/CD pipeline.
Job - Under one workflow we can have multiple jobs, which might have dependencies on one another, e.g. one job for CI and one job for CD.
Step - Each job in turn has multiple steps. These steps are the actual actions that take place. For example, if you want to check out code, you can use the action actions/checkout@v3 in a step. This is an “Action” provided by GitHub to check out your code.
You can either use such actions provided by GitHub, AWS, or other providers, or write your own script (we will expand on this later).
You can pass variables between jobs or between steps. You can set default variables. You can read from and write to the GitHub repository, and use GitHub secrets to store your sensitive information.
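As a hedged illustration (the variable name, script, and secret name here are made up for the example), default variables and secrets look like this:

env:
  AWS_REGION: us-east-1               # default variable available to all jobs
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Use a secret
        run: ./deploy.sh              # hypothetical script
        env:
          API_TOKEN: ${{ secrets.API_TOKEN }}   # stored under repo Settings > Secrets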
The GitHub workflow file is stored as YAML under the default directory .github/workflows/
Now, let's create a very basic workflow.
A Very Basic GitHub Actions Workflow
name: Hello World
on:
  push:
    branches:
      - main # Replace with your repository's main branch name
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Print Hello World
        run: echo "Hello, World"
Save this as main.yml under .github/workflows/ in your repository.
Let’s try to understand the code!
on: This is the trigger section. It determines when your workflow gets triggered: in this case, when someone pushes to the given branch.
jobs: Contains the jobs. We have one job: build.
runs-on: This is the virtual machine on which the job will run. By default each job runs on its own virtual machine, and these machines do not share data. Our job is going to run on an Ubuntu virtual machine. These VMs are called “Runners” by GitHub.
E.g.: ubuntu-latest, windows-latest, macos-latest.
The list of runners is given here: https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners
These are standard runners with 7 GB to 30 GB of RAM and 2-core to 12-core CPUs depending on the type. Team and Enterprise customers can get larger runners that have more memory and CPU, and can come with auto-scaling and static IP addresses.
Under the job section, we have steps. We have two steps here: Checkout code and Print Hello World. Checkout code uses the standard GitHub-provided action actions/checkout@v3.
@v3 here means we are using version 3 of the Action. All actions are open-source, for example:
https://github.com/actions/checkout
In the Print Hello World step, you can see we have used the Linux command echo to print Hello World to the console. run is the keyword used to run commands or scripts.
Connect to AWS From GitHub Actions
We can use the AWS-supplied action aws-actions/configure-aws-credentials to connect to AWS and assume the role we defined. It looks like this:
- name: Configure AWS credentials from the AWS account
  uses: aws-actions/configure-aws-credentials@v2
  with:
    role-to-assume: arn:aws:iam::AWS_ACCOUNT_ID:role/GitHub_actions_role
    role-session-name: GitHub_to_AWS_via_FederatedOIDC
    aws-region: 'us-east-1'
Here, role-to-assume should contain the ARN of the role that you created for the GitHub Actions workflow. role-session-name sets the name of the session created on login, which lets you search CloudWatch or CloudTrail for the role session.
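For example, here is a hedged sketch of searching CloudTrail from the AWS CLI (this assumes CloudTrail records assumed-role events with the session name as the user name, which is how such sessions typically show up):

# list recent CloudTrail events attributed to the role session
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=Username,AttributeValue=GitHub_to_AWS_via_FederatedOIDC \
  --max-results 10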
Use Terraform in GitHub Actions
Setting up the popular IaC tool Terraform is easy in GitHub Actions, using a HashiCorp-supplied action:
- name: Setup Terraform
  uses: hashicorp/setup-terraform@v2
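By default this installs the latest Terraform release; the action also accepts a terraform_version input if you need to pin a version (the version number below is just an example):

- name: Setup Terraform
  uses: hashicorp/setup-terraform@v2
  with:
    terraform_version: 1.5.7   # example pin; use the version your configuration targets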
Example Workflow for Continuous Integration
Code for this demo is kept here: https://github.com/manumaangit/awsug_demo1
Let's assume an application that is deployed as a Docker container. Continuous Integration in this case means building the Docker image every time there is a change to the code. Let's see how we can do that with the job below:
jobs:
  docker-creation:
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
        working-directory: .
    steps:
      - name: Git checkout
        uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::644107485976:role/GitHub_actions_role # change to reflect your IAM role's ARN
          role-session-name: GitHub_to_AWS_via_FederatedOIDC
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1.7.0
        with:
          mask-password: 'true'
      - name: Build and push image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: nodeapp
          IMAGE_TAG: latest
        run: |
          cat src/index.ts   # debug: print a source file to verify the checkout
          echo $ECR_REGISTRY && echo $ECR_REPOSITORY && echo $IMAGE_TAG
          docker build -t $ECR_REPOSITORY:$IMAGE_TAG .
          docker tag $ECR_REPOSITORY:$IMAGE_TAG $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
This workflow runs on an Ubuntu runner and uses bash as its shell. The first step uses the standard actions/checkout@v3 action to check out the code from your repository. Let's say your repository contains all the code for your application plus the Dockerfile that builds the Docker image.
The next step uses the standard aws-actions/configure-aws-credentials@v2 action to set up your AWS credentials using the OIDC provider and the AWS role.
We want to push the Docker image, once built, to AWS ECR. So in the next step we log in to ECR, again using a standard action, aws-actions/amazon-ecr-login@v1.7.0.
The last step uses the run keyword to execute shell commands: the docker commands that build, tag, and push the image to ECR with the proper repository name and tag.
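Note that these snippets reference ${{ env.AWS_REGION }} without defining it; presumably the full workflow declares it once at the top level, along the lines of:

env:
  AWS_REGION: us-east-1   # assumed value, matching the ECR registry region used later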
Example Workflow for Infrastructure Provisioning
We are going to use Terraform as the IaC tool in this example. We have written Terraform configurations that create an EC2 instance and open the requisite ports using security groups. Now we want to invoke Terraform from our GitHub Actions workflow. Let's see the code:
Salient points to note:
needs: [docker-creation] - This, in the job section, means that the job depends on the docker-creation job completing before it can start. This is how we create dependencies between jobs in a workflow.
Again, we use the actions/checkout@v3 action to check out the code (which includes the Terraform .tf files) and connect to AWS using the aws-actions/configure-aws-credentials@v2 action. Then we use hashicorp/setup-terraform@v2 to set up Terraform.
Then we use run to execute the terraform commands to format, plan, and apply.
Note that during terraform init, we ask Terraform to save its state in an S3 bucket, passing the bucket information as environment variables via the env key. (For this partial backend configuration to work, the Terraform code is assumed to declare an empty backend "s3" {} block.)
deploy:
  runs-on: ubuntu-latest
  permissions: write-all
  needs: [docker-creation]
  defaults:
    run:
      shell: bash
      working-directory: .
  steps:
    - name: Git checkout
      uses: actions/checkout@v3
    - name: Configure AWS credentials from AWS account
      uses: aws-actions/configure-aws-credentials@v2
      with:
        role-to-assume: arn:aws:iam::644107485976:role/GitHub_actions_role
        role-session-name: GitHub_to_AWS_via_FederatedOIDC
        aws-region: ${{ env.AWS_REGION }}
    - name: Setup Terraform
      uses: hashicorp/setup-terraform@v2
    - name: Terraform fmt
      id: fmt
      run: terraform fmt
      continue-on-error: true
    - name: Terraform Init
      id: init
      env:
        AWS_BUCKET_NAME: "tf-state-manu16082023"
        AWS_BUCKET_KEY_NAME: "remote-state"
      run: terraform init -backend-config="bucket=${AWS_BUCKET_NAME}" -backend-config="key=${{ env.AWS_BUCKET_KEY_NAME }}" -backend-config="region=${{ env.AWS_REGION }}"
    - name: Terraform Validate
      id: validate
      run: terraform validate -no-color
    - name: Terraform Plan
      id: plan
      run: terraform plan -no-color
      if: github.event_name == 'pull_request'
      continue-on-error: true
    - name: Terraform Plan Status
      if: steps.plan.outcome == 'failure'
      run: exit 1
    - name: Terraform Apply
      if: github.ref == 'refs/heads/main' && github.event_name == 'push'
      run: terraform apply -auto-approve -input=false
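For those if: conditions to ever be true, the workflow's trigger section presumably covers both events, something like:

on:
  push:
    branches: [main]       # enables the Terraform Apply step
  pull_request:
    branches: [main]       # enables the Terraform Plan step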
Passing Values between Steps and Jobs
Many times, you will need to pass a value from a job to a dependent job. Maybe you created an EC2 instance and want to pass its IP address to the next job downstream. Outputs are the way to do this in GitHub Actions.
First, I will add an outputs section to the job like below:
deploy:
  runs-on: ubuntu-latest
  permissions: write-all
  needs: [docker-creation]
  outputs:
    NEWEC2PUBLICIP: ${{ steps.set-ip.outputs.NEWEC2PUBLICIP }}
What this does is take the value you set to NEWEC2PUBLICIP in the step with the ID set-ip, and make it available to downstream jobs as
${{ needs.deploy.outputs.NEWEC2PUBLICIP }}
Let’s understand that complex variable:
needs - because the dependent job will have the first job in its needs section.
deploy - the name of the first job.
outputs - we take the value from the outputs of the deploy job.
NEWEC2PUBLICIP - the actual name of the output from the deploy job.
Now let's see how set-ip actually sets the output variable. It does so by echoing a string of the form NEWEC2PUBLICIP=1.2.3.4 to the GITHUB_OUTPUT environment file.
deploy:
  runs-on: ubuntu-latest
  permissions: write-all
  needs: [docker-creation]
  outputs:
    NEWEC2PUBLICIP: ${{ steps.set-ip.outputs.NEWEC2PUBLICIP }}
  steps:
    - name: Set Output
      id: set-ip
      run: |
        # terraform-bin is the real binary behind setup-terraform's wrapper,
        # so stdout is clean JSON for jq to parse
        echo "NEWEC2PUBLICIP=$(terraform-bin output -json | jq -r '.new_public_ip.value')" >> $GITHUB_OUTPUT
Now in my dependent job I can do:
pull-docker-image:
  runs-on: ubuntu-latest
  needs: [deploy]
  defaults:
    run:
      shell: bash
      working-directory: .
  steps:
    - name: Use the value
      id: use_value
      run: |
        echo "I'm running on ${{ needs.deploy.outputs.NEWEC2PUBLICIP }}"
A Simple CD Workflow
In our case, the application is a Docker container, so our CD consists of deploying the latest Docker image onto the EC2 instance.
Since we set up the infra provisioning job with outputs, we can use that output to get the IP address of the EC2 instance, SSH into it, and then pull the latest Docker image. We can use the action appleboy/ssh-action@v1.0.0 to SSH onto an EC2 instance.
The job would look like below:
pull-docker-image:
  runs-on: ubuntu-latest
  needs: [deploy]
  defaults:
    run:
      shell: bash
      working-directory: .
  steps:
    - name: SSH Action
      uses: appleboy/ssh-action@v1.0.0
      with:
        host: ${{ needs.deploy.outputs.NEWEC2PUBLICIP }}
        username: "ubuntu"
        key_path: ec2_private.pem   # private key file must already exist on the runner
        port: "22"
        script: |
          whoami
          (docker stop app_deploy || true)
          (docker rmi $(docker images --filter "dangling=true" -q --no-trunc) || true)
          docker run --pull=always --rm --name app_deploy -d -p 3000:3000 644107485976.dkr.ecr.us-east-1.amazonaws.com/nodeapp:latest
The job has the deploy job in its needs section, which specifies the dependency. It has one step, which uses the SSH action; after logging onto the EC2 instance, it runs a shell script that stops the running container, removes dangling images (to conserve space), and pulls and runs the latest image.
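One detail worth noting: key_path expects the private key file to be present in the runner's workspace. Here is a hedged sketch of a preceding step that writes it from a GitHub secret (the secret name EC2_PRIVATE_KEY is an assumption, not from the demo repo):

- name: Write SSH private key
  run: |
    echo "${{ secrets.EC2_PRIVATE_KEY }}" > ec2_private.pem   # hypothetical secret name
    chmod 600 ec2_private.pem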
The complete code is on the GitHub link given at the top. You can now take advantage of the abilities of GitHub Actions.
Hope this was helpful!