Here comes the automation part: a CI/CD pipeline using GitHub Actions. The first thing we're going to do is create a file called aws.yml in the .github/workflows folder; as the extension suggests, it is a file that follows the YAML format.
The first thing we are going to specify is the name of the pipeline and the conditions under which it should be executed:
name: Build and deploy lambda-cycles image

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  # Job definitions
For now I want the pipeline to run every time someone pushes something to main and every time someone opens a pull request against main.
The next thing is to define the jobs that are part of the workflow – in this case we will have two: one to prepare our application and the other to publish it.
Preparing the image – build
To define a job we must specify the steps that form part of it; optionally, we can also give it a friendlier name and choose the type of runner it executes on. We won't be doing anything complicated, so ubuntu-latest works fine for us.
build:
  name: Build
  runs-on: ubuntu-latest
  steps:
The next thing to do is to specify the steps that are part of the job:
Steps
First we need a copy of the code that was just pushed to main; for that we use the checkout action:
- name: Checkout
  uses: actions/checkout@v2
Since we are going to interact with AWS, we need to configure the credentials in the runner. Amazon offers an action for this; we just need to supply our credentials (which we previously stored as secrets in our repo).
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: eu-west-1
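Note that the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY secrets must already exist in the repository for this step to work. One way to create them (this is just a sketch using the GitHub CLI with placeholder values; the web interface under Settings → Secrets works just as well) looks like this:

```shell
# Store the AWS credentials as encrypted repository secrets.
# The names must match the ones referenced in the workflow file.
gh secret set AWS_ACCESS_KEY_ID --body "<your-access-key-id>"
gh secret set AWS_SECRET_ACCESS_KEY --body "<your-secret-access-key>"

# Confirm they exist (secret values are never displayed).
gh secret list
```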
These next few steps are specific to my implementation since I am using pipenv: we need to set up Python, install pipenv, and then install the dependencies:
- name: Set up Python 3.8
  uses: actions/setup-python@v1
  with:
    python-version: 3.8
- name: Install pipenv
  run: |
    pip install pipenv
    pipenv install
The next three steps are all about creating the image that will be used to create our lambda instance. The first step calls make container, the utility I added in past posts to build the image and tag it as lambda-cycles. The second step exports this image to a compressed file. The third step stores the newly exported Docker image as an artifact, which we will use in the next job, where we will deploy it.
- name: Build lambda-cycles image
  run: make container
- name: Pack docker image
  run: docker save lambda-cycles > ./lambda-cycles.tar
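For context, the container target in the Makefile comes from earlier posts in this series; conceptually it boils down to something like the following (a sketch only – the exact Dockerfile location and build options are assumptions here):

```shell
# Roughly what `make container` does: build the image from the local
# Dockerfile and tag it as lambda-cycles, so that the subsequent
# `docker save lambda-cycles` step can reference it by name.
docker build --tag lambda-cycles .
```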
- name: Temporarily save Docker image and dependencies
  uses: actions/upload-artifact@v2
  with:
    name: lambda-cycles-build
    path: |
      ./shapefiles/
      ./requirements.txt
      ./lambda-cycles.tar
    retention-days: 1
Next we must set up Terraform, initialise it, and finally plan the creation of the infrastructure. For the first task HashiCorp offers a pre-defined action; for the other two, running the terraform command-line tool is enough:
- name: Set up terraform
  uses: hashicorp/setup-terraform@v1
- name: Terraform init
  run: terraform -chdir=terraform init
- name: Terraform plan
  run: terraform -chdir=terraform plan
Creating infrastructure in AWS
Once GitHub Actions has finished the build job, we can move on to the deploy job. To define it (in addition to the name and runner information) I indicate that it depends on the build job and, very importantly, that it should only be executed when the workflow is running on the main branch – see the if condition below.
deploy:
  name: Deploy
  runs-on: ubuntu-latest
  needs: build
  if: github.ref == 'refs/heads/main'
  steps:
Steps
As usual, we get a copy of the code with actions/checkout@v2:
- name: Checkout
  uses: actions/checkout@v2
We configure our credentials again – remember, each job is executed on a different runner:
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: eu-west-1
Do you remember that in the previous job we created an artifact named lambda-cycles-build containing a Docker image and some other dependencies? Well, now we are going to download it, and then use docker load to import the image and make it available to Docker.
- name: Retrieve saved Docker image
  uses: actions/download-artifact@v2
  with:
    name: lambda-cycles-build
    path: ./
- name: Docker load
  run: docker load < ./lambda-cycles.tar
Finally, we set up Terraform again, initialise it and, lastly, apply the planned changes. Note that we use the -auto-approve option so that the changes are applied without the need for human interaction.
- name: Set up terraform
  uses: hashicorp/setup-terraform@v1
- name: Terraform init
  run: terraform -chdir=terraform init
- name: Terraform apply
  run: terraform -chdir=terraform apply -auto-approve
And that is it – this concludes the 6-part series explaining how to automatically build and deploy a lambda from GitHub.
This is what the repository looks like by the end of this post.
Remember that you can find me on Twitter at @feregri_no to ask me about this post – if something is not so clear or you found a typo. The final code for this series is on GitHub and the account tweeting the status of the bike network is @CyclesLondon.