Hi everyone, I am back with another project. This time we will build a CI/CD pipeline that deploys our application to AWS. In this project, we will use Terraform to deploy our Django application to our AWS account.
Below are the tools and services that we will use and their categories.
https://gist.github.com/apotitech/eefebc53ce4c154f180defdc758d385f
Before we get into today's hands on writeup, let us start with the different parts of our project from start to finish.
Parts of Project
- Minimal Working Setup
- Connecting PostgreSQL RDS
- GitLab CI/CD
- Namecheap Domain + SSL
- Celery and SQS
- Connecting to Amazon S3
- ECS Autoscaling
We are starting with the first part today.
Minimal Working Setup
In this part, we will first set up our AWS account, then create our Terraform project, and finally define the resources for our web application. By the end, we will have our Django application deployed on AWS ECS and accessible through the Load Balancer URL.
Creating Django project
Let’s start with our Django application. Create a new folder and initialize a default Django project.
$ mkdir django-aws && cd django-aws
$ mkdir django-aws-backend && cd django-aws-backend
$ git init --initial-branch=main
$ python3.10 -m venv venv
$ . ./venv/bin/activate
(venv) $ pip install Django==3.2.13
(venv) $ django-admin startproject django_aws .
(venv) $ ./manage.py migrate
(venv) $ ./manage.py runserver
Now that our Django server is set up, let's check the Django greeting page at http://127.0.0.1:8000, make sure Django is running, and then kill the development server.
Next, we are going to dockerize our application. First, we will add a requirements.txt file to the Django project:
Django==3.2.13
For testing purposes, enable debug mode and allow all hosts in our settings.py:
DEBUG = True
ALLOWED_HOSTS = ['*']
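Hardcoding these values is fine for this part of the series. If you later want to drive them from the environment instead, a minimal sketch could look like this (the `DJANGO_DEBUG` and `DJANGO_ALLOWED_HOSTS` variable names are hypothetical, not built-in Django settings):

```python
import os

# Hypothetical env-driven settings: fall back to the permissive
# test values used in this guide when the variables are unset.
DEBUG = os.environ.get("DJANGO_DEBUG", "true").lower() == "true"
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "*").split(",")
```

With the variables unset, this behaves exactly like the hardcoded values above.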
To containerize our app, we need a Dockerfile. So next, we add a Dockerfile to our working directory:
FROM python:3.10-slim-buster
# Open http port
EXPOSE 8000
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV DEBIAN_FRONTEND noninteractive
# Install pip and gunicorn web server
RUN pip install --no-cache-dir --upgrade pip
RUN pip install gunicorn==20.1.0
# Install requirements.txt
COPY requirements.txt /
RUN pip install --no-cache-dir -r /requirements.txt
# Copy application files
WORKDIR /app
COPY . /app
Now let us go ahead and build and run our Docker container locally.
$ docker build . -t django-aws-backend
$ docker run -p 8000:8000 django-aws-backend gunicorn -b 0.0.0.0:8000 django_aws.wsgi:application
Let's go to http://127.0.0.1:8000 and verify that we have successfully built and run the Docker image with our Django application. You should see exactly the same greeting page as with the runserver command.
There may be some files that we do not want to push to our git repo. For that, let's add a .gitignore file:
*.sqlite3
.idea
.env
venv
.DS_Store
__pycache__
static
media
Next, we will continue with our git steps. Now we commit our changes:
$ git add .
$ git commit -m "initial commit"
Good job making it this far.
For now, we are done with the Django part. In the next steps, we deploy our application on AWS. But first, we need to configure our AWS account.
We need to create credentials for the AWS CLI and Terraform, so we'll create a new user with administrator access to the AWS account. This user will be able to create and change resources in your AWS account.
First, we will go to the IAM service, select the “Users” tab, and click “Add Users”.
Enter a username and choose the “Access key — Programmatic access” option. This option means that your user will have an access key to use the AWS API. This user won't be able to sign in to the AWS web console.
Select the “Attach existing policies directly” tab and select “AdministratorAccess.” Then click next and skip the “Add tags” step.
Review user details and click “Create user.”
Yay, we have successfully created our user!
Now we need to save the Access key ID and Secret access key in a safe place. Beware of committing these keys to public repositories or other public places: anybody who has these keys can manage your AWS account.
Now we can configure the AWS CLI and check our credentials. We will use the us-east-2 region in this guide. Feel free to change it.
$ aws configure
AWS Access Key ID [None]: AKU832EUBFEFWICT
AWS Secret Access Key [None]: 5HZMEFi4ff4F4DEi24HYEsOPDNE8DYWTzCx
Default region name [us-east-2]: us-east-2
Default output format [table]: table
$ aws sts get-caller-identity
-----------------------------------------------------
| GetCallerIdentity |
+---------+-----------------------------------------+
| Account| 947134793474 | <- AWS_ACCOUNT_ID
| Arn | arn:aws:iam::947134793474:user/admin |
| UserId | AIDJEFFEIUFBFUR245EPV |
+---------+-----------------------------------------+
Remember your AWS_ACCOUNT_ID. We'll use it in the next steps.
Now we are all set up to create our Terraform project!
Creating Terraform Project
Let’s create a new folder django-aws/django-aws-infrastructure for our Terraform project.
cd ..
mkdir django-aws-infrastructure && cd django-aws-infrastructure
git init --initial-branch=main
Let us add a provider.tf file:
provider "aws" {
region = var.region
}
Here, we defined our AWS provider, using a Terraform variable to specify the AWS region. Let’s define the region and project_name variables in the variables.tf file:
variable "region" {
description = "The AWS region to create resources in."
default = "us-east-2"
}
variable "project_name" {
description = "Project name to use in resource names"
default = "django-aws"
}
Now, we will run terraform init to create a new Terraform working directory, download the AWS provider, and set up everything else that will be needed.
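Though not required for this guide, it is good practice to also pin the Terraform and provider versions so terraform init resolves the same versions on every machine. A minimal versions.tf sketch (the version constraints here are examples, adjust them to your setup):

```hcl
terraform {
  # Pin the Terraform CLI version (example constraint)
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Example constraint: any 4.x release of the AWS provider
      version = "~> 4.0"
    }
  }
}
```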
Now we are ready to create resources for our infrastructure.
Resources we will use
Here is the plan of the services we will use to configure our project:
https://gist.github.com/apotitech/6c1ac26ee528eacb7dbeae7635253058
Elastic Container Registry (ECR)
resource "aws_ecr_repository" "backend" {
name = "${var.project_name}-backend"
image_tag_mutability = "MUTABLE"
}
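Optionally (this is not part of this guide's setup), you could attach a lifecycle policy so repeated pushes of the :latest tag don't accumulate untagged images in the registry. A sketch:

```hcl
# Optional: expire untagged images a week after they are pushed,
# so superseded :latest layers don't pile up in the repository.
resource "aws_ecr_lifecycle_policy" "backend" {
  repository = aws_ecr_repository.backend.name

  policy = jsonencode({
    rules = [{
      rulePriority = 1
      description  = "Expire untagged images after 7 days"
      selection = {
        tagStatus   = "untagged"
        countType   = "sinceImagePushed"
        countUnit   = "days"
        countNumber = 7
      }
      action = { type = "expire" }
    }]
  })
}
```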
Next, we will run terraform plan. We'll see that Terraform is going to create an ECR repository.
Terraform will perform the following actions:
# aws_ecr_repository.backend will be created
+ resource "aws_ecr_repository" "backend" {
...
}
Plan: 1 to add, 0 to change, 0 to destroy.
If the plan looks good, we will go ahead and run terraform apply. We will be prompted to accept or refuse the plan; type yes to confirm the changes.
aws_ecr_repository.backend: Creating...
aws_ecr_repository.backend: Creation complete after 1s [id=django-aws-backend]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Great, our repository is created. Now, let’s push our Django image to this new registry. To do this, we need to build an image tagged ${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/django-aws-backend:latest, authenticate to ECR, and push the image. In my case, the account ID is 947134793474 and the region is us-east-2:
$ cd ../django-aws-backend
$ docker build . -t 947134793474.dkr.ecr.us-east-2.amazonaws.com/django-aws-backend:latest
$ aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 947134793474.dkr.ecr.us-east-2.amazonaws.com
$ docker push 947134793474.dkr.ecr.us-east-2.amazonaws.com/django-aws-backend:latest
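To avoid hardcoding the account ID and region in each command, you could first derive the image URI from shell variables (a hypothetical convenience, not part of the original steps — substitute your own account ID and region):

```shell
# Hypothetical helper: build the ECR image URI once from variables,
# then reuse it for docker build / login / push.
AWS_ACCOUNT_ID=947134793474
REGION=us-east-2
IMAGE="${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/django-aws-backend:latest"
echo "$IMAGE"
# then, for example:
#   docker build . -t "$IMAGE" && docker push "$IMAGE"
```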
Network
Now, let’s create a network for our application. First, add this block to the variables.tf file:
variable "availability_zones" {
description = "Availability zones"
default = ["us-east-2a", "us-east-2c"]
}
Next, we will create a network.tf file with the following content:
https://gist.github.com/apotitech/ba6d140d902cef8e7038118829335d7e
With this code, we define the following resources:
- Virtual Private Cloud (VPC).
- Public and private subnets in different availability zones.
- Internet Gateway for internet access for our public subnets.
- NAT Gateway for internet access for our private subnets.
Next, we will apply what we just coded. Run the terraform apply command to apply the changes on AWS.
Load Balancer
We continue building our infrastructure. Next, we will create a load_balancer.tf file for our load balancer, with the following content:
https://gist.github.com/apotitech/da3f7ba10640b6c1809c2894c7525d62
Let us look at the resources this code will create:
- Application Load Balancer.
- LB Listener to receive incoming HTTP requests.
- LB Target Group to route requests to the Django application.
- Security Group to control incoming traffic to the load balancer.
Before we proceed, we want to know the load balancer URL, so we need Terraform to output it in the terminal. Add an outputs.tf file with the following code and run terraform apply to create the load balancer and see its hostname.
output "prod_lb_domain" {
value = aws_lb.prod.dns_name
}
We will see our ALB domain in the output.
Outputs:
prod_lb_domain = "prod-57218461274.us-east-2.elb.amazonaws.com"
We can now check this domain in our browser. It should respond with a 503 Service Temporarily Unavailable error, because there are no targets associated with the target group yet.
In the next step, we'll deploy the Django application that will be accessible by this URL.
Application
Last but not least, we’ll create the application using an ECS Service. For this, we add an ecs.tf file with the following content:
https://gist.github.com/apotitech/bfb372e92d2b88fee935ee39a8f68ed7
We will also add the ecs_prod_backend_retention_days variable to the variables.tf file:
variable "ecs_prod_backend_retention_days" {
description = "Retention period for backend logs"
default = 30
}
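For illustration, the CloudWatch log group that consumes this variable might look like the following sketch (the resource and name here are assumptions; the authoritative definition is in the ecs.tf gist above):

```hcl
# Sketch: a log group whose retention is driven by the variable above.
resource "aws_cloudwatch_log_group" "prod_backend" {
  name              = "prod-backend"
  retention_in_days = var.ecs_prod_backend_retention_days
}
```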
Then we add a container definition in a new file templates/backend_container.json.tpl and run terraform apply.
[
{
"name": "${name}",
"image": "${image}",
"essential": true,
"links": [],
"portMappings": [
{
"containerPort": 8000,
"hostPort": 8000,
"protocol": "tcp"
}
],
"command": ${jsonencode(command)},
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "${log_group}",
"awslogs-region": "${region}",
"awslogs-stream-prefix": "${log_stream}"
}
}
}
]
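To see how the ${name}, ${image}, and other placeholders get filled in, here is a sketch of a task definition rendering this template with Terraform's templatefile function. The resource, parameter values, and names below are assumptions for illustration; the ecs.tf gist above has the authoritative version:

```hcl
# Sketch: rendering the container definition template into an ECS task
# definition for Fargate (names and sizes are illustrative assumptions).
resource "aws_ecs_task_definition" "prod_backend_web" {
  family                   = "backend-web"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  container_definitions = templatefile(
    "templates/backend_container.json.tpl",
    {
      name       = "prod-backend-web"
      image      = "${aws_ecr_repository.backend.repository_url}:latest"
      command    = ["gunicorn", "-b", ":8000", "django_aws.wsgi:application"]
      region     = var.region
      log_group  = "prod-backend"
      log_stream = "prod-backend-web-stream"
    }
  )
  # execution_role_arn / task_role_arn come from the IAM resources in the gist
}
```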
Our code will create:
- ECS Cluster.
- ECS Task Definition.
- ECS Service to run tasks with the specified definition in the ECS cluster.
- IAM policies to allow tasks to access AWS resources.
- CloudWatch log group and stream for log collection.
Now, go to the ECS section of the AWS Console and look at our running service and tasks.
Now, check the Load Balancer domain in a browser again to make sure our setup works. This time, we will see Django’s starting page.
Awesome, great job making it this far: our setup is working! It’s time to commit our changes in the django-aws-infrastructure repo. First, let's add a .gitignore file:
https://gist.github.com/apotitech/5f4afbe7826322f8cf6de112d150cd8f
Our code is all ready to go. Now let's commit our changes:
$ git add .
$ git commit -m "initialize infrastructure"
Congratulations
Yay! We have now deployed our Django web application on AWS with ECS Service + Fargate. But for now it works with a SQLite file database, which is recreated on every service restart, so our app cannot persist any data yet. In the next article we’ll connect Django to AWS RDS PostgreSQL.
If you need technical consulting on your project or have any questions or suggestions, please comment below or connect with me directly on LinkedIn.
Do not forget the 👏❤️ and share if you like this content!
Thank you for joining me, and best of luck with your AWS endeavors!