In this post I will show how to deploy a Django application (easy to replicate for any other Python framework, such as Flask) on AWS while saving a lot of money (depending on your application's traffic and which AWS services you intend to use, it could even be free...forever!)
AND THIS POST WILL BE QUITE LONG, SORRY FOR THAT!
In July, I finished the development of my educational project for children using Django, and my first question was how to deploy this application in the cloud in the cheapest way possible.
My first option was Heroku, because it is quite easy to deploy there and to set up a continuous deployment process for free, but if I ever had to scale my application, Heroku could get a lot more expensive, so I decided to think it through a bit more.
In that process, I found a project that was a game changer in my quest. The project is Zappa; below is the description of the project taken from its README on GitHub:
Zappa makes it super easy to build and deploy server-less, event-driven Python applications (including, but not limited to, WSGI web apps) on AWS Lambda + API Gateway. That means infinite scaling, zero downtime, zero maintenance - and at a fraction of the cost of your current deployments!
It really accomplishes that: deploying the app on AWS is quite easy, and the hardest part of the deployment is configuring the IAM roles and policies (I think this is the hardest part of any deployment there :-) ). So, without further ado, let me show you how I managed to deploy my Django app using Zappa.
PS: It is out of the scope of this article to explain in depth how AWS services work; I'm assuming you have some knowledge about them. I also won't use the console to configure AWS services, I'll use the AWS CLI, so if you want to follow along, check here how to set up the CLI
PS 2: Set up the AWS CLI with a user that has Administrator access, but NEVER EVER USE THE ROOT ACCOUNT! And keep the access and secret keys safe, for example in a password manager
PS 3: All commands and examples were run on a Linux workstation
1. Creating the required AWS S3 Bucket
Zappa will use this S3 bucket to upload the Lambda-compatible archive generated by the deploy command, which I'll show later.
To create the bucket using the CLI:
aws s3api create-bucket --bucket name_of_the_bucket --region region_of_your_choice --create-bucket-configuration LocationConstraint=region_of_your_choice
The region parameter and the LocationConstraint configuration are only required if you are creating the bucket outside the us-east-1 region; if you choose us-east-1, you can remove both from the command.
Remember that bucket names are globally unique, so if you receive an error like BucketAlreadyExists, you have to choose a new name.
If everything goes well, you should receive this return from the command:
{
"Location": "http://name_of_your_bucket.s3.amazonaws.com/"
}
PS 4: If your application uses SQLite as its database, I recommend creating one more bucket for that, and if your application serves static and media files, I also recommend creating another bucket for those. We will see later how to manage them inside AWS.
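For example, assuming one extra bucket for the SQLite file and another for static/media files (bucket names are placeholders), the command is the same as above:
aws s3api create-bucket --bucket name_of_the_sqlite_bucket --region region_of_your_choice --create-bucket-configuration LocationConstraint=region_of_your_choice
aws s3api create-bucket --bucket name_of_the_static_media_bucket --region region_of_your_choice --create-bucket-configuration LocationConstraint=region_of_your_choice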
2. Configuring the needed IAM policies, roles, group and user
For better security (and as a best practice), we need to set up some IAM objects exclusively for Zappa, restricting which AWS resources it will have access to.
2.1 Creating the role
Let's start by creating the role that will be passed to the Lambda function created by Zappa during the deployment process.
To create the role using the CLI, we need a JSON file that represents which AWS services will be allowed to call other AWS services on the Zappa user's behalf (the Assume Role Policy Document).
Create the JSON file in some directory on your computer with this content:
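(This trust policy allows API Gateway, Lambda and CloudWatch Events to assume the role; it is the same document that appears in the AssumeRolePolicyDocument field of the create-role output below. Here it is saved as /tmp/zappa_assume_role.json.)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "apigateway.amazonaws.com",
                    "lambda.amazonaws.com",
                    "events.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}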
So, let's go back to the CLI to create the role:
aws iam create-role --role-name my-role --assume-role-policy-document file:///tmp/zappa_assume_role.json
my-role could be any name you want, and file:///tmp/zappa_assume_role.json must match the path of the file you created before.
This will be the return if the command succeeded:
{
"Role": {
"Path": "/",
"RoleName": "my-role",
"RoleId": "AROA3IDJ3HNEWIQZA5IQZ",
"Arn": "arn:aws:iam::773316098889:role/my-role",
"CreateDate": "2020-08-29T19:24:37Z",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": [
"apigateway.amazonaws.com",
"lambda.amazonaws.com",
"events.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
}
}
Save the value returned in the Arn field somewhere; we will need it later.
2.2 Attaching the policy to the role
Now, it's time to attach a policy to the role that we created before.
An AWS policy is also a JSON document; it represents the permissions that will be granted to the role it is attached to for using services inside AWS.
Once again, create a JSON file in some directory on your computer with this content:
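(The exact permissions depend on what your application needs at runtime. The policy below is a minimal sketch, not necessarily the original file: it lets the Lambda function write its logs to CloudWatch and read/write the bucket(s) created in step 1; the bucket name is a placeholder. Save it, for example, as /tmp/zappa_policy.json.)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::name_of_your_bucket",
                "arn:aws:s3:::name_of_your_bucket/*"
            ]
        }
    ]
}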
Again, let's go back to the CLI and execute the following command:
aws iam put-role-policy --role-name my-role --policy-name my-policy --policy-document file:///tmp/zappa_policy.json
my-role must match the name of the role that we created before, my-policy could be any name you want, and file:///tmp/zappa_policy.json must match the path of the file you created before.
If the command succeeded, nothing will be returned.
But if you want to confirm that everything went well, you can run this command:
aws iam get-role-policy --role-name my-role --policy-name my-policy | head -3
The return should be:
{
"RoleName": "my-role",
"PolicyName": "my-policy",
2.3 Creating the group
In the last two steps, we defined the role and policy that will be passed to the Lambda function created by Zappa during the deployment.
Now, we have to define the group, the user, and the policies that will allow Zappa to create the resources inside AWS (e.g. create the Lambda function, create the API Gateway, store the package in S3, etc.).
We will start by creating the group, executing the following CLI command:
aws iam create-group --group-name my-group
This will be the return if the command succeeded:
{
"Group": {
"Path": "/",
"GroupName": "my-group",
"GroupId": "AGPA3IDJ3HNEYAXDIXQ5K",
"Arn": "arn:aws:iam::773316098889:group/my-group",
"CreateDate": "2020-08-29T20:30:14Z"
}
}
2.4 Attaching the policy for Zappa general permissions to the group
We will do a similar job to what we did in step 2.2, but instead of creating just one JSON file, we will create two, because we have to attach two policies to the group: one policy for general permissions and another one specifically for S3 permissions.
Let's start with the general one. Create the JSON file in some directory on your computer with the content below, but this time we need to make a small change in the JSON before using it with the CLI: find full_arn_from_created_role inside the content and replace it with the Arn of the role we created previously.
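(A rough sketch of what this general policy grants; treat the action list as illustrative rather than the exact original: it allows the Zappa user to pass the role we created to Lambda, which is where full_arn_from_created_role goes, and to manage the Lambda, API Gateway, CloudFormation, CloudWatch Events and CloudWatch Logs resources that Zappa creates during the deployment.)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "full_arn_from_created_role"
        },
        {
            "Effect": "Allow",
            "Action": [
                "lambda:*",
                "apigateway:*",
                "events:*",
                "cloudformation:*",
                "logs:*"
            ],
            "Resource": "*"
        }
    ]
}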
After that let's go to the CLI and run the command that will attach the policy to the group:
aws iam put-group-policy --group-name my-group --policy-document file:///tmp/zappa_general_policy.json --policy-name my-general-policy
my-group must match the name of the group that we created before, my-general-policy could be any name you want (but cannot be the same as the policy created in step 2.2), and file:///tmp/zappa_general_policy.json must match the path of the file you created before.
If the command succeeded, nothing will be returned.
But if you want to confirm that everything went well, you can run this command:
aws iam get-group-policy --group-name my-group --policy-name my-general-policy | head -3
The return should be:
{
"GroupName": "my-group",
"PolicyName": "my-general-policy",
2.5 Attaching the policy for Zappa specific S3 permissions to the group
Now, we will attach the S3-specific policy to the group, repeating the same steps as before.
The content of the JSON file will be different, and again we have to make a small change in it before using it with the CLI: find full_arn_from_s3_bucket inside the content and replace it with the ARN of the S3 bucket created in step 1.
The ARN of an S3 bucket follows this pattern: arn:aws:s3:::name_of_your_bucket
If you created more than one bucket in step 1, you must add their ARNs as well.
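(Again, a minimal illustrative sketch rather than the exact original: it lets Zappa list, upload, read and delete objects in the deployment bucket, with full_arn_from_s3_bucket as the placeholder to replace.)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "full_arn_from_s3_bucket",
                "full_arn_from_s3_bucket/*"
            ]
        }
    ]
}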
After that, let's repeat the same commands from the previous step:
aws iam put-group-policy --group-name my-group --policy-document file:///tmp/zappa_s3_policy.json --policy-name my-s3-policy
my-group must match the name of the group that we created before, my-s3-policy could be any name you want (but cannot be the same as the policies created in steps 2.2 and 2.4), and file:///tmp/zappa_s3_policy.json must match the path of the file you created before.
If the command succeeded, nothing will be returned.
But if you want to confirm that everything went well, you can run this command:
aws iam get-group-policy --group-name my-group --policy-name my-s3-policy | head -3
The return should be:
{
"GroupName": "my-group",
"PolicyName": "my-s3-policy",
2.6 Creating the user, attaching it to the group and creating the access key
At last, we have to create the user that will be used exclusively by Zappa, attach it to the group created previously, and generate the user's access key.
Let's start by creating the user:
aws iam create-user --user-name my-user
The return should be:
{
"User": {
"UserName": "my-user",
"Path": "/",
"CreateDate": "2013-06-08T03:20:41.270Z",
"UserId": "AIDAIOSFODNN7EXAMPLE",
"Arn": "arn:aws:iam::123456789012:user/Bob"
}
}
Now, we will attach this user to the group:
aws iam add-user-to-group --user-name my-user --group-name my-group
If the command succeeded, nothing will be returned.
But if you want to confirm that everything went well, you can run this command:
aws iam get-group --group-name my-group
The return should be:
{
"Users": [
{
"Path": "/",
"UserName": "my-user",
"UserId": "AIDA3IDJ3HNEVBCYBTCVB",
"Arn": "arn:aws:iam::773316098889:user/my-user",
"CreateDate": "2020-08-29T21:55:40Z"
}
],
"Group": {
"Path": "/",
"GroupName": "my-group",
"GroupId": "AGPA3IDJ3HNEYAXDIXQ5K",
"Arn": "arn:aws:iam::773316098889:group/my-group",
"CreateDate": "2020-08-29T20:30:14Z"
}
}
Now, finally let's generate the user access key:
aws iam create-access-key --user-name my-user
The return should be:
{
"AccessKey": {
"UserName": "my-user",
"Status": "Active",
"CreateDate": "2015-03-09T18:39:23.411Z",
"SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY",
"AccessKeyId": "AKIAIOSFODNN7EXAMPLE"
}
}
Save the SecretAccessKey and AccessKeyId in a safe place (it is a best practice to do that); we will need them later.
3. Considerations before installing and configuring Zappa
There are two things that Zappa won't handle for you inside AWS: your static and media files and the connection to a database.
So, if you want to use AWS for those as well, you will have to handle them inside your Python app.
In my deployment, to keep things simple, I chose SQLite as the database and stored the database file in an S3 bucket, and I also chose to store the static and media files in another S3 bucket (that's why I warned you to create some extra buckets in step 1).
Other options are available, for example Aurora or RDS as the database (but if you use them, you will have to update the IAM policies that we created before).
To set up Django to use S3 both to store the SQLite database and to store the static/media files, we need to install two Python packages that will be responsible for that.
Remember to activate your virtualenv first, if you are using one, or use pipenv
pip install django-s3-storage # Will handle the static/media files
pip install django-s3-sqlite # Will handle the SQLite database
Example of how to configure Django's settings.py to manage the static/media files and the SQLite database using S3:
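(This is a minimal sketch based on the documented settings of django-s3-storage and django-s3-sqlite; the bucket names are placeholders for the extra buckets created in step 1.)
# settings.py (only the parts relevant to S3)

INSTALLED_APPS = [
    # ... your other apps ...
    "django_s3_storage",
]

# SQLite database file stored in its own S3 bucket (django-s3-sqlite)
DATABASES = {
    "default": {
        "ENGINE": "django_s3_sqlite",
        "NAME": "db.sqlite3",
        "BUCKET": "name-of-your-sqlite-bucket",
    }
}

# Static and media files stored in another S3 bucket (django-s3-storage)
DEFAULT_FILE_STORAGE = "django_s3_storage.storage.S3Storage"
STATICFILES_STORAGE = "django_s3_storage.storage.StaticS3Storage"
AWS_S3_BUCKET_NAME = "name-of-your-static-media-bucket"
AWS_S3_BUCKET_NAME_STATIC = "name-of-your-static-media-bucket"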
4. Installing and configuring Zappa
Now we will install and configure Zappa; to install it we will use pip.
Remember to activate your virtualenv first, if you are using one, or use pipenv
pip install zappa
To configure Zappa, you need to create the file zappa_settings.json in the root of your Django application repository (the same place as manage.py).
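Below is a minimal example of what the file can look like (the values are placeholders, the runtime should match your Python version, and zappa init generates a similar skeleton):
{
    "dev": {
        "aws_region": "region_of_your_choice",
        "django_settings": "name_of_your_django_project.settings",
        "project_name": "name_of_your_project",
        "runtime": "python3.8",
        "s3_bucket": "name_of_the_bucket"
    }
}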
dev is the name of the stage that will be created inside the API Gateway, django_settings must be name_of_your_django_project.settings, and s3_bucket must be the bucket that we created in step 1.
You can check my project repo on GitHub for more information.
5. Deploying the application
Finally, we got to what matters :-)
Before running the command that finally deploys the application to AWS, we need to define the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables with the access key and secret key of the user that we created in step 2.6.
Remember to activate your virtualenv first, if you are using one, or, if you are using pipenv, run pipenv shell first.
export AWS_ACCESS_KEY_ID=access_key_of_the_user_created_before
export AWS_SECRET_ACCESS_KEY=secret_key_of_the_user_created_before
Now, we can run the command to deploy our application for the first time:
zappa deploy dev
dev must be the name of the stage that you defined in zappa_settings.json
If everything goes well, the return should be something similar to:
...
Deploying API Gateway...
Deployment complete!: https://wf31r9h75a.execute-api.us-west-2.amazonaws.com/dev
You can use the URL above to test whether your application is running properly; if it isn't, you can use this command to check the logs and see what happened:
zappa tail dev
And to update your application, you don't need to run the deploy command again, just run:
zappa update dev
So, that's all folks! Below I'm leaving some links with additional information:
Deploying using custom domain and SSL certificate
Deploying using Aurora as database
Thanks a lot for the patience!
And have a nice day!