There are many aspects to consider before calling something a modern web application. For me, the most important one is that the application can dynamically alter its own content without loading a new document, and can handle large or intermittent shifts in traffic to meet demand.
Modern applications use the cloud to be highly available and scalable; they isolate business logic, optimize for reuse and iteration, and remove administrative overhead wherever possible. AWS, for example, offers many services that let you focus on writing your code while infrastructure maintenance tasks are automated.
I said earlier that the most important aspect of your web application is being dynamic (don't worry, we'll get there), but we also can't deny that there are always parts of a website that are fixed.
The best and cheapest services for handling our static content are Amazon CloudFront and Amazon S3.
Let's create an S3 bucket and upload all of our static web content (e.g., HTML, CSS, JavaScript, media files). Then I'll configure a CloudFront distribution to deliver this content from edge locations around the world.
For security, I'll create a CloudFront Origin Access Identity (OAI) and an S3 bucket policy granting read access on the bucket to that identity only. I'll also use AWS Certificate Manager to provision a certificate for our website so we can deliver our content over HTTPS.
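A bucket policy for this setup looks roughly like the sketch below. The bucket name and the OAI ID are placeholders; you'd substitute the values from your own account.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAIReadOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLE_OAI_ID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-static-site-bucket/*"
    }
  ]
}
```

Because only the OAI can read objects, users can no longer bypass CloudFront and fetch files directly from S3.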
Additionally, you can register a domain in Amazon Route 53 for the fully qualified domain name (FQDN) of our website and create a record that points to our CloudFront distribution.
Here I will create a Flask application in a container behind a Network Load Balancer. This will make our frontend website more interactive and, yes, you read that right, dynamic.
I'll use Amazon Elastic Container Service (ECS) with the Fargate launch type so I can deploy containers without having to manage any servers.
Build a Docker image from the Dockerfile containing our application dependencies and push the image to Amazon Elastic Container Registry (ECR). You can troubleshoot your Docker image by running it locally.
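A minimal Dockerfile for a Flask app could look like this sketch; the base image, file layout, and port are assumptions, not the exact ones used here.

```dockerfile
# Minimal Flask container image (illustrative; adjust to your app's layout).
FROM python:3.9-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```

You can then test it locally with `docker build -t mythicalmysfits .` followed by `docker run -p 8080:8080 mythicalmysfits` before pushing to ECR.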
After the Docker image is pushed to ECR, let's create an ECS cluster, service, and task definition so we can choose where (which subnets) our containers will run and set the resources and configuration they require.
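A Fargate task definition is a JSON document along these lines; the family name, account ID, image URI, role, and sizes below are placeholders.

```json
{
  "family": "mythicalmysfits",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "mythicalmysfits-service",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/mythicalmysfits:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```

Fargate requires the `awsvpc` network mode, which is what gives each task its own elastic network interface inside your chosen subnets.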
Let's then create a Network Load Balancer and configure its listener to forward traffic to a target group containing our containers.
Here we will create an Amazon API Gateway that proxies traffic to the internal load balancer. To make this work, we will provision a VPC link so that API Gateway can reach the load balancer sitting inside the VPC.
Let's integrate continuous integration/continuous deployment (CI/CD) into our application so that every change is automatically built and deployed as a new Docker image. It's a good practice that increases development speed: we won't have to go through the same steps every time we want to change the application.
First, let's create an AWS CodeCommit repository where we can store our code, then an artifacts bucket that will store the CI/CD artifacts for every build in our pipeline.
Let's continue with the service that does most of the work in our pipeline: CodeBuild. It provisions a build server using the configuration we provide and executes the steps required to build our Docker image and push every new version to ECR.
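Those steps are usually described in a `buildspec.yml` at the repository root; a minimal sketch might look like this. `$REPOSITORY_URI` and `$AWS_DEFAULT_REGION` are assumed to be set as environment variables on the CodeBuild project.

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate Docker to the private ECR registry.
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $REPOSITORY_URI
  build:
    commands:
      - docker build -t $REPOSITORY_URI:latest .
  post_build:
    commands:
      - docker push $REPOSITORY_URI:latest
```

A real buildspec would typically also tag images with the commit hash and emit an artifact file for the deploy stage, but this captures the build-and-push core.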
Finally, let's arrange our pipeline to build automatically whenever a code change is pushed to our CodeCommit repository, then deliver the image built by our CodeBuild project to ECR; all of this is orchestrated using CodePipeline.
At this point, if you have any issues with your builds, check the Identity and Access Management (IAM) roles you granted to each service, and also check the CloudWatch Logs.
We will create a DynamoDB table to store our data.
While we're at it, let's also create secondary indexes so we can filter items efficiently.
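As a sketch, a table with one global secondary index could be defined like this (the table, attribute, and index names are illustrative, not necessarily the ones used in the final app) and created with `aws dynamodb create-table --cli-input-json file://table.json`:

```json
{
  "TableName": "MysfitsTable",
  "AttributeDefinitions": [
    { "AttributeName": "MysfitId", "AttributeType": "S" },
    { "AttributeName": "Species", "AttributeType": "S" }
  ],
  "KeySchema": [
    { "AttributeName": "MysfitId", "KeyType": "HASH" }
  ],
  "GlobalSecondaryIndexes": [
    {
      "IndexName": "SpeciesIndex",
      "KeySchema": [{ "AttributeName": "Species", "KeyType": "HASH" }],
      "Projection": { "ProjectionType": "ALL" },
      "ProvisionedThroughput": { "ReadCapacityUnits": 5, "WriteCapacityUnits": 5 }
    }
  ],
  "ProvisionedThroughput": { "ReadCapacityUnits": 5, "WriteCapacityUnits": 5 }
}
```

Querying `SpeciesIndex` then lets us filter items by species without scanning the whole table.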
Again, we will create a VPC endpoint so our containers can communicate with DynamoDB without traversing the public internet.
User registration will help us control access to our website's features. Obviously, authenticated users will have more features than unauthenticated users.
Let's create an Amazon Cognito user pool; using this service, we can require users to be authenticated before they can do anything that may affect our database.
We will set up Cognito to require users to verify their email address before they can complete their registration.
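A user pool with email verification can be sketched as the following input for `aws cognito-idp create-user-pool --cli-input-json file://pool.json`; the pool name and password policy are assumptions.

```json
{
  "PoolName": "MysfitsUserPool",
  "UsernameAttributes": ["email"],
  "AutoVerifiedAttributes": ["email"],
  "Policies": {
    "PasswordPolicy": {
      "MinimumLength": 8,
      "RequireNumbers": true
    }
  }
}
```

Listing `email` under `AutoVerifiedAttributes` is what makes Cognito send a verification code that users must confirm before registration completes.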
Again, we will set up an API Gateway, which will be used to authorize actions for our authenticated users.
By implementing this, we can understand the actions our users perform on our website (e.g., clicks). This will help us design the site more efficiently so we can provide a better user experience in the future.
To help us gain insight into user behavior, let's use Amazon Kinesis Data Firehose, which can ingest data and deliver it to several storage destinations (e.g., S3, Elasticsearch, Redshift); here, we'll store the ingested data in S3.
We will again use API Gateway to abstract the requests made to Kinesis Data Firehose.
While the website interactions are being ingested, we will use AWS Lambda to process the data further.
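A Firehose transformation Lambda receives batches of base64-encoded records and must return them with a status. Here's a minimal sketch; the "processed" flag is a placeholder enrichment, since the real processing depends on your click-stream schema.

```python
import base64
import json

def handler(event, context):
    """Kinesis Data Firehose transformation: decode, enrich, and re-encode each record."""
    output = []
    for record in event["records"]:
        # Records arrive base64-encoded; decode to get the original JSON payload.
        payload = json.loads(base64.b64decode(record["data"]))
        payload["processed"] = True  # placeholder enrichment step
        # Re-encode (newline-delimited so records land in S3 one per line).
        data = base64.b64encode((json.dumps(payload) + "\n").encode("utf-8")).decode("utf-8")
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # tells Firehose this record was transformed successfully
            "data": data,
        })
    return {"records": output}
```

Firehose buffers the transformed records and delivers them to the S3 destination we configured earlier.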
- Docker Pull Rate Limits
This made my build fail multiple times, and when I checked my CloudWatch Logs, I saw this error:
toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
I found out that this was caused by Docker's pull rate limit, which Docker, Inc. announced and put into effect on November 2, 2020.
For more information, you can check this blog post from Docker.
Here is the solution that I used.
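One commonly documented workaround (a sketch of the general approach, not necessarily the exact fix used here) is to authenticate to Docker Hub in the build's `pre_build` phase, with the credentials supplied to CodeBuild from a secrets store rather than hard-coded:

```yaml
# buildspec.yml fragment. DOCKERHUB_USER and DOCKERHUB_PASS are assumed to be
# injected into the build environment (e.g., from AWS Secrets Manager).
phases:
  pre_build:
    commands:
      - echo "$DOCKERHUB_PASS" | docker login --username "$DOCKERHUB_USER" --password-stdin
```

Authenticated pulls get a higher rate limit than anonymous ones, which is usually enough for CI builds.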
- Insufficient space on my build
I updated my Dockerfile to a specific Linux distribution version, and when I tried to build the image, it threw an error:
Docker error : no space left on device
docker system prune --all
Overall, here's the diagram with all the related services integrated with each other.
Here's my final output: Mythical Mysfits
Does it look familiar? Yes! It's a workshop made by Amazon Web Services (AWS); you can find the detailed information here. I added steps and integrated services to make the application more efficient and highly available.
You can reach me at: