Picture this: you and your friend have launched your SaaS application and the entire globe is rushing to your platform. The application is written in bleeding-edge technology, Next JS, and hosted on AWS Amplify in the Europe region. After a couple of days, your inbox is flooded with angry emails from customers saying the website takes a long time to load. As a young CTO, challenges keep rising, and you're scratching your head, not knowing how to improve the performance.
Behold: Edge locations
You then stumble upon something called "edge locations". What are they? Edge locations are a global network of data centres strategically placed around the world, designed to bring content geographically closer to your end users. They reduce latency and improve the overall performance of content delivery.
Each edge location serves as a caching endpoint for a content delivery network (CDN). In our case, the website may be hosted in the Europe region, but if someone from Japan accesses it, the initial load takes a bit of time; after that, the web page or file is cached on the edge location closest to the user. The next time a user from that region wants your website, the request doesn't have to travel all the way across the world to Europe: the content is served from that closest edge location, ready to boost your sales.
So how are we going to achieve this? Simple. We will create an Amazon S3 bucket, sync our Next JS static files into it, and let CloudFront do its thing serving them from the edge.
Let's dive right into it.
Next JS Setup
First we'll create a working Next JS app with a few pages, so we'll create a new directory using the next-app template.
yarn create next-app nextjs-s3-cloudfront
Select the options you want for the Next JS application; I'll leave everything as default and use the App Router rather than the Pages Router.
Wait a couple of minutes and you've got your tiny little working Next JS application. So we'll go ahead and make some changes in app/page.tsx. This is what our app/page.tsx will now look like.
import Image from "next/image";

export default function Home() {
  return (
    <main className="flex min-h-screen flex-col items-center justify-between p-24">
      <div className="lg:flex"></div>
      <div className="flex place-items-center before:absolute before:h-[300px] before:w-[480px] before:-translate-x-1/2 before:rounded-full before:bg-gradient-radial before:from-white before:to-transparent before:blur-2xl before:content-[''] after:absolute after:-z-20 after:h-[180px] after:w-[240px] after:translate-x-1/3 after:bg-gradient-conic after:from-sky-200 after:via-blue-200 after:blur-2xl after:content-[''] before:dark:bg-gradient-to-br before:dark:from-transparent before:dark:to-blue-700 before:dark:opacity-10 after:dark:from-sky-900 after:dark:via-[#0141ff] after:dark:opacity-40 before:lg:h-[360px] z-[-1]">
        <p className="text-4xl">Live long and prosper!</p>
      </div>
      <div className="mb-32 grid text-center lg:mb-0 lg:grid-cols-4 lg:text-left"></div>
    </main>
  );
}
And then let's head into the next.config.js file to configure the Next JS build output type to be export. This is now what our next.config.js should look like.
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: "export",
};

module.exports = nextConfig;
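One thing worth noting: the default next/image optimizer isn't available with output: "export", so if your pages actually render <Image> components, the export will error out. A sketch of the extra setting you'd typically add in that case (not strictly needed here, since the page above only imports Image without rendering it):

/** @type {import('next').NextConfig} */
const nextConfig = {
  output: "export",
  // next/image can't use the default optimizer in a static export,
  // so serve the image files as-is
  images: {
    unoptimized: true,
  },
};

module.exports = nextConfig;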
Let's build our Next JS application.
yarn run build
You'll notice a new folder appear called out, and if you open it, you will see a bunch of HTML files and _next static assets. These will come in handy when we transfer them to S3 later.
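For reference, the export directory for a freshly scaffolded app typically contains something along these lines (the exact contents vary with your pages and Next JS version):

$ ls out/
404.html  favicon.ico  index.html  next.svg  vercel.svg  _next/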
AWS Resources Setup
We will then create a new terraform directory right inside the application to avoid having to set up a mono-repo or a separate project. In an actual working environment, the ideal approach would be to keep the Terraform resources in their own repository.
mkdir terraform && cd terraform
And as usual, we will need 4 main files:
- providers.tf - to configure Terraform providers
- main.tf - to provision resources
- variables.tf - to declare variables used inside the Terraform files
- outputs.tf - to get the URL of the CloudFront distribution (or any other properties that we want to check)
touch providers.tf main.tf variables.tf outputs.tf
We will need to grab the AWS provider from the Terraform registry in the providers.tf file, along with your AWS Access Key and Secret Key (read more on how to generate these keys here). Since I'm using AWS IAM Identity Center with SSO login, I won't be hard-coding the Access Key and Secret Key, but I'll leave the config as it is. You will need to create a main.tfvars file to supply these values.
# providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.8.0"
    }
  }
}

provider "aws" {
  region     = var.aws_region
  access_key = var.access_key
  secret_key = var.secret_key
}

# variables.tf
variable "aws_region" {
  type        = string
  description = "AWS Region"
  default     = "eu-west-1"
}

variable "secret_key" {
  type        = string
  description = "AWS Secret Key"
}

variable "access_key" {
  type        = string
  description = "AWS Access Key"
}
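Since secret_key and access_key have no defaults, a minimal main.tfvars could look like the sketch below; the values are placeholders, so swap in your own keys (or skip this file entirely if you authenticate through SSO as I do):

# main.tfvars (placeholder values)
aws_region = "eu-west-1"
access_key = "AKIAXXXXXXXXXXXXXXXX"
secret_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

You would then pass it to Terraform with the -var-file flag, e.g. terraform plan -var-file=main.tfvars.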
Now we will be using the Terraform AWS S3 module and CloudFront module to provision our resources. The architecture here is to create an S3 bucket, put our Next JS static files in it, and then use CloudFront to actually serve the content on the edge! We will make use of a CloudFront Origin Access Identity (OAI) and a bucket policy so that users can only reach the content through the CloudFront URL, not the S3 URL directly. Here's what our main.tf file should look like now.
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.14.0"
bucket = "my-crazy-good-nextjs-bucket"
}
module "cloudfront" {
source = "terraform-aws-modules/cloudfront/aws"
version = "3.2.1"
is_ipv6_enabled = true
enabled = true
price_class = "PriceClass_All"
retain_on_delete = false
wait_for_deployment = false
create_origin_access_identity = true
origin_access_identities = {
"oai-nextjs" = "cloudfront s3 oai for nextjs website"
}
origin = {
s3 = {
domain_name = module.s3_bucket.s3_bucket_bucket_regional_domain_name
s3_origin_config = {
origin_access_identity = "oai-nextjs" # key from origin_access_identities map
}
}
}
default_cache_behavior = {
target_origin_id = "s3" # key from origin map
allowed_methods = ["GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD", "OPTIONS"]
viewer_protocol_policy = "redirect-to-https"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
compress = true
}
custom_error_response = [
{
error_code = 403
response_code = 403
response_page_path = "/index.html"
}
]
default_root_object = "index.html"
}
data "aws_iam_policy_document" "s3_policy" {
version = "2012-10-17"
statement {
sid = "1"
effect = "Allow"
actions = ["s3:GetObject"]
resources = ["${module.s3_bucket.s3_bucket_arn}/*"]
principals {
type = "AWS"
identifiers = module.cloudfront.cloudfront_origin_access_identity_iam_arns
}
}
}
resource "aws_s3_bucket_policy" "s3_policy" {
bucket = module.s3_bucket.s3_bucket_id
policy = data.aws_iam_policy_document.s3_policy.json
}
Let's go through them.
We have 2 modules, each using the registry module specified above. We give our S3 bucket the name my-crazy-good-nextjs-bucket. For the CloudFront module, we enable an OAI so that users can only access the S3 website content through our CloudFront URL (read more about Origin Access Identities, OAIs, here). For the cache behaviours, we cache all GET requests to our website on an edge location and only allow HTTPS.
There's also a data block, which is the IAM policy document for our S3 bucket. The reason we need to attach this policy is that it's not a good idea to have our bucket publicly accessible from the internet; we only want to allow access from the ARN of the CloudFront OAI.
We also want to see some details of the resources we provision after running terraform apply, so let's create an outputs.tf file to grab some values from the provisioning.
# outputs.tf
output "s3" {
  description = "S3 module outputs"
  value = {
    bucket_id = module.s3_bucket.s3_bucket_id
  }
}

output "cloudfront" {
  description = "Cloudfront module outputs"
  value = {
    distribution_id = module.cloudfront.cloudfront_distribution_id
    domain          = module.cloudfront.cloudfront_distribution_domain_name
  }
}
What we're doing in this file is pretty much letting Terraform know that we want these particular values to be shown in the CLI after the resources have been provisioned.
That's pretty much it! Let's run terraform init and terraform plan. The plan will show us a bunch of resources that Terraform will create. Normally this plan should be reviewed with other team members to finalise the changes, but since it's only a small business, let's go ahead and run terraform apply. The full sequence is summarised below.
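Roughly, the commands look like this (the -var-file flag is only needed if you created a main.tfvars for the access keys; drop it if you rely on SSO):

terraform init
terraform plan -var-file=main.tfvars
terraform apply -var-file=main.tfvars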
After waiting for a couple of seconds, the resources will be provisioned and it will show us these output values.
Apply complete! Resources: 13 added, 0 changed, 0 destroyed.

Outputs:

cloudfront = {
  "arn" = "arn:aws:cloudfront::389144622841:distribution/E3QS5X2RJNINOF"
  "distribution_id" = "E3QS5X2RJNINOF"
  "domain" = "d11r27a15bgorv.cloudfront.net"
}
s3 = {
  "bucket_id" = "my-crazy-good-nextjs-bucket"
}
We will need 2 things: the S3 bucket name and the CloudFront distribution ID. You'll also see a CloudFront domain in the outputs, but if you visit it now, you will see nothing but an error page. This is because we have only created the resources; we haven't moved the data from our Next JS app to our S3 bucket yet.
So for that, let's go back to our Next JS app by running cd ... We will use the AWS CLI to copy the static HTML pages from the Next JS out/ directory to our S3 bucket by running
aws s3 sync out/ s3://my-crazy-good-nextjs-bucket
It will copy all the content inside the Next JS export directory out to our newly created S3 bucket. Now that we've got the required static pages in place, we will invalidate the CloudFront cache so it picks up the new content by running
aws cloudfront create-invalidation --distribution-id E3QS5X2RJNINOF --paths "/*"
Note: paste the distribution ID from the outputs as the value of --distribution-id.
Now if we visit our CloudFront domain again, we'll see our blazing fast Next JS website served from the edge.
That's it for this blog! All the code can be found here on my GitHub!
Improvements
Of course, there are ways that we can improve this deployment further.
- We can deploy on our own custom domain name. For that, we would add the domain as an alias on the CloudFront distribution, point our DNS record (e.g. www.example.com) at the distribution, and have a certificate for the domain in ACM.
- Setting up a CI/CD pipeline to automate the static file transfer to S3 and invalidate the CloudFront cache every time we push to our VCS could also help if we want to automate the process; a rough sketch of such a script follows below.
- We can even integrate this application with the custom CRM we built in another blog post and have our customers reach out to us.
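For the CI/CD idea, here is a minimal sketch of what the pipeline could run on every push; the bucket name and distribution ID are the ones from this post, and a real pipeline would pull them from Terraform outputs or secrets rather than hard-coding them:

#!/usr/bin/env bash
set -euo pipefail

# Build the static export
yarn install --frozen-lockfile
yarn run build

# Sync the exported files to the S3 bucket, removing objects that no longer exist locally
aws s3 sync out/ s3://my-crazy-good-nextjs-bucket --delete

# Invalidate the CloudFront cache so the edge locations fetch the new content
aws cloudfront create-invalidation \
  --distribution-id E3QS5X2RJNINOF \
  --paths "/*"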
That's it for now and I hope to see you in the next one! Ciao!
Top comments (5)
Hey, great post, but there is a problem you are not taking into account: the SPA redirection when the path doesn't match an S3 object key. Please update this if you have figured it out :)
Great catch, I didn't notice this, and I've been a little inactive on the platform.
Don't worry, very nice post anyway. Thank you.
What about SSR?
SSR is a Next JS specific feature. I only wanted to touch on the AWS side of content delivery for this one, but it's a good question. Apologies for the late reply, life got in the way :D