You’ll think this article is clickbait, but it’s not. I’ve built a fully-functional static serverless Wordpress solution on AWS, with Global CDN, W...
Nice, I wrote something similar a few years back rehanvdm.com/serverless/serverless....
Except back then I wasn't big into IaC. The main difference is that I have the whole WordPress setup on an EC2 instance. I just start it when I'm writing a blog post, export the static content (also using WP2Static), and then shut it down again.
Then I deploy locally by extracting the static zip and running a few Gulp scripts for post-processing, like compressing images and a few HTML and IP replacements. Then I just do an S3 sync.
It's really interesting @rehanvdm because if the plugin (or a fork) could be customised for true AWS-native deployment, the publish step could be incredibly fast and scalable, and not rely on running within the Wordpress installation at all. I've discussed this a bit with @leonstafford and I really like the idea - I just don't have enough (any) experience in building Wordpress plugins so I can't do this myself!
I have limited experience with WP plugins and just don't have the capacity at the moment. If I were to do it over again now, I definitely wouldn't use Wordpress at all. Back then I didn't know what's involved in running your own blog: things like OpenGraph tags, social media sharing buttons, how to do "recommendation" posts based on the current one, and SEO-related configs.
When I do the next face-lift I will definitely move to a Vue + Markdown type of solution. Image compression is also an extremely CPU-intensive task that runs on my local machine, so I'll move that to a CI/CD pipeline running in the cloud.
Glad to see the Plugin is still active, great write up!
I'm using Smush for image optimization, which is pretty good, although sadly S3 doesn't appear to understand the MIME type of WebP by default, and the S3 add-on for WP2Static doesn't (yet) have an option to specify your own metadata based on file type. Still many, many optimizations left to exploit :)
Thanks for the mention, haven't really looked at doing image optimization on WP, might transition to that. Currently using npmjs.com/package/gulp-imagemin.
I will soon do something similar to this medium.com/nona-web/converting-ima... to do optimization on the "fly" for one of the internal company projects.
Oh, now you've got me thinking about a Lambda@Edge integration to rewrite any jpg/png requests to a generated WebP version; I could even fix the metadata in the response to guarantee it works! *adds to list*
That's exactly the plan :) Also if you just want to change metadata/headers look into the newly released CloudFront Functions, those are less expensive and a perfect use case for header rewrites etc.
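For the header-rewrite case, a CloudFront Function attached to the viewer-response event can be very small. A minimal sketch (the event shape follows the CloudFront Functions model; the `.webp` suffix check is illustrative):

```javascript
// Sketch of a CloudFront Function (viewer-response event) that fixes the
// Content-Type header for .webp objects when the origin (e.g. S3) served
// them with the wrong or missing MIME type.
function handler(event) {
  var request = event.request;
  var response = event.response;

  // If the requested object is a WebP image, force the correct MIME type.
  if (request.uri.endsWith('.webp')) {
    response.headers['content-type'] = { value: 'image/webp' };
  }
  return response;
}
```

You'd associate this with the distribution's cache behavior as a viewer-response function. For rewriting the request itself (e.g. serving WebP only to browsers that send `Accept: image/webp`), a viewer-request function or Lambda@Edge would be needed instead.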
I've gone through multiple iterations of my personal site over the past 19 years and found there isn't any cheaper option to run PHP-based Wordpress than the cheapest VPS you can find for a couple of dollars a month. This static site approach is interesting, so I'm going to take a closer look. However, seeing WAF in your diagram, and knowing 1x WAF ACL is $5/month and 1x WAF Rule is $1/month, you already lost me. Reading through your longer article and seeing "WAF has fixed costs which completely break the '$0.01 a day' clickbait", yeah, you got me, I fell for your clickbait.
Sorry @kevinhooke - there's only so much you can optimize and if your use case requires a WAF (it's optional), there's no clever way around it aside from mitigating costs with a CloudFront Security Savings bundle. Consider though that there's not that much a WAF offers for a static site except perhaps DDoS mitigation from known bad IPs.
Hand on heart, one of my sites with this set-up (minus WAF) does indeed cost $0.01 a day.
I'm still interested in the approach, so will still be taking a look. $0.01 is still less than my current $4/month VPS, although not by that much.
Haha, I feel your pain! I've looked several times at how to offer free or really cheap hosting to people. Best I can do today is lokl.dev - as you mentioned it's a personal site, it may fit your workflow OK. Else, would love to hear why not, maybe can adjust it.
I'm currently looking to migrate a few (<10) Wordpress sites off my VM in Azure to static hosting, so I can turn off the VM entirely and save ~$20/month. My present plan is to use Lokl to host each site instance in a container (thus not polluting my local machine with WP stuff), and export static pages via the wordpress.org/plugins/export-wp-pa... plug-in. Testing so far indicates this is workable, and I will be inflicting it on the nice person who owns the sites to run on their Linux PC, leaving me with only cloud storage/CDN costs :)
Future plans involve dumping WP entirely and moving to Hugo but this first step provides the cost savings for me!
Good stuff @phlash909 - Lokl is another tool by Leon Stafford, the guy who made the WP2Static plugin bundled here. The only sticky bit in the plugin you mentioned is that terribly manual step where you have to 'download html files' and then do something useful with them! This is why I regarded a native S3 integration as so important for this one. I want to do almost nothing to make it publish!
True, publishing (in my case to Azure storage) might need me to write some glue scripts and poke the WP API, unless anyone knows of a WP->Static plug-in that supports Azure?
Hi Phil,
Great to hear of Lokl being used!
My next plans for Lokl (after a little Xdebug profiling support) are adding wizard options to easily set up and deploy sites to Azure/GAE/Cloudflare, etc., using their CLI tools. These tend to have better performance and save me having to write/maintain custom code to work with their APIs.
Feel free to message me more about the Azure needs - the more I hear about it, the more front of mind it is for me to work on :)
Detailed blog. 👍🏻
I'm always getting a "standard_init_linux.go:219: exec user process caused: no such file or directory" when trying to launch the ECS task.
I had to run the docker push and the CodeBuild job manually.
Do you know how I can fix it?
Task configuration: dev-to-uploads.s3.amazonaws.com/up...
Hi Federico!
I'm researching the problems people have faced with AWS and I'd like to have a short talk with you about your case (I suppose you tried this solution because you have something in mind about static WP or AWS automation).
PS: I'm new here and I didn't find any personal messaging, so sorry for the off-topic.
Hey Ruslan! I've added you on LinkedIn, let's coordinate there.
Can you confirm the CodeBuild job completed successfully? The initial push of the Docker base image and triggering the build are the two manual steps if you don't use the helper examples.
The CodeBuild job completed successfully on the first try and generated "wordpress_docker.zip" (3.8 MB) in the build S3 bucket.
The command ran was:
aws codebuild start-build --project-name viajesconanto-serverless-wordpress-docker-build --profile default --region us-east-2
So wordpress_docker.zip merely contains the assets for the build job; CodeBuild uses it to build the container and push it into ECR within the same region. You should see two containers there: one tagged 'base' (the uncustomized Docker image) and one tagged 'latest', which is the result of the CodeBuild job. With both present you should be able to launch the Wordpress container.
If you are still experiencing issues, please feel free to raise an issue on GitHub with some more information about your set-up and I can investigate.
Yeah, I'm seeing both. The base one was generated about 5 minutes before the latest one: dev-to-uploads.s3.amazonaws.com/up...
I was also able to verify that the task is running the latest image (241712483418.dkr.ecr.us-east-2.amazonaws.com/viajesconanto-serverless-wordpress:latest).
So my guess is that the image is throwing the "no such file or directory" error.
I will report this problem (the major one that doesn't let me continue). I've also found some other minor ones.
Thank you, your feedback to track as an issue would be welcomed!
Thanks for sharing! I just recently moved my (small!) 3 WP sites hosting from a small Azure VM (leveraging AIO WP+SSL+Nginx reverse proxy thanks to github.com/selloween/docker-multi-...) back to my NAS (Synology) on Docker to save cost.
I looked at static WP before, but my sites were not looking exactly the same. I have not tried WP2Static, which may be the way to go!
Going forward, I will look at keeping the wp-admin piece on my NAS, and moving the site to S3/CDN.
I already do something like this for my Gatsby static sites. I left WP alone years ago after developing with it for 9 years on GoDaddy services.
I'd definitely try a setup like this if ever touching WP again.
I would choose Jekyll, Hugo, or Hexo for a personal tech blog, but if one day I need WP I will try your solution! Bookmarked!
This is great, so thank you for this. I'm assuming this won't work for any sites containing sign-up forms or with comments enabled, correct? Being serverless, no forms would work, right?
If you need comments, and your target audience is devs (well, GitHub users), then I can only recommend giscus, which leverages GitHub Discussions for comments: giscus.vercel.app/
Here is how it looks: techdebtburndown.com/episode8/
@capdragon Indeed, not at the moment, although I recognise a form mailer will be essential for many. I'm looking to integrate something like this: github.com/DJAndries/terraform-aws... along with a bundled plugin that'll let you handle form posts as you'd usually expect.
For my own sites, I'm using a Facebook comment plugin, but again I think I could rustle up an acceptable alternative to this in time.
@capdragon, here are some options; there are more recent ones that also look good, not yet listed there:
tnd.dev/?tnd-dev%5BrefinementList%...
Great work!
So do you have to press a button in the AWS console to get your ECS instance running? Or does it automatically spin up an ECS instance when you navigate to the WP url?
Having multiple content editors have to ping me to turn on the instance or teach them to use the AWS console to turn on the instance themselves is far from ideal.
Very very good article.
I wonder if you pay for the database with this setup?
Yes you do, but it's serverless, so you only pay for the time it's in use to make edits and publish.
Thanks Pete for your reply.
Will the changes happen on the fly, or do we have to invalidate the CloudFront cache first?
The CloudFront cache settings will be respected, so depending on the cache-control expiry you set in the S3 add-on, results will continue to be cached in CloudFront for that amount of time. The default on the distribution is 7 days if no other values are set.
You can speed this up with a CloudFront invalidation, but that must be used with caution.
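For reference, invalidations can also be issued programmatically; the first 1,000 invalidation paths per month are free and each path after that costs $0.005, which is one reason for the caution. A minimal sketch assuming the AWS SDK for JavaScript v3 (`buildInvalidationParams` is a hypothetical helper, not part of any library):

```javascript
// Hypothetical helper: assembles the parameters for CloudFront's
// CreateInvalidation API call from a distribution ID and a list of paths.
function buildInvalidationParams(distributionId, paths, callerReference) {
  return {
    DistributionId: distributionId,
    InvalidationBatch: {
      CallerReference: callerReference, // must be unique per invalidation request
      Paths: {
        Quantity: paths.length,
        Items: paths, // e.g. ['/index.html'] or ['/*'] to flush everything
      },
    },
  };
}

// Usage with @aws-sdk/client-cloudfront (not executed here):
// const { CloudFrontClient, CreateInvalidationCommand } = require('@aws-sdk/client-cloudfront');
// await new CloudFrontClient({}).send(
//   new CreateInvalidationCommand(
//     buildInvalidationParams('EDFDVBD6EXAMPLE', ['/*'], Date.now().toString())
//   )
// );
```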
Got it. Thank you so much :)
Sorry about that, I've fixed the typo. The module repo has all the instructions, but in short: there's a 'launch' attribute on the module that you toggle to 1, then plan/apply; this launches the container. When you're done, you set it back to 0 and plan/apply again.
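As a rough sketch of that workflow (the module source path and the other attributes here are placeholders, not the module's real interface beyond `launch`):

```hcl
module "serverless_wordpress" {
  source = "./modules/serverless-wordpress" # placeholder path

  # ...the module's other required variables go here...

  launch = 1 # 1 = run the Wordpress container for editing/publishing; 0 = stopped
}
```

After editing and publishing, set `launch = 0` and plan/apply again so you stop paying for the running container.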
Hi, great post and an even greater tf module. The only thing I can't find info on is what we're supposed to do the first time we run it. I added the extra modules for the ECR and CodeBuild stuff, but the module complains it can't find wordpress_docker.zip, which obviously isn't there. Is there anything I should do manually to create that file on the first run?
Thank you for sharing this Pete, this was very useful! Can't wait for the future improvements. 😀
Well done with this Pete, looking forward to trying it.
URL typo, third mention of the long version (in the paragraph mentioning Leon Stafford), you typed https:// as httsp//
It's breaking here:
╷
│ Error: error creating public access block policy for S3 bucket (tech.ayzom.com): OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
│ status code: 409, request id: TEQBA7CHAF3XPEGD, host id: P6nl4kcOmTs6L0fmqN9zzMXBtv8Dr2k63HZWaUoaPOwk7B3sms0XKhPeGK9KYf9NLsrYvkdZol4=
│
│ with module.cloudfront.aws_s3_bucket_public_access_block.wordpress_bucket,
│ on modules\cloudfront\distribution.tf line 16, in resource "aws_s3_bucket_public_access_block" "wordpress_bucket":
│ 16: resource "aws_s3_bucket_public_access_block" "wordpress_bucket" {
│
╵
╷
│ Error: Creating CloudWatch Log Group failed: OperationAbortedException: A conflicting operation is currently in progress against this resource. Please try again. '/aws/lambda/us-east-1.ayzom_redirect_index_html'
│
│ with module.cloudfront.aws_cloudwatch_log_group.object_redirect_ue1,
│ on modules\cloudfront\main.tf line 21, in resource "aws_cloudwatch_log_group" "object_redirect_ue1":
│ 21: resource "aws_cloudwatch_log_group" "object_redirect_ue1" {
│
This looks related to AWS trying to recreate resources that have just been deleted: aws.amazon.com/premiumsupport/know...
I would say wait a few moments and try again to resolve.
Fantastic! I have a website hosted on a VPS (2 CPUs / 2 cores, Intel Xeon, 2GB RAM), using 23GB of the 250GB available, with 10GB/s network bandwidth, for £13/month on a monthly rolling contract. I thought about EC2, but I'd have to reserve an instance for 1 year to get the best value from it.
Then I thought about serverless, but I had little experience with Lambda. Now, I have been learning AWS through ACloudGuru, and I am definitely going to try this method. Cheers!
Hey Pete,
Thanks for this, I definitely gave it a shot.
After a while I encountered a problem:
Error: Creating CloudWatch Log Group failed: OperationAbortedException: A conflicting operation is currently in progress against this resource. Please try again. '/aws/lambda/us-east-1.inexhaustiblelifestatictryouts_redirect_index_html'
│
│ with module.cloudfront.aws_cloudwatch_log_group.object_redirect_ue1_local,
│ on modules/cloudfront/main.tf line 14, in resource "aws_cloudwatch_log_group" "object_redirect_ue1_local":
│ 14: resource "aws_cloudwatch_log_group" "object_redirect_ue1_local" {
Now, this is after a clean run of terraform destroy and hand-checking in the AWS console that everything is gone.
I think this error is connected to the Terraform version. Is there a proven version your code runs on?
Currently I have latest
Terraform v1.1.9
on linux_amd64
Hi @droshow - this is a known issue. I think it's a race condition when trying to manage the log group to set a retention policy. If you retry again it should work.
Went through a lot of issues here, and now it seems like my task would finally start running, but then it gets killed. The only thing I have in CloudWatch is:
Provided region_name 'inex.life.s3-website-us-east-1.ama...' doesn't match a supported format.
I thought this was a 403 object-accessibility issue, but I solved that. Now I'm really not sure what could be wrong with the format CloudWatch complains about.
Does anybody know what it could be? Thanks
Hey @droshow - if you can open an issue on the Github issue page along with the code of how you're instantiating the module, I can give you a steer: github.com/TechToSpeech/terraform-...
Hi Pete! Do you think a similar serverless solution could be built for Moodle?
Wonderful!