With the recent announcement of AWS EFS Elastic Throughput mode, I was curious to find out whether it's actually any better than the Bursting throughput mode, which I've used as the file storage for a few of my WordPress sites.
I was encountering a few hiccups here and there during WordPress version or plugin upgrades because of the way the Bursting throughput mode works. As long as your app is not doing any IO, the file system accumulates burst credits, which are then spent during periods of reads and writes, up to a certain limit. Once the burst credits are depleted, EFS read/write operations become painfully slow (at least in my experience).
The announcement of Elastic throughput mode promises that you no longer have to worry about the unpredictability of reads and writes, and that you get consistent performance from EFS without resorting to Provisioned throughput mode, which can get pretty expensive due to over-provisioning during prolonged periods of low IO activity.
What better way to evaluate and compare two options than creating a benchmark that does it programmatically for me? Here are the results.
Writing 10 files, 1KB each
Writing 10 files, 1MB each
Writing 10 files, 100MB each
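The benchmark code lives in the repo linked below, but to make the workload concrete, here is a minimal sketch of what each write test could look like. The directory argument, file naming, and the explicit fsync are my assumptions for illustration, not necessarily what the linked code does:

```python
import os
import time


def write_files(directory: str, count: int, size_bytes: int) -> float:
    """Write `count` files of `size_bytes` each into `directory` (e.g. the
    EFS mount path such as /mnt/efs) and return the elapsed seconds.

    Timestamps are taken immediately before and after the IO loop, so the
    measurement excludes payload generation and any surrounding code.
    """
    payload = os.urandom(size_bytes)  # random bytes, generated outside the timed section
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(directory, f"bench_{i}.bin"), "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force the write to the file system, not just the page cache
    return time.perf_counter() - start
```

Called as `write_files("/mnt/efs", 10, 100 * 1024 * 1024)` for the largest scenario, with the mount path being whatever the Lambda's file system configuration maps.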
Elastic throughput completed the write operations faster in all of the benchmarks above, compared to Bursting throughput mode.
For simple sporadic file writes there doesn't seem to be much of a difference, but Elastic throughput really starts to show its benefits at larger file sizes. Writing 10 files of 100MB each can easily save your app 5 seconds of waiting time; savings you can potentially pass on to your end users as improved user experience.
Long story short, I am definitely switching my existing EFS file systems to Elastic throughput mode after these results. The pricing is pretty much the same and there's nothing that stops me from doing the switch at this point.
Of course, don't take my word for it. Do your own due diligence and benchmarks before making a similar switch.
- The tests were done using two identical Lambda functions, each with an EFS file system attached
- The two Lambdas ran exactly the same code
- The numbers above are adjusted to exclude potential side effects like Lambda cold starts, network latency, and variability in any surrounding code inside the Lambda runtime. Timestamps are snapshotted immediately before and after the filesystem IO.
- Tests are repeated 5 times with a sleep of 10 seconds between runs, to give both EFS throughput modes plenty of time to pick up the pace and trigger any internal caching or warming mechanisms EFS might have. The results of all 5 tests are averaged to come up with the numbers in the benchmark.
- Code used to benchmark is available in a GitHub repo
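The repetition scheme described above can be sketched as a small harness. The run count, sleep duration, and the shape of the measured callback are taken from the bullet points; everything else is a hypothetical implementation, not the repo's actual code:

```python
import time


def average_elapsed(measure, runs: int = 5, sleep_seconds: float = 10.0) -> float:
    """Call `measure()` (which returns elapsed seconds for one benchmark run)
    `runs` times, sleeping between runs, and return the mean elapsed time."""
    samples = []
    for i in range(runs):
        samples.append(measure())
        if i < runs - 1:  # no need to sleep after the final run
            time.sleep(sleep_seconds)
    return sum(samples) / len(samples)
```

In the benchmark setting, `measure` would be a closure around the actual EFS write loop for a given file count and size.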
Hope you found this benchmark useful. Looking forward to reading your findings in the comments below. You can also catch me at my AWS CDK blog, where you can learn more about corner cases like this or find interesting AWS CDK constructs you can use for your app infrastructure.