Recap of the Re:Invent 2020 S3 announcements

Hi there, I'm Dave. I'm an AWS Community Hero, Cloud Engineer, and have built a career focused on data storage and protection.

In my last post I talked about the io2 Block Express preview announcement. This week I thought it would be good to recap the list of S3 announcements that have been made thus far during the conference.

Strong Read-After-Write Consistency

At the top of the list is S3 consistency. S3 has always provided read-after-write consistency for first-time PUTs of new objects, but was only eventually consistent for overwrite PUTs and DELETEs. This meant there was a delay between when an update or delete operation was performed and when all clients saw that change take effect.

Last week, AWS introduced strong read-after-write consistency for all S3 GET, PUT, and LIST operations, as well as for changes to object tags, ACLs, and metadata. For these operations, consumers of the data now see updates immediately, as soon as they are made.
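
What this means in practice: a successful write is immediately visible to every subsequent read, with no retry loop needed. Here's a minimal sketch using boto3 (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder

# Overwrite an existing object...
s3.put_object(Bucket=bucket, Key="report.csv", Body=b"v2 of the data")

# ...and the very next read is guaranteed to return the new bytes.
# Before this change, the GET below could have returned stale data.
body = s3.get_object(Bucket=bucket, Key="report.csv")["Body"].read()
assert body == b"v2 of the data"
```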

Hats off to the S3 team for rolling out this update to every S3 bucket and object without impacting users. A fantastic feat!

S3 Replication Supports Multiple Destinations

When first launched in 2015, S3 Replication supported cross-region replication. In 2019, AWS added same-region replication. Now, S3 Replication supports multiple destinations!

There are many use cases for S3 Replication, including backup resiliency, malware defense, disaster recovery, and supporting development efforts. With multi-destination support, your S3 data can be replicated simultaneously to an alternate region for DR and to alternate accounts for your other use cases. Learn more in the S3 Replication docs.
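
As a rough sketch of how this looks, you can now attach multiple rules to one source bucket, each with its own destination. The bucket names, account ID, and IAM role ARN below are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Two rules on one source bucket, each pointing at a different destination:
# one copy to another region for DR, one to a dev account.
s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
        "Rules": [
            {
                "ID": "dr-copy",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "DeleteMarkerReplication": {"Status": "Enabled"},
                "Destination": {"Bucket": "arn:aws:s3:::dr-bucket-us-west-2"},
            },
            {
                "ID": "dev-copy",
                "Priority": 2,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::dev-account-bucket"},
            },
        ],
    },
)
```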

S3 Replication Supports Two-way Replication

Perfect for multi-region workloads that need access to the same data in more than one region, you can now create replication rules that sync both data and metadata bidirectionally between a pair of S3 buckets.
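
Here's a minimal sketch of one half of such a setup, assuming placeholder bucket names and role ARN: bucket-a replicates to bucket-b, and a mirror-image rule on bucket-b pointing back at bucket-a completes the pair. Enabling ReplicaModifications keeps metadata changes made to replicas flowing in both directions:

```python
import boto3

s3 = boto3.client("s3")

# One half of a bidirectional pair: bucket-a -> bucket-b.
# Apply the mirror-image configuration on bucket-b to complete it.
s3.put_bucket_replication(
    Bucket="bucket-a",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
        "Rules": [
            {
                "ID": "a-to-b",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "DeleteMarkerReplication": {"Status": "Enabled"},
                # Replicate metadata changes made to replica objects, too.
                "SourceSelectionCriteria": {
                    "ReplicaModifications": {"Status": "Enabled"}
                },
                "Destination": {"Bucket": "arn:aws:s3:::bucket-b"},
            }
        ],
    },
)
```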

S3 Bucket Keys

When encrypting data in S3 with SSE-KMS, each encrypted object gets its own data key, so accessing large numbers of encrypted objects creates a similarly large volume of requests to the KMS service. With S3 Bucket Keys, KMS issues a bucket-level key that S3 then uses to create the per-object data keys, greatly reducing the traffic from S3 to KMS. Since KMS requests carry per-request charges, this can create real cost savings for high-volume S3 workloads.
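
Bucket Keys are enabled as part of a bucket's default encryption settings. A sketch, with a placeholder bucket name and KMS key ARN:

```python
import boto3

s3 = boto3.client("s3")

# Make SSE-KMS with an S3 Bucket Key the default encryption for the bucket.
s3.put_bucket_encryption(
    Bucket="my-example-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
                },
                # Derive per-object data keys from a bucket-level key
                # instead of calling KMS for every object.
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```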

Other S3 Releases Announced Before Re:Invent

S3 Intelligent-Tiering Supports Archive

S3 Intelligent-Tiering eliminates the need to manage complicated sets of S3 lifecycle transition rules; instead, it monitors access patterns and automatically moves your data to the most cost-effective storage tier.

With this announcement, S3 Intelligent-Tiering now supports moving data to an Archive Access tier (same performance as S3 Glacier) and a Deep Archive Access tier (same performance as S3 Glacier Deep Archive). If you haven't built solid patterns for automating data lifecycle in S3, I would strongly encourage you to take a look at S3 Intelligent-Tiering.
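
The archive tiers are opt-in per bucket. Here's a sketch of enabling them for objects under a prefix, with placeholder names:

```python
import boto3

s3 = boto3.client("s3")

# Objects under "logs/" not accessed for 90 days move to Archive Access;
# after 180 consecutive days without access, to Deep Archive Access.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket="my-example-bucket",
    Id="archive-old-logs",
    IntelligentTieringConfiguration={
        "Id": "archive-old-logs",
        "Status": "Enabled",
        "Filter": {"Prefix": "logs/"},
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```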

Amazon S3 Storage Lens

It's impossible to be an effective Storage Engineer without having access to management tools and dashboards to visualize and report on how storage is performing and how it's being consumed. S3 Storage Lens starts solving that problem for Storage Engineers working in the cloud!

Since its launch last month, I've had great success using Storage Lens to untangle storage consumption mysteries reported by the teammate responsible for managing our AWS spend. Working together with the data we pulled from Storage Lens, we were able to make a simple configuration change that reduced our S3 spend by 80% in our most critical AWS account. Storage Lens is available today - give it a look!

Wrapping Up

What has been your favorite S3 release from the last month? What is going to be the most useful S3 release for your organization? What do you want the S3 team to build next? I'd love to continue the conversation in the discussion section below.

-Dave
