In this post, I will share the new features the AWS storage team shipped in 2022. Some were based on customer feedback, while others were part of AWS's consistent drive for performance.
Amazon EFS Elastic Throughput
Amazon EFS got even better with a new throughput option. On top of the existing throughput modes (Bursting Throughput and Provisioned Throughput) that we are already familiar with, we now have Elastic Throughput. It is suitable for spiky or unpredictable workloads whose performance requirements are tricky to predict, or for applications that drive throughput at 5% or less of the peak throughput on average (the average-to-peak ratio). Elastic Throughput can drive up to 3 GiBps for read operations and 1 GiBps for write operations per file system, in all AWS Regions.
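As a rough sketch of what this looks like in practice (the file system ID below is a placeholder), Elastic Throughput is just another value of the throughput-mode setting in the AWS CLI:

```shell
# Create a new file system that uses the Elastic Throughput mode
aws efs create-file-system \
    --throughput-mode elastic \
    --tags Key=Name,Value=spiky-workload-fs

# Or switch an existing file system over (fs-12345678 is a placeholder ID)
aws efs update-file-system \
    --file-system-id fs-12345678 \
    --throughput-mode elastic
```

Because it is just a mode change, you can move an existing file system to Elastic Throughput without recreating it or touching your mount targets.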
Lower latency for EFS.
EFS now delivers up to 60% lower read operation latency when working with frequently accessed data and metadata, and up to 40% lower write operation latency when working with small files (<64 KB).
For example, in a Region like N. Virginia, read latencies are as low as 0.25 milliseconds for frequently accessed data, and write latencies are as low as 1.6 milliseconds for EFS One Zone (and 2.7 milliseconds for EFS Standard).
Amazon File Cache now generally available
As a fully managed, scalable, high-speed cache, Amazon File Cache allows you to process file data stored in disparate locations, including on premises. File Cache accelerates and simplifies cloud bursting and hybrid workflows across media and entertainment, financial services, health and life sciences, microprocessor design, manufacturing, weather forecasting, and energy. This lets companies running hybrid infrastructure operate far more efficiently.
Amazon S3 Glacier retrieval time
Restore throughput on Amazon S3 Glacier has been improved by up to 10x when retrieving large volumes of archived data.
This improvement allows your applications to initiate restore requests from S3 Glacier at a much faster rate, significantly reducing the restore completion time for datasets composed of small objects. In addition, with S3 Batch Operations, you can now automatically initiate requests at a faster rate, allowing you to restore billions of objects containing petabytes of data with just a few clicks in the S3 console, or with a single API request.
The retrieval performance benefit scales with the number of restored objects and reduces data retrieval completion times by up to 90%.
Companies can now save money on storage costs by utilizing cold storage without taking a hit on retrieval speed.
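To make this concrete, here is a minimal restore sketch with the AWS CLI (bucket and key names are placeholders I made up for illustration):

```shell
# Restore a single archived object for 7 days using the Standard tier
aws s3api restore-object \
    --bucket my-archive-bucket \
    --key backups/2022/snapshot.tar \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'

# Poll the object's restore status to see when the copy is ready
aws s3api head-object \
    --bucket my-archive-bucket \
    --key backups/2022/snapshot.tar \
    --query Restore
```

For billions of objects you would not loop over `restore-object` yourself; that is exactly the case where an S3 Batch Operations restore job, fed by an inventory manifest, does the fan-out for you.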
S3 Access Points now support cross-account access
Amazon S3 Access Points simplify data access for any AWS service or customer application that stores data in S3 buckets. With S3 Access Points, you create unique access control policies for each access point to more easily control access to shared datasets. Now, bucket owners are able to authorize access via access points created in other accounts. In doing so, bucket owners always retain ultimate control over data access, but can delegate responsibility for more specific IAM-based access control decisions to the access point owner. This allows you to securely and easily share datasets with thousands of applications and users, and at no additional cost.
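As a sketch of the cross-account flow (all account IDs and names below are placeholders), the access point owner specifies which account owns the target bucket at creation time:

```shell
# In account 111111111111, create an access point that targets a bucket
# owned by a different account, 222222222222
aws s3control create-access-point \
    --account-id 111111111111 \
    --name shared-data-ap \
    --bucket amzn-shared-dataset \
    --bucket-account-id 222222222222
```

The bucket owner still has to grant the access point permission in the bucket policy, which is how they retain ultimate control while delegating the fine-grained IAM decisions to the access point owner.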
S3 Storage Lens now has 34 new metrics
Amazon S3 Storage Lens is a cloud storage analytics feature that delivers organization-wide visibility into object storage usage and activity. Now 34 additional metrics have been added to uncover deeper cost optimization opportunities, identify data protection best practices, and improve the performance of application workflows.
Amazon MSK Tiered Storage
Amazon Managed Streaming for Apache Kafka (MSK) now offers Tiered Storage, which brings a virtually unlimited and low-cost storage tier. Tiered Storage lets you store and process data using the same Kafka APIs and clients, while reducing your storage costs by 50% or more over existing MSK storage options.
Tiered Storage makes it easy and cost-effective to keep a longer safety buffer for handling unexpected processing delays, or to build new stream processing applications. It also makes it possible to scale your compute and storage independently, simplifying operations.
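A rough sketch of enabling this, assuming a cluster ARN, version string, topic name, and bootstrap address that are all placeholders: the cluster is switched to tiered storage mode via the MSK API, then individual topics opt in with standard Kafka configs.

```shell
# 1) Switch the cluster's storage mode to TIERED
aws kafka update-storage \
    --cluster-arn arn:aws:kafka:us-east-1:111111111111:cluster/demo/abc123 \
    --current-version K3AEGXETSR30VB \
    --storage-mode TIERED

# 2) Enable remote storage on a topic: keep ~1 hour on local broker disks
#    while retaining 7 days overall in the low-cost tier
bin/kafka-configs.sh --bootstrap-server "$BOOTSTRAP_SERVERS" \
    --alter --entity-type topics --entity-name clickstream \
    --add-config 'remote.storage.enable=true,local.retention.ms=3600000,retention.ms=604800000'
```

The split between `local.retention.ms` and `retention.ms` is what lets compute and storage scale independently: brokers only hold the hot tail of the log while the rest lives in the cheap tier.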
AWS Security Lake
AWS announced the preview release of Amazon Security Lake, a service that automatically centralizes an organization's security data from cloud and on-premises sources into a purpose-built data lake stored in your account.
Amazon Security Lake automates the central management of security data: it normalizes data from integrated AWS services and third-party services, manages the data lifecycle with customizable retention, and automates storage tiering.
Multi-Region Access Point failover controls
Amazon S3 Multi-Region Access Points failover controls let you shift S3 data access request traffic routed through an Amazon S3 Multi-Region Access Point to an alternate AWS Region within minutes to test and build highly available applications.
With S3 Multi-Region Access Points failover controls, you can operate S3 Multi-Region Access Points in an active-passive configuration: you designate an active AWS Region to serve all S3 requests, and a passive AWS Region that only receives traffic after it is made active during a planned or unplanned failover. This makes it easy to shift S3 data access request traffic from an active AWS Region to a passive AWS Region, typically within 2 minutes, to test application resiliency and perform disaster recovery simulations.
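As a sketch of what a failover looks like (account ID, Multi-Region Access Point ARN, and bucket names below are placeholders), you submit new routing weights that dial one Region to 100% and the other to 0%:

```shell
# Make the us-east-1 bucket active and the eu-west-1 bucket passive
aws s3control submit-multi-region-access-point-routes \
    --region us-west-2 \
    --account-id 111111111111 \
    --mrap arn:aws:s3::111111111111:accesspoint/example-alias.mrap \
    --route-updates Bucket=app-data-us-east-1,TrafficDialPercentage=100 \
                    Bucket=app-data-eu-west-1,TrafficDialPercentage=0
```

Because the dial values are just 100/0 in either direction, running a disaster recovery drill and rolling back afterwards is the same command with the percentages swapped.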
EBS rule lock for Recycle Bin
With EBS you can now set up a rule lock for Recycle Bin, locking your Region-level retention rules to prevent them from being unintentionally modified or deleted. This new setting adds an additional layer of protection for recovering EBS Snapshots and EC2 AMIs in case of inadvertent or malicious deletions.
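A minimal sketch with the Recycle Bin CLI (the retention values and rule identifier are placeholders for illustration): a rule can be created already locked, or an existing rule can be locked afterwards.

```shell
# Create a Region-level retention rule for deleted EBS snapshots and
# lock it so it cannot be changed without a 7-day unlock delay
aws rbin create-rule \
    --resource-type EBS_SNAPSHOT \
    --retention-period RetentionPeriodValue=14,RetentionPeriodUnit=DAYS \
    --description "Keep deleted snapshots recoverable for 14 days" \
    --lock-configuration 'UnlockDelay={UnlockDelayValue=7,UnlockDelayUnit=DAYS}'

# Or lock an existing rule (abcdef01234 is a placeholder rule ID)
aws rbin lock-rule \
    --identifier abcdef01234 \
    --lock-configuration 'UnlockDelay={UnlockDelayValue=7,UnlockDelayUnit=DAYS}'
```

The unlock delay is the point of the feature: even a compromised admin credential cannot instantly delete the rule and then the snapshots, because the lock takes days to release.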
I will be diving deeper into each of these releases next year. Until then, Happy New Year, and stay safe!