
Mohamed Zahra for AWS MENA Community

Best Practices Design Patterns: Optimizing Amazon S3 Performance

Abstract
When building applications that upload and retrieve objects from Amazon S3, follow the AWS best practices guidelines to optimize performance. AWS also offers more detailed performance design patterns, which are covered later in this post.

Introduction
Amazon S3 automatically scales to high request rates. Your application can achieve at least 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix in a bucket. You can increase your read or write performance by parallelizing requests across prefixes. For example, if you create 10 prefixes in an Amazon S3 bucket and parallelize reads across them, you could scale your read performance to 55,000 reads per second.

Data lake applications on Amazon S3 scan many millions or billions of objects for queries that run over petabytes of data. These applications aggregate throughput across multiple instances to achieve multiple terabits per second, and they achieve single-instance transfer rates that maximize use of the network interface. Some applications can achieve consistent small-object latencies (and first-byte-out latencies for larger objects) of around 100–200 milliseconds.

Other AWS services can also help accelerate performance for different application architectures. For example, if you want higher transfer rates over a single HTTP connection or single-digit millisecond latencies, use Amazon CloudFront or Amazon ElastiCache.

The following topics describe best practice guidelines and design patterns for optimizing performance for applications that use Amazon S3. This guidance supersedes any previous guidance on optimizing performance for Amazon S3; in particular, you no longer have to randomize prefix naming to achieve high performance. If your workload uses server-side encryption with AWS Key Management Service (SSE-KMS), see AWS KMS Limits for information about the request rates supported for your use case.
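As a rough illustration of prefix parallelization, here is a minimal Python sketch (using boto3) that spreads object keys across ten hypothetical prefixes and reads them concurrently. The bucket name, prefix scheme, and keys are all illustrative assumptions, not part of any AWS API:

```python
# Sketch: spread reads across multiple key prefixes so the per-prefix
# request-rate limits (e.g., 5,500 GET/s) apply per prefix, not once
# per bucket. Bucket name, prefix scheme, and keys are assumptions.
from concurrent.futures import ThreadPoolExecutor
import hashlib

import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"   # assumption: your bucket name
NUM_PREFIXES = 10           # keys live under prefix-0/ ... prefix-9/

def prefixed_key(key: str) -> str:
    """Deterministically assign a key to one of NUM_PREFIXES prefixes."""
    slot = int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_PREFIXES
    return f"prefix-{slot}/{key}"

def fetch(key: str) -> bytes:
    return s3.get_object(Bucket=BUCKET, Key=prefixed_key(key))["Body"].read()

keys = [f"object-{i}" for i in range(100)]  # illustrative keys
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(fetch, keys))
```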

Performance Guidelines for Amazon S3
Measure Performance
When optimizing performance, look at network throughput, CPU, and Dynamic Random Access Memory (DRAM) requirements. Depending on the mix of demands for these different resources, it might be worth evaluating different Amazon EC2 instance types. It's also helpful to look at DNS lookup time, latency, and data transfer speed using HTTP analysis tools when measuring performance.
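For instance, here is a quick (and admittedly crude) way to check first-byte latency and download throughput from Python with boto3; the bucket and key names are assumptions:

```python
# Sketch: time a single S3 GET. The gap until get_object() returns
# approximates first-byte latency (response headers received); reading
# the body measures end-to-end throughput. Names are assumptions.
import time

import boto3

s3 = boto3.client("s3")

start = time.perf_counter()
resp = s3.get_object(Bucket="example-bucket", Key="example-object")
first_byte = time.perf_counter() - start  # response headers have arrived

body = resp["Body"].read()                # stream the rest of the object
total = time.perf_counter() - start

print(f"time to first byte: {first_byte * 1000:.1f} ms")
print(f"throughput: {len(body) / total / 1e6:.2f} MB/s")
```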
Scale Storage Connections Horizontally
Amazon S3 is a very large distributed system, not a single network endpoint like a traditional storage server. You can achieve the best performance by issuing multiple concurrent requests to Amazon S3. Spread these requests over separate connections to maximize the accessible bandwidth from Amazon S3.
Use Byte-Range Fetches
Using the Range HTTP header in a GET Object request, you can fetch a byte-range from an object, transferring only the specified portion. You can use concurrent connections to Amazon S3 to fetch different byte ranges from within the same object. This helps you achieve higher aggregate throughput versus a single whole-object request. Fetching smaller ranges also allows your application to improve retry times when requests are interrupted.
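Here is a minimal sketch of a single byte-range GET using the Range header (bucket and key are assumptions); a ranged request returns HTTP 206 Partial Content:

```python
# Sketch: fetch only the first 8 MiB of an object via an HTTP Range
# header. Bucket and key names are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

resp = s3.get_object(
    Bucket="example-bucket",
    Key="large-object.bin",
    Range="bytes=0-8388607",  # first 8 MiB, inclusive byte range
)
chunk = resp["Body"].read()
print(len(chunk))                                  # 8388608
print(resp["ResponseMetadata"]["HTTPStatusCode"])  # 206 Partial Content
```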
Retry Requests for Latency-Sensitive Applications
Aggressive timeouts and retries help drive consistent latency. Given the large scale of Amazon S3, if the first request is slow, a retried request is likely to take a different path and quickly succeed. The AWS SDKs have configurable timeout and retry values that you can tune to the tolerances of your specific application.
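With boto3, for example, timeouts and the retry behavior are configurable through botocore's Config object; the values below are illustrative starting points, not recommendations:

```python
# Sketch: tighten timeouts and enable adaptive retries in boto3.
# The exact values are illustrative; tune them to your application.
import boto3
from botocore.config import Config

config = Config(
    connect_timeout=1,             # seconds to establish a connection
    read_timeout=2,                # seconds to wait on a response read
    retries={
        "max_attempts": 5,         # total attempts, including the first
        "mode": "adaptive",        # client-side rate limiting + backoff
    },
)
s3 = boto3.client("s3", config=config)
```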
Combine Amazon S3 (Storage) and Amazon EC2 (Compute) in the Same AWS Region
Although S3 bucket names are globally unique, each bucket is stored in a Region that you select when you create the bucket. To optimize performance, we recommend that you access the bucket from Amazon EC2 instances in the same AWS Region when possible. This helps reduce network latency and data transfer costs.
Use Amazon S3 Transfer Acceleration to Minimize Latency Caused by Distance
Amazon S3 Transfer Acceleration manages fast, easy, and secure transfers of files over long geographic distances between the client and an S3 bucket. As the data arrives at an edge location, it is routed to Amazon S3 over an optimized network path. Transfer Acceleration is ideal for transferring gigabytes to terabytes of data regularly across continents. The Amazon S3 Transfer Acceleration Speed Comparison tool lets you compare upload speeds across Amazon S3 Regions. The tool uses multipart uploads to transfer a file from your browser to various Regions with and without Transfer Acceleration, so you can see how much time it takes to upload a file to each Region with each method.
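As a sketch, enabling Transfer Acceleration on a bucket and routing requests through the accelerate endpoint might look like this in boto3 (the bucket and file names are assumptions):

```python
# Sketch: enable Transfer Acceleration on a bucket, then send requests
# through the accelerate endpoint. Bucket/file names are assumptions.
import boto3
from botocore.config import Config

BUCKET = "example-bucket"

# One-time setup: enable acceleration on the bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Route subsequent requests via <bucket>.s3-accelerate.amazonaws.com.
s3_accel = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accel.upload_file("local-file.bin", BUCKET, "remote-key.bin")
```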
Use the Latest Version of the AWS SDKs
The AWS SDKs provide a simpler API for taking advantage of Amazon S3 from within an application. The SDKs include logic to automatically retry requests on HTTP 503 errors, and AWS is investing in code that responds and adapts to slow connections. The latest versions of the AWS SDKs have improved performance optimization features: the transfer manager automates horizontally scaling connections to achieve thousands of requests per second, using byte-range requests where appropriate. It's important to use the latest version of the SDKs to obtain the latest performance optimization features. You can also optimize performance when using HTTP REST API requests directly; when you do, follow the same best practices that are built into the SDKs: allow for timeouts and retries on slow requests, and use multiple connections to fetch object data in parallel.
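For example, boto3's transfer manager is configured through TransferConfig; the thresholds and concurrency below are illustrative:

```python
# Sketch: use boto3's managed transfers so large downloads are split
# into parallel byte-range requests. Values here are illustrative.
import boto3
from boto3.s3.transfer import TransferConfig

config = TransferConfig(
    multipart_threshold=16 * 1024 * 1024,  # ranged transfers above 16 MiB
    multipart_chunksize=16 * 1024 * 1024,  # 16 MiB parts
    max_concurrency=10,                    # parallel connections
)

s3 = boto3.client("s3")
s3.download_file("example-bucket", "large-object.bin",
                 "/tmp/large-object.bin", Config=config)
```

The same Config argument works for upload_file, where it controls multipart uploads rather than ranged GETs.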
Performance Design Patterns for Amazon S3
When designing applications to upload and retrieve objects from Amazon S3, use the following best practices design patterns for achieving the best performance for your application.
1. Using Caching for Frequently Accessed Content
If a workload is sending repeated GET requests for a common set of objects, you can use a cache such as Amazon CloudFront, Amazon ElastiCache, or AWS Elemental MediaStore to optimize performance.
Amazon CloudFront is a fast content delivery network (CDN) that transparently caches data from Amazon S3 in a large set of geographically distributed points of presence (PoPs).
Amazon ElastiCache is a managed, in-memory cache. With ElastiCache, you can provision Amazon EC2 instances that cache objects in memory.
AWS Elemental MediaStore is a caching and content distribution system specifically built for video workflows and media delivery from Amazon S3.
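As a hedged sketch of the ElastiCache pattern, a simple read-through cache in front of Amazon S3 could look like the following; the Redis endpoint, bucket name, and TTL are assumptions, and redis-py stands in for whatever client your application uses:

```python
# Sketch: read-through cache in front of S3 using Redis (e.g., an
# ElastiCache endpoint). Endpoint, bucket, and TTL are assumptions.
import boto3
import redis

s3 = boto3.client("s3")
cache = redis.Redis(host="my-cache.example.amazonaws.com", port=6379)

BUCKET = "example-bucket"
TTL_SECONDS = 300

def get_object_cached(key: str) -> bytes:
    cached = cache.get(key)
    if cached is not None:
        return cached                    # cache hit: skip S3 entirely
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    cache.setex(key, TTL_SECONDS, body)  # populate for later readers
    return body
```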
2. Timeouts and Retries for Latency-Sensitive Applications
• Amazon S3 automatically scales in response to sustained new request rates, dynamically optimizing performance.
• While Amazon S3 is internally optimizing for a new request rate, you will temporarily receive HTTP 503 responses until the optimization completes.
• After Amazon S3 internally optimizes performance for the new request rate, all requests are generally served without retries.
• For latency-sensitive applications, Amazon S3 advises tracking and aggressively retrying slower operations.
• When you retry a request, we recommend using a new connection to Amazon S3 and performing a fresh DNS lookup.
• If additional retries are needed, the best practice is to back off.
• If your application makes fixed-size requests to Amazon S3, you should expect more consistent response times for each of these requests.
• In this case, a simple strategy is to identify the slowest 1 percent of requests and to retry them, as shown in the sketch after this list.
• Even a single retry is frequently effective at reducing latency.
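Here is one way the retry-on-a-fresh-connection idea might look in boto3; the aggressive timeout values are assumptions you would tune from your own latency measurements. Creating a new client gives the retry its own connection pool, which forces a fresh connection and DNS lookup:

```python
# Sketch: give the first attempt an aggressive timeout; on a timeout,
# retry once on a brand-new client so the retry uses a fresh connection
# (and a fresh DNS lookup). Timeout values are illustrative assumptions.
import boto3
from botocore.config import Config
from botocore.exceptions import ConnectTimeoutError, ReadTimeoutError

AGGRESSIVE = Config(
    connect_timeout=1,
    read_timeout=1,
    retries={"max_attempts": 1, "mode": "standard"},  # no SDK retries here
)

def get_with_fresh_retry(bucket: str, key: str) -> bytes:
    first = boto3.client("s3", config=AGGRESSIVE)
    try:
        return first.get_object(Bucket=bucket, Key=key)["Body"].read()
    except (ConnectTimeoutError, ReadTimeoutError):
        # New client -> new connection pool, so the retry takes a
        # different path instead of reusing the slow connection.
        second = boto3.client("s3")
        return second.get_object(Bucket=bucket, Key=key)["Body"].read()
```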

3. Horizontal Scaling and Request Parallelization for High Throughput
Amazon S3 is a very large distributed system. To help you take advantage of its scale, we encourage you to horizontally scale parallel requests to the Amazon S3 service endpoints. For high-throughput transfers, Amazon S3 advises using applications that use multiple connections to GET or PUT data in parallel. For some applications, you can achieve parallel connections by launching multiple requests concurrently in different application threads or in different application instances. You can also use the AWS SDKs to issue GET and PUT requests directly, rather than relying on the SDKs' transfer management.
As a general rule, when you download large objects within a Region from Amazon S3 to Amazon EC2, we suggest making concurrent requests for byte ranges of an object at the granularity of 8–16 MB, with one concurrent request for each 85–90 MB/s of desired network throughput (see the sketch after this section). Measuring performance is important when you tune the number of requests to issue concurrently: measure the network bandwidth being achieved and the use of other resources that your application consumes while processing the data.
If your application issues requests directly to Amazon S3 using the REST API, we recommend using a pool of HTTP connections and reusing each connection for a series of requests. For information about using the REST API, see the Amazon S3 REST API Introduction. Finally, it's worth paying attention to DNS and double-checking that requests are being spread over a wide pool of Amazon S3 IP addresses. DNS queries for Amazon S3 cycle through a large list of IP endpoints. Network utility tools such as the netstat command line tool can show the IP addresses being used for communication with Amazon S3, and AWS provides guidelines for DNS configurations to use.
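Putting the guidance above together, here is a sketch that downloads a large object with parallel byte-range requests at 16 MiB granularity (the bucket, key, and worker count are assumptions):

```python
# Sketch: download a large object with parallel 16 MiB byte-range GETs.
# Bucket/key names and the worker count are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "example-bucket", "large-object.bin"
PART_SIZE = 16 * 1024 * 1024  # 16 MiB, within the advised 8-16 MB range

size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]
ranges = [(start, min(start + PART_SIZE, size) - 1)
          for start in range(0, size, PART_SIZE)]

def fetch_range(byte_range):
    start, end = byte_range
    resp = s3.get_object(Bucket=BUCKET, Key=KEY,
                         Range=f"bytes={start}-{end}")
    return resp["Body"].read()

# Rule of thumb: one concurrent request per ~85-90 MB/s of desired
# throughput; 8 workers here is just an example.
with ThreadPoolExecutor(max_workers=8) as pool:
    parts = list(pool.map(fetch_range, ranges))

data = b"".join(parts)
assert len(data) == size
```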
4. Using Amazon S3 Transfer Acceleration to Accelerate Geographically Disparate Data Transfers
Transfer Acceleration uses the globally distributed edge locations in CloudFront for data transport. The AWS edge network has points of presence in more than 50 locations, and it helps to accelerate data transfers into and out of Amazon S3. As the data arrives at an edge location, it is routed to Amazon S3 over an optimized network path.
In general, the farther away you are from an Amazon S3 Region, the higher the speed improvement you can expect from using Transfer Acceleration. You access the AWS edge locations through a separate Amazon S3 Transfer Acceleration endpoint.
The best way to test whether Transfer Acceleration helps client request performance is to use the Amazon S3 Transfer Acceleration Speed Comparison tool. Also, you are charged only for transfers where Amazon S3 Transfer Acceleration can potentially improve your upload performance.
