Setting Cache-Control header for all Amazon S3 objects

Bruno Fernandes ・ 1 min read

This week, while optimizing a website for performance, one of the issues that came up was that cache headers were not set for assets served through CloudFront.

CloudFront does not let you set Cache-Control headers directly, but it will use the origin cache headers pulled from Amazon S3. Most assets uploaded to S3 over the last three years did not have these headers set 😱, and the Amazon S3 console only lets you add headers to one object at a time, so I had to use the AWS CLI to add headers to all objects in one go.

Set Cache-Control header for all objects within a bucket

aws s3 cp s3://bucket-name/ s3://bucket-name/ --metadata-directive REPLACE --recursive --cache-control max-age=2592000

Max-age value is in seconds: 2592000 seconds = 30 days
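For reference, the arithmetic behind that value (variable names here are just for illustration):

```shell
# 30 days converted to the seconds expected by the max-age directive
DAYS=30
MAX_AGE=$((DAYS * 24 * 60 * 60))
echo "max-age=${MAX_AGE}"   # prints max-age=2592000
```

You can confirm the header actually took effect on an object with `aws s3api head-object --bucket bucket-name --key object-key`, which includes a CacheControl field in its output.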

Set default Cache-Control header for all objects uploaded using Laravel

This project uses Laravel, so I also updated the configuration so that the Cache-Control header is set correctly on all new objects uploaded going forward.

To set the default Cache-Control header, update the S3 settings within the config/filesystems.php file as follows:

's3' => [
    'driver' => 's3',
    'key' => env('AMAZON_S3_KEY'),
    'secret' => env('AMAZON_S3_SECRET'),
    'region' => env('AMAZON_S3_REGION'),
    'bucket' => env('AMAZON_S3_BUCKET'),
    'options' => [
        'CacheControl' => 'max-age=2592000', // 2592000 seconds = 30 days
    ],
],

With the default CacheControl setting defined, all new objects uploaded to Amazon S3 will have the Cache-Control header set to max-age=2592000. The setting can be overridden for an individual object if needed.
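As a sketch of that per-object override (the path and max-age value here are made up; Laravel's S3 driver passes these options through to the underlying Flysystem AWS S3 adapter, which accepts CacheControl as a per-call option):

```php
<?php

use Illuminate\Support\Facades\Storage;

// Hypothetical upload that overrides the default Cache-Control
// for this one object only (1 day instead of the 30-day default)
Storage::disk('s3')->put('images/banner.jpg', $contents, [
    'CacheControl' => 'max-age=86400',
]);
```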

This is my first post on Dev.to. Very cool! 😎

The REPLACE metadata directive will nuke all current metadata in favor of what is supplied in the CLI.
According to the AWS docs: --metadata-directive (string) Specifies whether the metadata is copied from the source object or replaced with metadata provided when copying S3 objects. Note that if the object is copied over in parts, the source object's metadata will not be copied over, no matter the value for --metadata-directive, and instead the desired metadata values must be specified as parameters on the command line. Valid values are COPY and REPLACE. If this parameter is not specified, COPY will be used by default. If REPLACE is used, the copied object will only have the metadata values that were specified by the CLI command. Note that if you are using any of the following parameters: --content-type, --content-language, --content-encoding, --content-disposition, --cache-control, or --expires, you will need to specify --metadata-directive REPLACE for non-multipart copies if you want the copied objects to have the specified metadata values.
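One way to work around that is to read back the metadata you care about before re-copying (a sketch; the bucket and key are placeholders, and this assumes AWS credentials are configured):

```shell
# Since REPLACE discards existing metadata, read the object's current
# Content-Type first and pass it back explicitly on the copy
BUCKET=bucket-name
KEY=path/to/image.jpg

CONTENT_TYPE=$(aws s3api head-object --bucket "$BUCKET" --key "$KEY" \
  --query ContentType --output text)

aws s3 cp "s3://$BUCKET/$KEY" "s3://$BUCKET/$KEY" \
  --metadata-directive REPLACE \
  --cache-control max-age=2592000 \
  --content-type "$CONTENT_TYPE"
```

The same pattern works for any other metadata (Content-Encoding, Content-Disposition, and so on) that you need to survive the copy.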


Be careful with the first command. It replaces all metadata and sets the content type to binary/octet-stream.


That is incorrect... The first command passes through the aws-cli's MIME-type checker, and only when it can't find the type will it set binary/octet-stream... More info here: github.com/aws/aws-cli/blob/9d8353...

However, if you use the second approach (which probably uses the AWS SDK), that does happen; you should define the 'Content-Type' header in the 'options' ;-)


For me, it did replace the content type from image/jpeg to binary/octet-stream; since I only had JPEG images, I was able to just add that to the command.

It also changed the permissions of the object, so I had to recreate the object ACL:

# List current object acl
aws s3api get-object-acl --bucket bucket-name --key object-key

# full copy command
aws s3 cp s3://bucket-name s3://bucket-name --metadata-directive REPLACE --recursive --cache-control max-age=2592000 --content-type image/jpeg --acl public-read
