Satoshi Nakamatsu

How to back up a MySQL dump to AWS S3

#!/bin/bash

db_name="YOUR_DATABASE_NAME"
db_user="YOUR_DATABASE_USER"
db_host="YOUR_DATABASE_HOST"
s3_bucket="YOUR_S3_BUCKET_NAME"
s3_key="YOUR_S3_KEY_PATH"

# Timestamped object name for the dump, e.g. db_dump_240131_09:15:42.sql
filename="db_dump_$( date "+%y%m%d_%H:%M:%S" ).sql"

# Dump the database and copy it to S3 via process substitution.
# Note: -p makes mysqldump prompt for the password interactively.
aws s3 cp <( mysqldump -u "${db_user}" -p -h "${db_host}" "${db_name}" ) \
  "s3://${s3_bucket}/${s3_key}/${filename}"
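One practical caveat: the -p flag makes mysqldump prompt for the password interactively, which gets in the way of unattended backups (for example from cron). A minimal non-interactive sketch, reusing the variables above and assuming the credentials live in a MySQL option file (the /path/to/backup.cnf path and its contents are only placeholders):

# backup.cnf, readable only by the backup user:
#   [client]
#   user=YOUR_DATABASE_USER
#   password=YOUR_DATABASE_PASSWORD

# --defaults-extra-file must be the first option passed to mysqldump.
aws s3 cp <( mysqldump --defaults-extra-file=/path/to/backup.cnf -h "${db_host}" "${db_name}" ) \
  "s3://${s3_bucket}/${s3_key}/${filename}"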

Top comments (2)

Thomas H Jones II

The AWS CLI's s3 sub-command allows reading from STDIN and writing to STDOUT. If you're running MySQL as an EC2-hosted process (rather than taking advantage of RDS), you should be able to save yourself a step (and some staging space) by changing the above to:

#!/bin/bash

db_name="YOUR_DATABASE_NAME"
db_user="YOUR_DATABASE_USER"
db_host="YOUR_DATABASE_HOST"
s3_bucket="YOUR_S3_BUCKET_NAME"
s3_key="YOUR_S3_KEY_PATH"

filename="db_dump_$( date "+%y%m%d_%H:%M:%S" ).sql"

# Pipe the dump straight into the AWS CLI; "-" tells `aws s3 cp` to read from STDIN.
mysqldump -u "${db_user}" -p -h "${db_host}" "${db_name}" | \
  aws s3 cp - "s3://${s3_bucket}/${s3_key}/${filename}"
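Since the dump is just a stream at that point, you can also compress it in the same pipeline before it ever reaches S3. A small sketch of that variation (gzip and the .gz suffix are simply one choice, not something the post prescribes):

# Compress on the fly and upload the gzipped dump.
mysqldump -u "${db_user}" -p -h "${db_host}" "${db_name}" | gzip | \
  aws s3 cp - "s3://${s3_bucket}/${s3_key}/${filename}.gz"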
 
Satoshi Nakamatsu

Thank you for your comment.

The script is much simpler now.
Thanks.