ES Index — S3 Snapshot & Restoration
The question is: what brings you here? Fed up with all the searches on how to back up and restore specific indices?
Fear not, for your quest ends here!
After going through dozens of tiny gists and manual pages, here it is. We've done all the heavy lifting for you.
The following tutorial was tested on Elasticsearch v5.4.0.
And before we proceed, remember:
Do’s:
Make sure that the Elasticsearch version of the backed-up cluster is less than or equal to the restoring cluster's version.
Don'ts:
Unless it's absolutely necessary, don't run:
curl -XDELETE 'http://localhost:9200/nameOfTheIndex'
# deletes a specific index
And especially not when you are drunk:
curl -XDELETE 'http://localhost:9200/_all'
# deletes all indices
(This is where the drunk part comes in!)
Step 1: Install S3 plugin support
sudo bin/elasticsearch-plugin install repository-s3
# (or)
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install repository-s3
Which one to use depends on where your elasticsearch-plugin executable is installed. The plugin enables the Elasticsearch instance to communicate with AWS S3 buckets.
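Note that the plugin is only picked up after the node restarts. A minimal sketch, assuming a systemd-managed installation (adjust to however you run Elasticsearch):
sudo systemctl restart elasticsearch
# confirm the plugin is loaded
curl -XGET 'http://localhost:9200/_cat/plugins?v'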
Step 2: Register the snapshot repository
METHOD : PUT
URL: http://localhost:9200/_snapshot/logs_backup?verify=false&pretty
PAYLOAD:
{
  "type": "s3",
  "settings": {
    "bucket": "WWWWWW",
    "region": "us-east-1",
    "access_key": "XXXXXX",
    "secret_key": "YYYYYY"
  }
}
In the URL:
- logs_backup: name of the snapshot repository
In the payload JSON:
- bucket: "WWWWWW" is where you enter the name of the S3 bucket.
- access_key & secret_key: the values "XXXXXX" and "YYYYYY" are where you key in the access key and secret key for the bucket, based on your IAM policies. If you need help finding them, here's a guide: https://aws.amazon.com/blogs/security/wheres-my-secret-access-key/
- region: the region where the bucket is hosted (choose any from http://docs.aws.amazon.com/general/latest/gr/rande.html).
This should return {"acknowledged": true}.
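Putting it all together as a single curl call (a sketch, using the repository name and placeholder credentials from above):
curl -XPUT 'http://localhost:9200/_snapshot/logs_backup?verify=false&pretty' -H 'Content-Type: application/json' -d '
{
  "type": "s3",
  "settings": {
    "bucket": "WWWWWW",
    "region": "us-east-1",
    "access_key": "XXXXXX",
    "secret_key": "YYYYYY"
  }
}'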
Step 3: Cloud-Sync — list all snapshots
METHOD : GET
URL: http://localhost:9200/_cat/snapshots/logs_backup?v
In the URL:
- logs_backup: name of the snapshot repository
Time to sync up the list of snapshots. If all our settings have been synced up just fine, we should end up with a list of snapshots, close to the one shown below.
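An illustrative sketch of the _cat output (your snapshot ids, timings, and shard counts will differ):
id                  status  start_epoch start_time end_epoch  end_time duration indices successful_shards failed_shards total_shards
logstash-2017.11.21 SUCCESS 1511244000  05:20:00   1511244060 05:21:00 1m       1       5                 0             5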
Step 4: Creating a snapshot
METHOD : PUT
URL: http://localhost:9200/_snapshot/logs_backup/type_of_the_backup?wait_for_completion=true
PAYLOAD:
{
  "indices": "logstash-2017.11.21",
  "include_global_state": false,
  "compress": true,
  "encrypt": true
}
In the URL:
- logs_backup: name of the snapshot repository
- type_of_the_backup: the name of the snapshot; could be any string
In the payload JSON:
- indices: the index which is to be backed up to the S3 bucket. To back up multiple indices under a single restoration point, enter them as an array.
- include_global_state: set to 'false' just to make sure there's cross-version compatibility. WARNING: if set to 'true', the snapshot can be restored only to a cluster running the source version of ES.
- compress: enables compression of the index meta files backed up to S3.
- encrypt: in case extra encryption of the indices is necessary.
Since wait_for_completion=true is set, this should return a JSON body describing the snapshot once it finishes; look for "state": "SUCCESS".
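As a single curl call (a sketch; here the snapshot is named after the index it contains, which keeps the Step 3 listing easy to read, but any string works):
curl -XPUT 'http://localhost:9200/_snapshot/logs_backup/logstash-2017.11.21?wait_for_completion=true' -H 'Content-Type: application/json' -d '
{
  "indices": "logstash-2017.11.21",
  "include_global_state": false,
  "compress": true,
  "encrypt": true
}'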
Step 5: Restoring a snapshot
METHOD : POST
URL: http://localhost:9200/_snapshot/logs_backup/snapshot_to_be_restored/_restore
PAYLOAD:
{
  "ignore_unavailable": true,
  "include_global_state": false
}
In the URL:
- logs_backup: name of the snapshot repository
- snapshot_to_be_restored: any of the snapshot ids listed in Step 3
In the payload JSON:
- ignore_unavailable: it's safe to set this to true; it skips any requested indices that are missing from the snapshot instead of failing the whole restore.
- include_global_state: set to 'false' just to make sure there's cross-version compatibility. WARNING: if set to 'true', the snapshot can be restored only to a cluster running the source version of ES.
This should return {"accepted": true}, and the restore proceeds in the background.
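As a single curl call (a sketch; substitute a snapshot id from your own Step 3 listing):
curl -XPOST 'http://localhost:9200/_snapshot/logs_backup/logstash-2017.11.21/_restore' -H 'Content-Type: application/json' -d '
{
  "ignore_unavailable": true,
  "include_global_state": false
}'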
Et voilà! The restoration is complete.
And don't forget to reclaim the space taken up by the index by safely deleting it once you no longer need it. Reuse, Reduce & Recycle :)
Happy Wrangling!!!
Originally published at https://www.datawrangler.in on December 15, 2017.