James Beswick

Restoring a snapshot to the same EC2 instance

Snapshots in EC2 are easy to automate with Lifecycle Manager, but when the time comes to restore a snapshot, how do you do it?

The best approach is usually to create an image and start a new EC2 instance from this image. This fits into the mindset of treating instances as disposable, fungible assets, and the process is straightforward.
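
If you prefer the CLI for that approach, here's a rough sketch - the snapshot ID, AMI name, instance type, and key pair below are placeholders you'd swap for your own values:

# Register an AMI that uses the snapshot as its root device
aws ec2 register-image \
    --name "restored-from-snapshot" \
    --root-device-name /dev/xvda \
    --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=snap-0123456789abcdef0}" \
    --virtualization-type hvm

# Launch a replacement instance from the new AMI
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --key-name my-key-pair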

But if you just want to access the snapshot to restore a couple of files back to the original instance, here's an alternative method that might be easier in many cases.

Step 1: Create a volume from the snapshot.

Remember, the snapshot itself cannot be mounted or attached to anything - it's like a frozen, immutable copy of the data. So you must first create a volume based on the snapshot:

  • In the Snapshots menu in the EC2 console, right-click the required snapshot and click "Create Volume".
  • Ensure the size of the volume is at least the size of the snapshot.
  • Critically, make sure the availability zone selected matches the AZ of the EC2 instance you intend to mount this to.
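
If you'd rather do this step from the AWS CLI, the equivalent looks something like this - the snapshot ID and availability zone are placeholders, and gp3 is just one sensible volume type:

# Create a volume from the snapshot, in the same AZ as the target instance
aws ec2 create-volume \
    --snapshot-id snap-0123456789abcdef0 \
    --availability-zone us-east-1a \
    --volume-type gp3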

Step 2: Attach the volume to the EC2 instance.

Creating the volume from the snapshot copies the data from the snapshot to the volume. Now that you have a usable volume, you need to attach it to the EC2 instance:

  • From the Volumes menu in the EC2 console, right-click the volume and click "Attach volume".
  • Select the instance and click 'Attach'.
  • Back in the Instances menu, if you select the instance you will see a new block device called '/dev/sdf'.
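
The CLI equivalent is a one-liner - the volume and instance IDs below are placeholders:

# Attach the new volume to the instance as /dev/sdf
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf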

Step 3: Mount the volume.

If you ssh into your EC2 instance and type lsblk, you will see two devices connected to the instance. Typically xvda will be the root device and xvdf is the new volume. You will also see the new device is partitioned, usually with the name xvdf1.

There are a couple of gotchas here that are important. If you try a typical mount command (sudo mount /dev/xvdf1 /mnt/), you'll see an error claiming the filesystem type is incorrect.

If you run sudo lsblk --output NAME,TYPE,SIZE,FSTYPE you'll see the filesystem is xfs, but if you then try to mount specifying the filesystem type (sudo mount /dev/xvdf1 /mnt/ -t xfs), you get the same error. So what's happening?

Taking a look at the error log provides a little more insight - run dmesg | tail and you'll see the command fails because 'Filesystem has duplicate UUID'. And this makes sense, since we are restoring a snapshot back to the original instance where it was created.

To solve this problem, simply add the -o nouuid flag to the original command:

sudo mount -o nouuid /dev/xvdf1 /mnt/

You'll now find the new volume is mounted in the /mnt directory and you can access all the files from the snapshot.
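
Putting the whole thing together, a typical session on the instance might look like this - the file being copied back is just a hypothetical example:

# Identify the attached volume and its partition
sudo lsblk --output NAME,TYPE,SIZE,FSTYPE

# Mount the partition, ignoring the duplicate UUID
sudo mount -o nouuid /dev/xvdf1 /mnt/

# Copy the files you need back to their original location
sudo cp /mnt/home/ec2-user/important.conf /home/ec2-user/important.conf

# Unmount when you're done, before detaching the volume
sudo umount /mnt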

For a full walkthrough of this process, see my YouTube video.

Snapshots versus Volumes

I've heard plenty of questions about the differences between the two, so I wanted to summarize some Q&A:

Does every volume have a snapshot?
No, you have to explicitly create a snapshot, either manually or with an automated policy like Lifecycle Manager.
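
For example, creating one manually from the CLI (the volume ID is a placeholder):

# Manually create a snapshot of a volume
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "Manual backup"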

If I delete a volume what happens to its snapshots?
Nothing - the snapshots will not be deleted.

If I delete a snapshot what happens to its volume?
Nothing - the volume will not be deleted.

Does every snapshot have only one associated volume?
A snapshot knows which volume it is created from, but you can later delete a volume without deleting the snapshot. You can also create multiple volumes later from the same snapshot.
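
If you want to see which of your snapshots came from a given volume, you can filter on the volume ID (a placeholder below):

# List your snapshots created from a specific volume
aws ec2 describe-snapshots \
    --owner-ids self \
    --filters Name=volume-id,Values=vol-0123456789abcdef0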

Is the data in the volume or the volume's snapshot?
Both. Think of snapshots as immutable, so if you want to use the snapshot's data you have to create a copy as a volume.

Does AWS charge for snapshots or volumes or both?
Both.

Have an AWS question? Ask me on Twitter @jbesw. Thanks!
