In production, preventing data loss is often one of an engineer's highest priorities, usually right next to availability. Typically the focus is on customer data, but what about losing the data that drives your application? Many applications today store static assets in a cloud object store and serve them via a CDN. These static assets typically include things such as front-end application bundles, which are the heart of the end-user experience.
If you're working in the cloud, most providers offer extremely durable storage; AWS S3, for example, offers eleven 9's of durability. That durability doesn't help if someone accidentally deletes something, though. Of course there are access controls to mitigate this scenario: you could lock all engineers out of resources you consider critical to your operation, but then you've created another problem - "zero trust" is a roadblock to productivity for most teams and in many cases is overused.
Assuming you trust your engineers to access your static resources, how can you mitigate production mistakes - i.e., deletion of resources? There are many solutions depending on your RTO and RPO (recovery time and recovery point objectives). You need to weigh your tolerance for downtime against the cost of implementation and your ROI.
When it comes to storing objects in S3, you have a lot of options at your disposal. S3 Object Lock, a feature introduced at the end of 2018, lets you prevent the deletion of objects in S3 during a defined retention period. One caveat is that it doesn't actually prevent overwriting objects. Another caveat is that it isn't available on buckets created before November 26, 2018!
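As a concrete illustration, here is a minimal sketch using boto3. The bucket name, retention mode, and 30-day window are placeholder assumptions for the example, not recommendations from this post:

```python
import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled when the bucket is created.
# ("my-assets-bucket" is a placeholder; outside us-east-1 you would
# also pass CreateBucketConfiguration with a LocationConstraint.)
s3.create_bucket(
    Bucket="my-assets-bucket",
    ObjectLockEnabledForBucket=True,
)

# Apply a default retention rule: objects cannot be deleted for 30 days.
# GOVERNANCE mode can be bypassed by users with a special permission;
# COMPLIANCE mode cannot be bypassed by anyone until retention expires.
s3.put_object_lock_configuration(
    Bucket="my-assets-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
    },
)
```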
If Object Lock is available to you, though - you created your bucket after November 26, 2018 - you are well on your way to preventing accidental deletion of objects. What can you do to prevent overwriting objects? Enable object versioning. Object versioning alone will protect your objects for the most part. Coupled together, Object Lock and object versioning give you a lot of protection against accidental deletion and modification; the difference is that Object Lock will also prevent malicious deletion.
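Enabling versioning is a single API call. Here is a sketch, again with a placeholder bucket name and prefix, that turns it on and then lists the versions you could roll back to:

```python
import boto3

s3 = boto3.client("s3")

# Turn on versioning so an overwrite creates a new version instead of
# replacing the object, and a delete leaves a removable delete marker.
s3.put_bucket_versioning(
    Bucket="my-assets-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Listing versions shows every revision of each object, so an accidental
# overwrite or delete can be recovered by restoring a prior version.
response = s3.list_object_versions(Bucket="my-assets-bucket", Prefix="bundles/")
for version in response.get("Versions", []):
    print(version["Key"], version["VersionId"], version["IsLatest"])
```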
Great. But what if you're working with a bucket created before November 26, 2018? And what if you do want zero trust to some extent - i.e., pure immutability? Enter S3 replication. With S3 replication you can replicate to another bucket, either cross-region or same-region; replication effectively copies all of your newly added objects once you have created the replication policy.
There is some nuance to how your buckets must be configured, and you can read more about that HERE, but in general, once you have replication configured you can also apply zero trust, Object Lock, and versioning to the destination bucket, giving you peace of mind that your data is backed up, secure, and highly available (S3 offers 99.99% availability).
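To make that concrete, here is a sketch of a replication policy in boto3. The bucket names and IAM role ARN are placeholder assumptions; both buckets must already have versioning enabled, and the role must allow S3 to read from the source and write to the destination:

```python
import boto3

s3 = boto3.client("s3")

# Replicate every new object from the source bucket to a backup bucket.
# The destination can be in the same region or a different one; that is
# determined by where the destination bucket lives, not by this config.
s3.put_bucket_replication(
    Bucket="my-assets-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-all",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},  # empty filter = replicate every new object
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::my-assets-backup"},
            }
        ],
    },
)
```

Note that replication only copies objects added after the policy exists, which is why it pairs well with locking down the destination: the backup fills up going forward while staying immutable.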
With data replicated to another bucket, either cross-region or same-region, you are well on your way to a highly available system for your customers. In a future post we will show how you can use your replicated storage as a failover when needed.