DEV Community

Viraj Nadkarni

Optimizing Data Management on AWS


In the cloud, a good data management plan is of utmost importance for several reasons. Cloud costs are typically linked to usage, so poor data handling can drive up costs through unnecessary or outdated data. Further, since cloud data might cross several regions, it's essential to adhere to different regulations and uphold strict security standards, which includes measures like encryption and access control.

This blog touches upon five critical areas you should be looking at with regard to data management. For more exhaustive detail, please refer to the AWS Well-Architected whitepaper.

Do not assume uniform data storage and access patterns

Too often, we assume that all data can be managed uniformly in a single storage type. For workloads of any sizeable complexity, this is rarely the case: every workload has its own data storage and access requirements. Assuming that all workloads share similar patterns, or using a single storage tier for everything, leads to inefficiencies. The sooner these patterns are recognized the better, because catering to them reduces the resources required to meet business needs and improves the overall efficiency of the cloud workload. To address this, regularly evaluate your data characteristics and access patterns, and plan to migrate data to the storage type that best aligns with them. Also, understand that this is not a one-time activity but an exercise that needs to be conducted regularly.

For a comprehensive evaluation, the decision guides provided by AWS for storage and database services should be a good starting point.
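As a concrete sketch of matching storage to access patterns, S3 Intelligent-Tiering can move objects that haven't been accessed for a configurable number of days into archive tiers. The configuration below is illustrative: the configuration ID and the `logs/` prefix are assumptions, and the day thresholds shown are the minimums S3 accepts for each tier.

```python
# Sketch: an S3 Intelligent-Tiering configuration that archives objects
# not accessed for 90/180 days. Apply it (with valid credentials and a
# real bucket) via:
#   boto3.client("s3").put_bucket_intelligent_tiering_configuration(
#       Bucket="my-bucket", Id=tiering_config["Id"],
#       IntelligentTieringConfiguration=tiering_config)
tiering_config = {
    "Id": "archive-cold-data",       # hypothetical configuration name
    "Status": "Enabled",
    "Filter": {"Prefix": "logs/"},   # limit to a prefix you believe is cold
    "Tierings": [
        {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
        {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
    ],
}
```

S3 Storage Lens or Storage Class Analysis can help you find which prefixes actually exhibit cold access patterns before applying a configuration like this.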

Have a solid data classification strategy

Data classification is the process of categorizing data based on its sensitivity and criticality. A common mistake organizations make is failing to identify the types of data they process and store, and how sensitive and important each type is. This ends up being a massive oversight: without a classification strategy you may have inappropriate security controls in place, and even run into compliance, regulatory, or legal issues.
By having a proper data classification policy, organizations can determine the most performant and cost-optimized storage tier for their data (and the most energy-efficient one, if sustainability is a key driver). To come up with a data classification strategy, conduct an inventory of the various data types and then determine their criticality. Also audit the environment periodically, not just for untagged and unclassified data, but also to re-evaluate the classification conducted earlier and see whether it needs to change with business conditions.

AWS Data classification guide
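One lightweight way to make a classification policy enforceable is to record the classification as an object tag, so lifecycle rules, access policies, and audits can key off it. This is a sketch under assumptions: the tag key, the level names, and the helper function are illustrative, not an AWS standard.

```python
# Sketch: attach a sensitivity classification to S3 objects as a tag.
# The levels and the "DataClassification" tag key are assumptions --
# align them with your organization's own classification policy.
CLASSIFICATION_LEVELS = {"public", "internal", "confidential", "restricted"}

def classification_tagset(level: str) -> dict:
    """Build the Tagging payload for s3.put_object_tagging()."""
    if level not in CLASSIFICATION_LEVELS:
        raise ValueError(f"unknown classification level: {level}")
    return {"TagSet": [{"Key": "DataClassification", "Value": level}]}

# Hypothetical usage (requires credentials and a real bucket/key):
# boto3.client("s3").put_object_tagging(
#     Bucket="my-bucket", Key="reports/q1.csv",
#     Tagging=classification_tagset("confidential"))
```

A periodic audit can then list objects missing this tag, which covers the "untagged and unclassified data" check mentioned above.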

Use policies to manage lifecycle of your data

Data has a lifecycle, just like everything else, and you need a plan to control it from the time it is first created to the time it is archived or deleted. As data moves through this lifecycle, its storage requirements often change. If you are not managing the lifecycle of your data, chances are it is sitting in costly or inefficient storage. The recommended approach is to first identify the lifecycle pattern for your data and then use automated lifecycle policies to manage these datasets.
Doing so ensures that data is stored in the most appropriate storage tier at each stage of its lifecycle. Note that the lifecycle evaluation should cover your data characteristics, access patterns at each stage of the lifecycle, handling of old or rarely used data, archival, and finally data deletion.
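The automated-policy approach can be sketched as an S3 lifecycle rule: transition to Standard-IA after 30 days, to Glacier Flexible Retrieval after 90, and delete after a year. The thresholds, rule ID, and `logs/` prefix are assumptions for illustration; derive your own from the access-pattern evaluation above.

```python
# Sketch: one S3 lifecycle rule covering transition and expiration.
# All numbers and names here are illustrative assumptions.
lifecycle_rule = {
    "ID": "logs-lifecycle",
    "Status": "Enabled",
    "Filter": {"Prefix": "logs/"},
    "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},  # warm -> infrequent access
        {"Days": 90, "StorageClass": "GLACIER"},      # infrequent -> archive
    ],
    "Expiration": {"Days": 365},                      # delete after a year
}

# Hypothetical usage (requires credentials and a real bucket):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket",
#     LifecycleConfiguration={"Rules": [lifecycle_rule]})
```

Because the rule is declarative, the transitions happen without any scheduled jobs on your side, which is what makes the lifecycle "automated".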

Get rid of redundant or unneeded data

Just as keeping underutilized or idle resources running on the cloud costs money, so does data. Storing redundant or unneeded data consumes storage resources unnecessarily and increases costs. By removing such data, organizations can free up valuable storage resources and reduce their environmental impact. This problem manifests in different forms: data that is unnecessarily backed up, duplicated, or stored redundantly irrespective of its criticality (touched upon in item 2), or data that is easy to recreate if the need arises.
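A simple way to surface exact duplicates is to group objects by a content hash and flag any hash shared by more than one key. This is a minimal local sketch: in practice the key/content pairs would come from listing a bucket (and for large objects you would hash streams, not whole byte strings).

```python
import hashlib
from collections import defaultdict

def find_duplicates(objects):
    """objects: iterable of (key, content_bytes) pairs.
    Returns groups of keys whose contents are byte-identical."""
    by_hash = defaultdict(list)
    for key, body in objects:
        by_hash[hashlib.sha256(body).hexdigest()].append(key)
    return [keys for keys in by_hash.values() if len(keys) > 1]

# Stand-in data; real keys/bodies would come from your storage inventory.
sample = [
    ("a/report.csv", b"q1 data"),
    ("backup/report.csv", b"q1 data"),   # exact duplicate of the first
    ("b/notes.txt", b"unique content"),
]
print(find_duplicates(sample))  # [['a/report.csv', 'backup/report.csv']]
```

For duplicates across buckets at scale, S3 Inventory reports (which include ETags) are a cheaper starting point than re-reading every object.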

Monitor data movement to reduce costs

Monitoring, optimizing, and minimizing data movement across networks reduces the overall resources needed to support that movement, which indirectly lowers your overall costs besides helping in other areas such as performance.

Ask yourself: have you considered the proximity of data to the users of your workload when selecting a region in which to store it? Are you leveraging services such as Lambda@Edge or CloudFront Functions that help you process data closer to your users? Is the serving of data itself optimized? Is the data being served in efficient file formats, or compressed? Is the data being moved in line with your business needs? Have you evaluated that only relevant data, and only at the level of granularity your application needs, is being passed around?
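The "efficient formats or compressed" question can be made concrete with a small sketch: compressing a repetitive JSON payload before it crosses the network. The records here are synthetic, and the actual ratio always depends on your data, but structured, repetitive records typically compress very well.

```python
import gzip
import json

# Synthetic, repetitive records standing in for a real payload.
records = [{"id": i, "status": "ok", "region": "us-east-1"} for i in range(500)]

raw = json.dumps(records).encode("utf-8")
packed = gzip.compress(raw)

# Fewer bytes on the wire means fewer network resources per transfer.
print(f"raw: {len(raw)} bytes, gzipped: {len(packed)} bytes")
```

Columnar formats such as Parquet push the same idea further for analytical data, since queries can then read only the columns (the granularity) they need.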

To conclude, data management plays a pivotal role in an organization's sustainability journey. By avoiding these anti-patterns and implementing the associated best practices, organizations can use their resources efficiently, and thereby reduce costs and minimize environmental impact.
