With the advancement of cloud technologies, many enterprises are moving to the cloud, mainly to reduce their total cost of ownership. The cloud is also becoming the technology innovation center for enterprises, providing cutting-edge technology that influences the architecture of enterprise systems. At the heart of these systems, cloud storage and compute play a major role. A range of tools and technologies, such as AWS Storage Gateway, Microsoft Azure StorSimple, and NetApp ONTAP Cloud, is available to manage enterprise cloud storage and compute. This article focuses on storage capabilities that help enhance enterprise systems.
#1 Elastic Disk Storage with Adjustable Throughput and Storage Type Selection
If you have used cloud compute services such as Amazon EC2, Microsoft Azure Virtual Machines, or Google Compute Engine, you have already used block storage services. In the past, you needed to shut down or restart the virtual machine to increase the volume size. Now cloud providers make it possible to increase the volume size, adjust performance, or change the volume type while the volume is in use, so your application keeps running while the change takes effect. AWS recently introduced this feature, and other cloud providers will likely follow. This allows you to provision only the storage your enterprise systems currently need and increase it on demand (manually or automatically), paying only for what you use, which reduces costs even further.
Cloud block storage not only allows you to adjust disk throughput (input/output operations per second, or IOPS) but also lets you choose the right storage technology for your enterprise application workloads, for example SSD or magnetic disks (throughput optimized or cold storage). To compare the disk storage options across the popular public cloud providers, refer to the following links.
- Amazon Elastic Block Storage Volume Types
- Azure Disk Storage
- Google Cloud Platform Storage Options
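Before asking the provider to modify a live volume (for example, via boto3's `modify_volume` on AWS), it helps to validate the requested size and IOPS against the volume type's limits. The sketch below is a minimal example of that check; the constants are assumptions modeled on AWS gp3-style volumes and may differ from a provider's current quotas.

```python
# Assumed gp3-style limits -- verify against your provider's documentation.
GP3_MIN_SIZE_GIB = 1
GP3_MAX_SIZE_GIB = 16_384
GP3_MIN_IOPS = 3_000        # assumed baseline
GP3_MAX_IOPS = 16_000
GP3_MAX_IOPS_PER_GIB = 500  # assumed IOPS:size ratio limit

def validate_volume_change(new_size_gib: int, new_iops: int) -> list:
    """Return a list of problems; an empty list means the change looks valid."""
    problems = []
    if not GP3_MIN_SIZE_GIB <= new_size_gib <= GP3_MAX_SIZE_GIB:
        problems.append("size %d GiB out of range" % new_size_gib)
    if not GP3_MIN_IOPS <= new_iops <= GP3_MAX_IOPS:
        problems.append("IOPS %d out of range" % new_iops)
    elif new_iops > new_size_gib * GP3_MAX_IOPS_PER_GIB:
        problems.append("IOPS exceed the allowed IOPS:size ratio")
    return problems
```

If the list comes back empty, the actual online change is a single API call on AWS (`ec2.modify_volume(VolumeId=..., Size=..., Iops=...)`); the volume stays attached and in use while the modification progresses.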
#2 Encryption at Rest
Today, almost all popular public cloud providers allow you to encrypt disk storage to meet enterprise compliance needs. Some providers, such as Google Cloud Platform, encrypt data by default, while others offer encryption as an option you select based on your storage compliance requirements.
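Under the hood, providers typically use envelope encryption: a random per-object data key encrypts the data, and the data key itself is encrypted by a master key held in a key management service (KMS). The toy sketch below illustrates only the shape of that flow; the keystream cipher is a stand-in for demonstration and is not real cryptography (production services use AES-256 and a managed KMS).

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256 counter-mode keystream.
    For illustration only -- real encryption at rest uses AES-256."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

data_key = secrets.token_bytes(32)            # per-object data key
ciphertext = keystream_xor(data_key, b"customer record")
plaintext = keystream_xor(data_key, ciphertext)  # XOR cipher is symmetric
# In a real service, data_key would itself be encrypted by a KMS master key
# and stored alongside the ciphertext, never in plaintext.
```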
#3 Multi-Level Storage Redundancy
In the cloud, the level of redundancy depends on the type of storage you are using. For example, disk storage provides redundancy through replication within the same data center or across multiple data centers that are physically close together and connected by low-latency fiber-optic networks.
When you use shared disk storage across multiple virtual machines, services like AWS Elastic File System replicate the storage across physically separate data centers, providing high availability and fault tolerance and supporting disaster recovery.
For object storage services such as AWS S3, Azure Blob Storage, and Google Cloud Storage, redundancy goes even deeper: objects are replicated across several data centers within a region and can also be replicated across different geographical regions.
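A back-of-envelope calculation shows why replication matters for durability. Assuming each copy fails independently with annual probability p (a simplification that real systems approach by placing replicas in separate data centers), losing the object requires every copy to fail:

```python
def loss_probability(p: float, n: int) -> float:
    """Probability of losing an object stored as n independent replicas,
    each with annual failure probability p. Assumes independent failures."""
    return p ** n

single = loss_probability(0.01, 1)  # one copy:    1% annual loss chance
triple = loss_probability(0.01, 3)  # three copies: one in a million
```

The independence assumption is optimistic, which is exactly why providers spread replicas across data centers and regions: correlated failures (power, fire, flood) are what break it.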
#4 Virtually Infinite Storage
Depending on the storage type, there are limits on file size, the size of a single storage instance, maximum throughput, and so on. Given your requirements, you should select the storage option that scales best with your workloads. When a single instance is not enough, you can provision multiple instances of the storage, giving your enterprise workloads virtually infinite capacity.
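One common way to spread data across multiple storage instances is to hash each object key and map it to a shard. The sketch below assumes hypothetical bucket names; adding more entries to the list grows total capacity.

```python
import hashlib

# Hypothetical shard names -- in practice these would be real buckets,
# volumes, or file systems provisioned with the provider.
BUCKETS = ["data-shard-0", "data-shard-1", "data-shard-2", "data-shard-3"]

def bucket_for(object_key: str) -> str:
    """Deterministically map an object key to one of the shards."""
    digest = hashlib.sha256(object_key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(BUCKETS)
    return BUCKETS[index]
```

Note that simple modulo sharding remaps most keys when the shard count changes; schemes like consistent hashing reduce that churn, at the cost of extra bookkeeping.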
#5 Different Consistency Models
Cloud storage options come with different consistency models, mainly because of the levels of redundancy these storage solutions provide. Depending on the storage type in use, your enterprise application needs to be designed to support these models. Cloud storage commonly provides two consistency models: strong consistency and eventual consistency.
With strong consistency, once a record is written to storage, subsequent read requests are guaranteed to see the latest update. With eventual consistency, reads issued shortly after a write may return either the old or the new object, depending on the replication lag. For a properly designed application, eventual consistency offers better performance with high durability (through replication), because the underlying storage system doesn't block writes until every replica has acknowledged the update.