Manoj Chaurasiya

How to Horizontally Scale Amazon ElastiCache Instance

Amazon ElastiCache provides in-memory data storage using Redis or Memcached. Scaling can be vertical (changing the instance size) or horizontal (adding nodes or replicas). Horizontal scaling distributes the data and increases capacity by adding more nodes to your ElastiCache cluster.

Redis Cluster Mode Enabled supports sharding out of the box, and this is the recommended way to horizontally scale Redis.

Let's start with horizontal scaling for an ElastiCache instance. Scaling can be done by updating shards or nodes; here we will scale with nodes (replicas). The AWS console lets you configure dynamic scaling for an ElastiCache cluster, but it only offers the CPU utilization metric.

If we want dynamic scaling driven by a different metric, we can create it through the AWS CLI.

Verify Metric Statistics

Before going further, let's cross-check that we are getting proper statistics for the primary node; in our case, the primary node's metric is used to set the threshold value for auto scaling. Select the metric name according to how your Redis cluster is used.

aws cloudwatch get-metric-statistics \
  --namespace AWS/ElastiCache \
  --metric-name NetworkBytesOut \
  --dimensions Name=CacheClusterId,Value=primary-node-0001-001 \
  --start-time 2024-09-13T00:00:00Z \
  --end-time 2024-09-13T23:59:59Z \
  --period 300 \
  --statistics Average
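The response contains a `Datapoints` array that is not guaranteed to be sorted. As a quick sketch for pulling out the latest average (the payload below is a hand-made sample in the same shape as the CLI response, not real measurements):

```shell
# Sample payload shaped like the `aws cloudwatch get-metric-statistics`
# response; the numbers are illustrative only.
response='{
  "Label": "NetworkBytesOut",
  "Datapoints": [
    {"Timestamp": "2024-09-13T00:00:00Z", "Average": 5200000000.0, "Unit": "Bytes"},
    {"Timestamp": "2024-09-13T00:05:00Z", "Average": 6900000000.0, "Unit": "Bytes"}
  ]
}'

# Datapoints are unordered, so pick the latest by Timestamp.
latest=$(echo "$response" | python3 -c '
import json, sys
dps = json.load(sys.stdin)["Datapoints"]
print(max(dps, key=lambda d: d["Timestamp"])["Average"])
')
echo "Latest NetworkBytesOut average: $latest"
```

In practice, pipe the real CLI output into the same one-liner instead of using the sample variable.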

Register Scalable Target

By registering the replication group as a scalable target, you define the allowed range of replicas; here a minimum of 1 and a maximum of 5 (the right numbers depend on cluster load). Based on the load and the defined scaling policy (here tracking NetworkBytesOut), Application Auto Scaling will automatically increase or decrease the number of replicas within those limits.

aws application-autoscaling register-scalable-target \
  --service-namespace elasticache \
  --resource-id replication-group/cluster-name-1 \
  --scalable-dimension elasticache:replication-group:Replicas \
  --min-capacity 1 \
  --max-capacity 5
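To confirm the registration, you can list the scalable targets in the elasticache namespace. This runs against the live account (it needs AWS credentials), so treat it as a sketch; the resource ID matches the one registered above.

```shell
# The registered replication group should appear here with
# MinCapacity 1 and MaxCapacity 5.
aws application-autoscaling describe-scalable-targets \
  --service-namespace elasticache \
  --resource-ids replication-group/cluster-name-1
```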

Auto Scaling Config File

Here's an example configuration file for ElastiCache auto scaling with Application Auto Scaling. It defines the CloudWatch metric used for scaling (NetworkBytesOut) along with the dimensions identifying the primary node. The scaling policy will use these CloudWatch data points to adjust the number of nodes based on usage patterns. Let's store this configuration in config.json.

{
  "TargetValue": 7000000000,
  "CustomizedMetricSpecification":
  {
    "MetricName": "NetworkBytesOut",
    "Namespace": "AWS/ElastiCache",
    "Dimensions": [
      {"Name": "CacheClusterId","Value":"primary-node-0001-001"},
      {"Name": "CacheNodeId","Value": "0001"}
    ],
    "Statistic": "Average"
  }
}
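Before attaching the file to a policy, it is worth sanity-checking that it parses as valid JSON. A minimal local sketch that writes the same configuration and validates it:

```shell
# Write the target-tracking configuration shown above to config.json.
cat > config.json <<'EOF'
{
  "TargetValue": 7000000000,
  "CustomizedMetricSpecification": {
    "MetricName": "NetworkBytesOut",
    "Namespace": "AWS/ElastiCache",
    "Dimensions": [
      {"Name": "CacheClusterId", "Value": "primary-node-0001-001"},
      {"Name": "CacheNodeId", "Value": "0001"}
    ],
    "Statistic": "Average"
  }
}
EOF

# Fail loudly if the file is not valid JSON.
python3 -m json.tool config.json > /dev/null && echo "config.json is valid JSON"
```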

Scaling Policy

The scaling policy defines the conditions under which the cluster should scale in (remove nodes) or scale out (add nodes). These conditions are usually based on metrics collected by Amazon CloudWatch (such as CPU or network usage); in our case it is NetworkBytesOut.

aws application-autoscaling put-scaling-policy \
    --policy-name elastic-cache-asg \
    --policy-type TargetTrackingScaling \
    --resource-id replication-group/cluster-name-1 \
    --service-namespace elasticache \
    --scalable-dimension elasticache:replication-group:Replicas \
    --target-tracking-scaling-policy-configuration file://config.json
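Once the policy is in place, it can be verified with `describe-scaling-policies` (again a live-account command that needs AWS credentials, shown as a sketch):

```shell
# Confirm the target-tracking policy is attached to the replication group.
aws application-autoscaling describe-scaling-policies \
  --service-namespace elasticache \
  --resource-id replication-group/cluster-name-1
```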

After you configure the scaling policy, two CloudWatch alarms are created automatically, one with an AlarmHigh suffix and one with an AlarmLow suffix. These alarms monitor the selected metric (in our case NetworkBytesOut) and trigger scaling actions when thresholds are crossed.

AlarmHigh: triggers when the metric exceeds the upper threshold, resulting in a scale-out action that adds nodes to the cluster, improving performance.
AlarmLow: triggers when the metric drops below the lower threshold, resulting in a scale-in action that removes unnecessary nodes, optimizing resource usage.

The alarms ensure that the number of nodes in the ElastiCache cluster is adjusted dynamically, maintaining both performance and cost-efficiency. You can monitor these alarms in CloudWatch to confirm that scaling actions are taking place as expected.
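The alarms created by target tracking carry a `TargetTracking-` name prefix, so they can be listed from the CLI (requires credentials for the live account):

```shell
# List the auto-created high and low target-tracking alarms.
aws cloudwatch describe-alarms --alarm-name-prefix TargetTracking
```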

Summary

In this post, we explored implementing horizontal auto scaling for an ElastiCache Redis cluster. One important point to note is that AWS ElastiCache supports auto scaling only for instance types m7g.large and above; if you're using smaller instance types, such as t3 or m6g.medium, auto scaling will not be available. To take advantage of automatic scaling, make sure your cluster uses at least an m7g.large instance type.
