Introduction
Dragonfly excels at high performance and vertical scaling, making it a top choice for demanding modern data workloads.
Soon, Dragonfly Cluster¹ will offer horizontal scalability as well, expanding its capabilities even further.
In this article, I want to show you how to run Dragonfly Cluster and provide a concise overview of Dragonfly Cluster's internal processes.
Dragonfly Cluster Overview
Like Redis Cluster, Dragonfly Cluster achieves horizontal scalability through sharding.
A cluster comprises one or more shards, each consisting of a master node (primary) and zero or more replicas.
Data is distributed across shards using a slot-based approach.
- The hashing space in the cluster is divided into 16,384 slots.
- Each key is hashed into a slot: the hash slot is computed by applying the CRC16 algorithm to the key name and taking the result modulo 16,384.
- Each node in a cluster is responsible for a subset of these slots.
Dragonfly Cluster supports dynamic rebalancing without downtime.
Hash slots can be seamlessly migrated from one node to another, allowing for the addition, removal, or resizing of nodes without interrupting operations.
Dragonfly Cluster supports multi-key operations as long as all involved keys (in a multi-key command, in a transaction, or in a Lua script) reside within the same hash slot.
To ensure this, Dragonfly employs hash tags.
With hash tags, the system calculates the hash slot based solely on the content within curly braces {} of a key.
This mechanism allows users to explicitly control key distribution across shards.
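For example, keys that share the same content inside {} are guaranteed to land in the same slot (the key names here are purely illustrative, and the ports refer to the cluster we build later in this article):

# Both keys hash on "user:1000", so they share a slot and can be used
# together in one command (assuming this node owns that slot):
$> redis-cli -p 30001 MSET {user:1000}:name alice {user:1000}:visits 7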
If a client requests keys from a node, but those keys belong to a hash slot managed by a different node, the client receives a -MOVED redirection error.
This ensures that the client can always find the correct node handling the requested keys.
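For instance, asking the wrong node for a key produces a redirection of the following form (the slot number and address depend on the key and the cluster layout, so placeholders are shown here):

$> redis-cli -p 30001 GET {user:1000}:name
(error) MOVED <slot> <ip>:<port>

Most Redis Cluster clients follow these redirections automatically; with redis-cli, the -c flag enables this behavior.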
Dragonfly provides only the server, not a control plane for managing cluster deployments.
Node health monitoring, automatic failovers, and slot redistribution are outside the scope of the Dragonfly backend
and will be provided as part of the Dragonfly Cloud service.
Dragonfly Cluster offers seamless migration for existing Redis Cluster clients.
It fully adheres to Redis Cluster's client-facing behavior, ensuring zero code changes for applications.
However, Dragonfly takes a fundamentally different approach to cluster management.
Unlike Redis Cluster's distributed consensus model, Dragonfly adopts a centralized management strategy.
Nodes operate independently, without direct communication or shared state.
This design choice provides a single source of truth and enhances simplicity, reliability, and performance.
Cluster Modes
Dragonfly has two cluster modes.
Emulated Cluster Mode (which can be enabled by --cluster_mode=emulated) is fully compatible with the stand-alone mode,
supporting SELECT and multi-key operations while also providing cluster commands like CLUSTER SHARDS.
It functions as a single-node Dragonfly instance and does not include horizontal scaling, resharding, or certain advanced cluster features.
This mode is ideal for:
- Development & Testing Environments: Provide a simplified setup for rapid iteration.
- Migration Phases: Serve as an interim solution when transitioning from a stand-alone to a clustered setup.
- Resource-Constrained Scenarios: A Dragonfly instance in emulated cluster mode can optimize resource utilization by acting as a replica for multiple shards, allowing a single node to replicate several cluster nodes efficiently.
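Starting an instance in this mode only requires the flag mentioned above; cluster commands then work against the single node:

$> ./dragonfly --cluster_mode=emulated --port 6379
$> redis-cli -p 6379 CLUSTER SHARDS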
Multi-Node Cluster Mode (which can be enabled by --cluster_mode=yes) is the Dragonfly Cluster we are talking about in this blog post.
It has certain limitations compared to stand-alone or emulated modes:
- The SELECT command is not permitted.
- All keys in a multi-key operation (i.e., multi-key commands, transactions, and Lua scripts) must belong to the same slot. Otherwise, a CROSSSLOT error is returned, as shown in the example below.
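Here is what that looks like in practice (key names are illustrative, and the exact error text may vary slightly):

$> redis-cli -p 30001 MSET user:1:name alice user:2:name bob
(error) CROSSSLOT Keys in request don't hash to the same slot

As discussed earlier, hash tags can be used to force related keys into the same slot and avoid this error.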
Dragonfly supports some CLUSTER commands for compatibility with Redis Cluster clients. These commands primarily provide informational data about the cluster setup:
- CLUSTER HELP: Lists available CLUSTER commands.
- CLUSTER MYID: Returns the node ID.
- CLUSTER SHARDS: Displays information about cluster shards.
- CLUSTER SLOTS: Lists all slots and their associated nodes.
- CLUSTER NODES: Shows information about all nodes in the cluster.
- CLUSTER INFO: Provides general information about the cluster.
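For example, once the cluster from the next section is up, any node can be queried for its view of the topology:

$> redis-cli -p 30001 CLUSTER INFO     # overall cluster state
$> redis-cli -p 30001 CLUSTER SHARDS   # slot ranges and the nodes that own them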
Dragonfly Cluster Management
DFLYCLUSTER Commands
DFLYCLUSTER commands are specific to Dragonfly and offer more advanced cluster management capabilities:
- DFLYCLUSTER CONFIG: Manages node roles, slot assignments, and migration processes.
- DFLYCLUSTER GETSLOTINFO: Provides in-depth statistics about slot utilization, including key count, memory usage, and read/write operations.
- DFLYCLUSTER FLUSHSLOTS: Efficiently clears data from specific slots.
- DFLYCLUSTER SLOT-MIGRATION-STATUS: Monitors the progress of slot migrations, indicating the current state and completion status.
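For instance, slot statistics can be requested from a node as sketched below (I am assuming the SLOTS keyword syntax here; check the Dragonfly documentation for the exact argument form):

$> redis-cli -p 31001 DFLYCLUSTER GETSLOTINFO SLOTS 0 1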
Cluster Creation
To begin building a Dragonfly cluster, we'll start by launching two separate Dragonfly instances in cluster mode.
Each instance will have a unique ID.
$> ./dragonfly --cluster_mode=yes --admin_port=31001 --port 30001
$> ./dragonfly --cluster_mode=yes --admin_port=31002 --port 30002
Once the instances are running, we need to retrieve their unique IDs by using the following commands:
$> redis-cli -p 31001 CLUSTER MYID
$> redis-cli -p 31002 CLUSTER MYID
In my case, the two unique IDs have prefixes 97486c... and 728cf2..., respectively.
Now we can create a cluster config in JSON format (as a string) by plugging in the unique IDs and IP addresses of the two nodes above.
[
{
"slot_ranges": [
{
"start": 0,
"end": 999
}
],
"master": {
"id": "97486c9d7e0507e1edb2dfba4655224d5b61c5e2",
"ip": "localhost",
"port": 30001
},
"replicas": []
},
{
"slot_ranges": [
{
"start": 1000,
"end": 16383
}
],
"master": {
"id": "728cf25ecd4d1230805754ff98939321d72d23ef",
"ip": "localhost",
"port": 30002
},
"replicas": []
}
]
By using the DFLYCLUSTER CONFIG command, the JSON string is sent to both of our nodes, and the Dragonfly Cluster is created.
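For example, assuming the JSON above is saved to a file named cluster.json (the file name is just for convenience), it can be pushed to each node through its admin port:

$> redis-cli -p 31001 DFLYCLUSTER CONFIG "$(cat cluster.json)"
$> redis-cli -p 31002 DFLYCLUSTER CONFIG "$(cat cluster.json)"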
Slot Migration
Slot migration is essential for adjusting the cluster configuration to meet changing demands, but it is a critical operation that can lead to data loss if not executed carefully.
Dragonfly supports concurrent slot migrations, but only one migration can be in progress between any two specific nodes at a given time.
This means multiple migrations can be initiated simultaneously across different node pairs within a cluster.
To initiate and complete a slot migration, use the DFLYCLUSTER CONFIG command, specifying an additional migrations field in the JSON configuration.
In the example below, I decided to move slots [1000, 8000] to another node, and here's how the JSON configuration string looks:
[
{
"slot_ranges": [
{
"start": 0,
"end": 999
}
],
"master": {
"id": "97486c9d7e0507e1edb2dfba4655224d5b61c5e2",
"ip": "localhost",
"port": 30001
},
"replicas": []
},
{
"slot_ranges": [
{
"start": 1000,
"end": 16383
}
],
"master": {
"id": "728cf25ecd4d1230805754ff98939321d72d23ef",
"ip": "localhost",
"port": 30002
},
"replicas": [],
"migrations": [
{
"slot_ranges": [
{
"start": 1000,
"end": 8000
}
],
"node_id": "97486c9d7e0507e1edb2dfba4655224d5b61c5e2",
"ip": "localhost",
"port": 31001
}
]
}
]
To maintain cluster consistency during slot migrations, the DFLYCLUSTER CONFIG command is propagated to all nodes, even those not directly involved.
This ensures that all nodes have an up-to-date view of the cluster configuration, preventing inconsistencies that might arise from concurrent migration processes or failures.
To monitor the progress of a slot migration, use the following command:
$> redis-cli -p 31002 DFLYCLUSTER SLOT-MIGRATION-STATUS
Once the migration status shows FINISHED, the new cluster configuration (with updated slot_ranges) can be applied to all nodes:
[
{
"slot_ranges": [
{
"start": 0,
"end": 8000
}
],
"master": {
"id": "97486c9d7e0507e1edb2dfba4655224d5b61c5e2",
"ip": "localhost",
"port": 30001
},
"replicas": []
},
{
"slot_ranges": [
{
"start": 8001,
"end": 16383
}
],
"master": {
"id": "728cf25ecd4d1230805754ff98939321d72d23ef",
"ip": "localhost",
"port": 30002
},
"replicas": []
}
]
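As before, this configuration is pushed to every node with DFLYCLUSTER CONFIG (again assuming the JSON is saved to a file for convenience):

$> redis-cli -p 31001 DFLYCLUSTER CONFIG "$(cat new-config.json)"
$> redis-cli -p 31002 DFLYCLUSTER CONFIG "$(cat new-config.json)"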
Upon applying the new cluster configuration, data from migrated slots is permanently erased from the source node.
Migrations can be canceled by removing the migrations field from the configuration while preserving slot assignments.
If the updated configuration isn't applied promptly after migration completion, the cluster enters a transitional state where nodes have inconsistent slot information.
Clients may initially be redirected to the source node for migrated slots, but subsequent requests to the source node will be correctly routed to the target node.
Although this temporary inconsistency exists, it doesn't compromise data integrity.
Replicas
Dragonfly replication is configured using the standard REPLICAOF command, identical to non-clustered setups.
Despite replication details being included in the cluster configuration, replicas function independently, copying data directly from the master node.
This means replicas replicate all data from the master node, regardless of slot assignment.
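As a minimal sketch, a third instance could be attached as a replica of the first master from the earlier examples (the port numbers and the cluster-mode flag here are assumptions following the setup above):

$> ./dragonfly --cluster_mode=yes --admin_port=31003 --port 30003
$> redis-cli -p 30003 REPLICAOF localhost 30001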
Slot Migration Process Under the Hood
Dragonfly utilizes a set of internal commands, prefixed with DFLYMIGRATE, to manage the slot migration process.
The slot migration process involves several carefully coordinated steps to ensure data integrity and seamless transitions between nodes.
1. Initiating Migration: The process begins with the DFLYCLUSTER CONFIG command sent to the source and target nodes to configure migration parameters.
2. Preparing the Target Node: The source node sends DFLYMIGRATE INIT [SOURCE_NODE_ID, SHARDS_NUM, SLOT_RANGES] to the target node. The target node responds OK, indicating it is ready to receive data.
3. Setting Up Data Transfer: For each storage thread-shard (a data segment handled by a single thread), the source node sends DFLYMIGRATE FLOW [SOURCE_NODE_ID, FLOW_ID]. Each DFLYMIGRATE FLOW command sets up a connection for data transfer, confirmed by an OK response.
4. Transferring Data: Using the connections established by the DFLYMIGRATE FLOW commands, the source node serializes and transfers data to the target node.
5. Finalizing Migration: After the data transfer, the migrated slots on the source node are blocked to prevent further data changes. The source node then sends a finalization request for each FLOW connection to conclude the data transfer.
6. Completing the Process: The source node issues DFLYMIGRATE ACK [SOURCE_NODE_ID, ATTEMPT_ID] to the target node to finalize the entire migration. The target node responds with ATTEMPT_ID, completing the migration. The ATTEMPT_ID is used to handle errors that may arise during the finalization process.
While most steps in the migration process are straightforward, step 4 above requires a more detailed explanation due to its complexity.
In Dragonfly, there are two sources of data that need to be sent to the target node: the snapshot and the journal.
We will dive deeper into these two sources of data below.
Snapshot Creation
To create a snapshot, Dragonfly iterates through each storage shard, serializing the data.
In the absence of write requests, this is a linear process where each bucket is serialized one by one.
Periodic pauses are incorporated to allow the system to process new requests, ensuring minimal disruption.
The process becomes more complex when write requests occur during serialization.
Unlike Redis, which uses a fork mechanism to prevent data changes during serialization,
Dragonfly employs a more sophisticated mechanism, incorporating versioning and pre-update hooks, to create snapshots without spiking the memory usage or causing latency issues.
Journal Serialization
While handling the snapshot, Dragonfly also manages a journal, which logs all recent write operations.
These journal entries are serialized and sent to the target node along with the snapshot data.
Let's look at a small example to illustrate the process:
- There are several data entries: A, B, C, and D.
- Data entries A and B are serialized and sent to the target node.
- A new MSET command is issued by the client, updating B and D.
- Because B was already serialized previously, we do nothing with it for now.
- Data entry D is serialized and sent to the target node.
- The journal gets the update about B and D, serializes it, and sends it to the target node.
- Finally, data entry C is serialized and sent to the target node.
By following this process, Dragonfly ensures that the target node receives a consistent version of the source node's data,
including all recent write operations during the slot migration process.
Conclusion
Dragonfly Cluster is a powerful addition to the Dragonfly ecosystem, offering horizontal scalability for even the most demanding workloads.
Modern servers can come equipped with over a hundred cores and several hundred gigabytes of memory.
While Dragonfly Cluster offers significant scalability advancements, vertical scaling should be prioritized when feasible, so evaluate that potential before implementing a cluster.
If you're uncertain about future vertical scaling needs, you can start with an emulated cluster and switch to a real cluster as your requirements grow.
In the meantime, if you are curious to see how Dragonfly can scale with your needs and workloads, the easiest way to get started is by using the cloud service backed by the Dragonfly core team.
Try Dragonfly Cloud today and experience the power of seamless scaling firsthand!
¹ At the time of writing, Dragonfly Cluster is not officially released yet. However, many of the features and cluster-related commands described in this article are already available in the Dragonfly main branch. We are actively testing and improving this feature.