This post explains the principles behind "Consistent Hashing" with the help of some interactive React + SVG demos here and there.
The source of the interactive demos can be found in the accompanying GitHub repo.
The Problem
Consistent hashing was originally applied in the late 90s to the problem of caching websites. The goal was to have a shared cache for many nearby users, e.g. on a university campus. If one of these users requested a website, the cache would first be checked, and only in case of a cache miss would the request be routed to the server hosting the website. The apparent benefits of such a cache are an overall better user experience due to reduced response times and less internet traffic.
However, the catch is that a single machine can hardly provide enough memory for storing cached websites. Depending on the number of users accessing websites through a shared cache, hundreds or even thousands of servers may be required. A shared website cache thus comprises a lot of servers on which the cached websites are somehow distributed.
The naive approach to look up a particular website in the cache would be to iterate over all the involved servers and check if it's there, which is obviously inefficient. It would be nice if we had some sort of lookup function that tells us right away which server to ask for a given website:
f(URL) -> server
Hash Functions
Luckily there are hash functions which will help us here. A hash function maps values of an arbitrarily large domain (e.g. strings representing website URLs) to a smaller domain with a restricted set of values (e.g. 32-bit integers) and comes with these properties:
- cheap to compute
- deterministic - the same input always results in the same output
- kind of random behaviour - maps input randomly across possible values in target domain without noticeable correlation
You can find a comprehensive list of hash functions here.
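To make these properties concrete, here is a minimal sketch of a non-cryptographic hash function. The demos in this post use xmur3; I'm using the well-known FNV-1a here purely for illustration, since any function with the listed properties would do:

```typescript
// FNV-1a: a simple, fast, non-cryptographic 32-bit hash function.
function fnv1a32(input: string): number {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193); // FNV prime, with 32-bit overflow semantics
  }
  return hash >>> 0; // force an unsigned 32-bit result
}

// Deterministic: hashing the same URL twice yields the same value.
console.log(fnv1a32("https://example.com") === fnv1a32("https://example.com")); // true
```

Note the use of `Math.imul` to get correct 32-bit integer multiplication in JavaScript/TypeScript, where plain `*` would lose precision on large intermediate values.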
Note that there is a class of hash functions called cryptographic hash functions with some additional properties:
- it is infeasible to generate a hash function input that yields a given hash value (i.e. to reverse the process that generated the given hash value)
- it is infeasible to find two different hash function inputs with the same hash value
Since our problem of determining the cache server based on a URL is free of any security concerns, we're good to go with a simple non-cryptographic hash function. Of course, any cryptographic hash function would work - but with the downside of a higher computing cost.
Now let's assume we have chosen a suitable hash function h, which gives us a 32-bit integer for an arbitrary input string (all the demos below use xmur3). How do we map the hash value to our set of a few hundred or thousand cache servers, considering that the number of cache servers can change over time?
Naive Approach
Given that we have m servers addressed from 0 to m-1, the most straightforward way to get a server associated with a specific URL would be:
server = h(URL) % m
Applying the modulo here works reasonably well if the number of cache servers is known in advance and unlikely to change over time. But if m changes (e.g. a server goes down, or we have to add a couple more servers to increase our cache capacity), potentially all URLs cached so far would be reassigned to another server and invalidated. While that might seem borderline acceptable for our use case of caching websites, it is not in general. If the number of servers on which data is distributed is constantly changing, applications will suffer drastically as affected data parts have to relocate frequently.
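We can measure this effect directly. The sketch below counts how many of 10,000 keys get reassigned when the server count grows from 10 to 11 (the URLs are made up, and FNV-1a stands in for the hash function, as any hash would show the same behaviour):

```typescript
// A simple non-cryptographic 32-bit hash (FNV-1a), standing in for h.
function fnv1a32(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0;
}

// 10,000 hypothetical URLs to distribute.
const keys = Array.from({ length: 10000 }, (_, i) => `https://example.com/page/${i}`);

// Assign each key to one of m servers via the naive modulo approach.
const assign = (m: number) => keys.map((k) => fnv1a32(k) % m);

const before = assign(10);
const after = assign(11);
const moved = keys.filter((_, i) => before[i] !== after[i]).length;
console.log(`${moved} of ${keys.length} keys reassigned`);
```

Roughly 9 out of 10 keys end up on a different server after adding just one server, whereas ideally only about 1/11 (~9%) of the keys would have to move.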
Applying the modulo is a common technique to map potentially large integers to a smaller domain. Change the number of nodes in the demo below. You can observe that often almost all of the URLs would be reassigned to another node.
Consistent Hashing
Consistent hashing is a surprisingly simple approach (once you get it) that keeps the redistribution of URLs to servers to a minimum. Even if the number of cache servers m changes over time, most of our cached websites stay assigned to the same cache server.
Let's briefly rephrase our problem statement in a more general fashion and stick to this terminology for the rest of this post.
We want to evenly distribute an arbitrary amount of data to a limited set of nodes so that a change in their number causes a minimal set of data to be reassigned to other nodes.
Let's define d as the key identifying a certain piece of data (e.g. a URL representing a website) we want to associate with a node n. Furthermore, let's assume we use a suitable hash function h.
The main twist of consistent hashing is that in addition to hashing the keys (a shorter way of saying applying the hash function to the keys), we also hash the node identifiers (something unique like a URL or an IP address). That way, we have both our keys and nodes represented as hash values.
A key d is then associated with the node whose hash value is the nearest successor of d's hash value. If there is no such node (which can certainly happen), the node with the overall minimum hash value is taken. That means we basically wrap around by forming a hash ring: the end of the hash space connects to the start.
Put another way, we clockwise search for the next hashed node h(n) on our hash ring starting from our hashed key h(d).
With consistent hashing, only k/m keys need to be reassigned on average when a node is added or removed, where k is the number of keys and m is the number of nodes.
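The clockwise successor search described above can be sketched in a few lines. This is purely illustrative (the node hash values below are made up, and a linear scan is used for clarity):

```typescript
// Find the clockwise successor of a key's hash on the hash ring.
// `nodeHashes` is assumed to be sorted in ascending order.
function lookup(keyHash: number, nodeHashes: number[]): number {
  // walk clockwise: the first node hash >= keyHash is the successor
  for (const nodeHash of nodeHashes) {
    if (nodeHash >= keyHash) return nodeHash;
  }
  // no successor found: wrap around to the node with the minimum hash
  return nodeHashes[0];
}

console.log(lookup(50, [10, 40, 90])); // 90
console.log(lookup(95, [10, 40, 90])); // 10 (wrap-around)
```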
The demo below shows three nodes and a key on our hash ring. The wide arc represents the key's partition, with an arrow pointing to the assigned node. You can fiddle around by entering other key values.
You can ignore the suffix _0 in the shown node identifiers for now. I'll explain it in the next section.
(Note that this demo and the following ones are pre-bundled in Glitch. If you want to poke around the sources, have a look at the GitHub repo. See the last section about the reasons for pre-bundling.)
The following demo shows nine nodes, of which three are active. The current key is assigned to node-11. Turn this one off and then node-13. Observe how the key gets reassigned. Play around, toggle other nodes and try out different keys.
You may have noted that the distribution of nodes on the hash ring in the demos is not so bad, given that we place them randomly. Well, I cheated a bit to make the visualization easier to understand and to let the nodes not overlap each other. This brings us to the next topic.
Virtual Nodes
This basic version of consistent hashing - while certainly better than the (modulo-based) naive one - still has some drawbacks:
- Due to hashing, an even distribution of nodes on the hash ring cannot be guaranteed, so the space (partition size) between two adjacent nodes can vary considerably. It is possible to have partitions that are very small or very large.
- Similarly, the keys may not get distributed uniformly on the hash ring, resulting in empty or overcrowded partitions.
To mitigate these issues, real-world implementations of consistent hashing often represent a node multiple times on the hash ring via virtual nodes. This can be done simply by hashing the concatenation of a node identifier with a number. For example, if we wanted to have each node represented three times on the hash ring, a node identifier node-11 could be described with the virtual identifiers node-11_0, node-11_1 and node-11_2. (I applied this naming schema in the demos, in case you're wondering.)
Alternatively, instead of deriving virtual node identifiers from the virtual node count, we could apply a different hash function to each node identifier, as described in these excellent Stanford lecture notes. However, since this approach is more involved, I used the naming scheme for simplicity.
Instead of having the same virtual node count for each of our server nodes, we could also assign a different number of representations on the hash ring depending on a node's capacity (e.g. CPU or storage). Nodes with a higher capacity could be configured to have more virtual nodes, summing up to a larger share of the hash ring and a higher probability of keys being assigned to them.
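Generating the virtual identifiers is trivial with the naming scheme described above. A minimal sketch (the count parameter could just as well be derived from a node's capacity):

```typescript
// Derive virtual node identifiers by suffixing the node identifier
// with an index, e.g. "node-11" -> "node-11_0", "node-11_1", "node-11_2".
function virtualIds(node: string, count: number): string[] {
  return Array.from({ length: count }, (_, i) => `${node}_${i}`);
}

console.log(virtualIds("node-11", 3)); // ["node-11_0", "node-11_1", "node-11_2"]
```

Each of these identifiers is then hashed and placed on the ring individually, so one physical node occupies several positions.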
The demo below shows the effect virtual nodes have on the partition size. It highlights all partitions belonging to the selected node. Initially, each node is represented only by a single virtual node, as in the previous demos. Go ahead and try cranking the number of virtual nodes up and down!
Implementation Notes
I won't walk you through the implementation of consistent hashing or any of the demos shown in this post. That would go beyond the scope I've planned for this write-up. Instead, just some short general remarks. (If you're interested in more implementation details, let me know in the comments. Maybe I'll then find time for a follow-up post.)
To make the node lookup as fast as possible, we should refrain from sequentially iterating over all our (virtual) nodes and computing their hashes each time we want to look up the node assigned to a key. A better approach is to store the nodes in a data structure optimized for fast retrieval. In particular, the task "here is a key hash; return the smallest of all your current node hashes greater than that" should perform well.
A binary search tree (BST) is an excellent option here. The BST would be sorted by node hashes and additionally each node hash would be associated with the corresponding node identifier for a reverse lookup of the (virtual) node based on the found hash. Adding or removing a node and adjusting the number of virtual nodes would update the binary search tree accordingly.
We also need a map that allows us to look up the physical node based on a virtual one.
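As a concrete sketch of these two structures working together: a sorted array with binary search can stand in for the BST, paired with a map from virtual to physical node identifiers. All hash values and node names below are made up for illustration:

```typescript
type RingEntry = { hash: number; virtualId: string };

// The ring: virtual node hashes paired with their identifiers, sorted by hash.
const ring: RingEntry[] = [
  { hash: 120, virtualId: "node-7_0" },
  { hash: 310, virtualId: "node-11_0" },
  { hash: 870, virtualId: "node-7_1" },
];

// Reverse lookup: virtual node -> physical node.
const physical = new Map<string, string>([
  ["node-7_0", "node-7"],
  ["node-7_1", "node-7"],
  ["node-11_0", "node-11"],
]);

// Binary search for the smallest node hash >= keyHash, wrapping around
// to the first entry if the key hash is beyond the last node hash.
function lookupNode(keyHash: number): string {
  let lo = 0;
  let hi = ring.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (ring[mid].hash >= keyHash) hi = mid;
    else lo = mid + 1;
  }
  const entry = ring[lo === ring.length ? 0 : lo];
  return physical.get(entry.virtualId)!;
}

console.log(lookupNode(300)); // "node-11"
console.log(lookupNode(900)); // "node-7" (wraps around to node-7_0)
```

A sorted array is a pragmatic choice when node changes are rare compared to lookups; a self-balancing BST pays off when nodes join and leave frequently.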
Finally, the essential operations a consistent hashing implementation must provide to be useful (in TypeScript notation):

```typescript
type ConsistentHash = {
  addNode(node: string): void;
  removeNode(node: string): void;
  lookupNode(key: string): string;
};
```
This assumes a fixed virtual node count, either as an implementation detail or as a parameter during initialization. If we wanted more flexibility in this regard, i.e. adjusting the virtual node count at runtime, we could extend our consistent hashing API with:
```typescript
type ConsistentHash = {
  // ...
  setVirtualNodeCount(count: number, node?: string): void;
};
```
This way, we are able to set the virtual node count per single node or globally.
Looking for a finger exercise? Why don't you try to implement consistent hashing then?
Summary
Consistent hashing as an approach originated from the problem of building an efficient distributed cache for websites and has found wide adoption in a broad range of distributed system scenarios.
Data partitioning is undoubtedly one of the main applications of consistent hashing, but there are other limited resources a node in a distributed system can have (besides storage capacity). For example, if you wanted to design a large scale chat application with millions of users, you would quickly realize that the number of web socket connections a single server can handle is limited. Thus, assigning web clients to web socket servers is yet another use case consistent hashing can handle.
Take care & happy coding!
Meta Note
I wanted to write a short explanatory text sprinkled with some interactive demos.
Given that all demos in this post (except the first) exceed the amount of code I'm willing to write in an online IDE (capable of showing previews here on dev.to), I was kind of lost at first and wondered how to embed these interactions. After some tries, I eventually decided to deploy them as pre-bundled static websites to Glitch. And yes, I'm very aware this is not how Glitch wants you to use it.
I wish I could simply import the demos in an MDX-like way, as these are all React components. That feature, along with some fine-grained control over the imported component's size, would really be awesome.
Very interested to hear about your approaches regarding embedding apps for demo purposes in your posts!