There are always times when the available space in the database is exhausted, and we need to take action.
Generally, the most common approach is vertical scaling, also known as scale-up, which involves increasing the specifications of the machine directly to expand the available space. Another alternative solution is horizontal scaling, also known as scale-out or sharding, which enhances overall available space by distributing data across different machines.
However, both scale-up and scale-out involve costs, and these can be substantial. For organizations with budget constraints, neither of these approaches may be immediately feasible. So, what can be done?
In such cases, the only option is to squeeze as much space as possible out of the existing machines, by whatever means available.
In this article, we take MongoDB as an example to explore the viable approaches for squeezing out available space.
Indexes are a trade-off of space for time to improve query performance. Therefore, if unused indexes can be identified and removed, the initially consumed space can be reclaimed.
Why do indexes become redundant?
There are several common reasons as follows:
- MongoDB's WiredTiger is a B-tree-based storage engine, so compound indexes support leftmost-prefix matching. Thus, if an old index is a prefix of a newer compound index, the old index can be safely removed without affecting query performance.
- Due to feature iterations, some queries that were required initially are no longer in use.
Regarding the first point, we can easily identify which indexes can be removed by carefully scanning each index. However, for the second point, some additional preparation is necessary.
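The prefix check itself is mechanical. Here is a minimal sketch, assuming each index is represented as an ordered list of `[field, direction]` pairs in the order they appear in the index definition:

```javascript
// Sketch: decide whether one index is a strict prefix of another, in which
// case the shorter one is redundant and can be dropped.
function isPrefixIndex(shorter, longer) {
  if (shorter.length >= longer.length) return false;
  return shorter.every(
    ([field, dir], i) => longer[i][0] === field && longer[i][1] === dir
  );
}

// { userId: 1 } is covered by the compound index { userId: 1, createdAt: -1 }.
const redundant = isPrefixIndex(
  [["userId", 1]],
  [["userId", 1], ["createdAt", -1]]
); // → true
```

Both field names and directions must match for the prefix to count; `{ userId: -1 }` would not be covered by `{ userId: 1, createdAt: -1 }`.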
We need to know which indexes are not being used. Apart from tracing clues through the code, there is a more efficient approach: directly examining the index metrics. In the case of MongoDB, the `$indexStats` aggregation stage provides usage statistics for each index.
By comparing the statistics from two time periods, we can determine which indexes are not being used.
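The comparison can be sketched as follows. `name` and `accesses.ops` are the fields `$indexStats` actually returns (with `ops` being a cumulative access counter); the snapshot data below is made up for illustration:

```javascript
// Sketch: given two snapshots of db.coll.aggregate([{ $indexStats: {} }])
// taken some time apart, report indexes whose access counter did not move.
function unusedIndexes(before, after) {
  const earlier = new Map(before.map((s) => [s.name, s.accesses.ops]));
  return after
    .filter((s) => earlier.has(s.name) && s.accesses.ops === earlier.get(s.name))
    .map((s) => s.name);
}

const snapshotA = [
  { name: "_id_", accesses: { ops: 1200 } },
  { name: "status_1", accesses: { ops: 55 } },
];
const snapshotB = [
  { name: "_id_", accesses: { ops: 4300 } },
  { name: "status_1", accesses: { ops: 55 } }, // untouched between snapshots
];
const candidates = unusedIndexes(snapshotA, snapshotB); // → ["status_1"]
```

Note that `$indexStats` counters reset on server restart, so make sure both snapshots come from the same uptime window.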
The above-mentioned approach of deleting indexes is a relatively straightforward solution that does not affect the production environment. However, if more available space needs to be squeezed out, consider deleting unused data.
Defining what constitutes unused data entirely depends on the application. For instance, some applications use flags to mark softly deleted documents, allowing these softly deleted documents to be removed or archived to cold storage.
Another scenario involves time-series data, where defining how old the data should be before it can be deleted becomes relevant. However, whichever solution is chosen depends entirely on the application.
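Both patterns above reduce to a couple of mongosh commands. This is only a sketch: the collection names and the `deleted`/`createdAt` fields are hypothetical, while the TTL index is MongoDB's built-in mechanism for expiring time-based data automatically:

```javascript
// Purge soft-deleted documents ("deleted" is an example flag field).
db.orders.deleteMany({ deleted: true });

// For time-based data, a TTL index lets MongoDB expire old documents itself.
// Here, documents whose "createdAt" is older than 90 days are removed.
db.events.createIndex({ createdAt: 1 }, { expireAfterSeconds: 90 * 24 * 3600 });
```

These commands require a live deployment, so run them against a staging environment first.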
This third approach, sharding, is more advanced than the previous two and directly impacts the performance of the production environment.
If the MongoDB cluster is already a sharded cluster but some collections have not been sharded, setting the appropriate shard key and enabling sharding can be a relatively straightforward process.
However, if all collections have already been sharded, it's essential to examine which collections have uneven data distribution, and to consider modifying the shard key or even taking further steps to rebalance.
I previously wrote an article describing how to correctly design a shard key to achieve as even a data distribution as possible.
We can determine whether data distribution is uniform by using the `sh.status()` command. Its output explicitly shows how many shards a collection occupies and how many chunks each shard holds; a chunk is the unit of data distribution.
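For the unsharded-collection case, enabling sharding is a short sequence in mongosh. This is a sketch: the namespace `app.events` and the hashed key on `customerId` are illustrative choices, not recommendations for every workload:

```javascript
// Run against mongos. A hashed shard key spreads monotonically increasing
// values evenly across shards; choose the key based on your query patterns.
sh.shardCollection("app.events", { customerId: "hashed" });

// Then inspect how chunks end up distributed across shards.
sh.status();
```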
It seems like we have several approaches that should all be effective if carefully implemented, right?
In reality, after implementing the second and third approaches, we observe no improvement in available space. In fact, the situation may even worsen, especially after executing the third approach.
The following diagram shows the used space of a particular shard on the y-axis over time on the x-axis. Here, `t1` denotes the start of the sharding process, while `t2` marks its completion.
We aimed to shard a collection that had not been sharded yet to release the available space in the original shard. However, as we can see from the diagram, after the sharding process was completed, the situation worsened, which was entirely contrary to our expectations.
Why did this happen?
The reason lies in WiredTiger's behavior. After documents are deleted, it does not immediately release the space back to the operating system. Instead, it keeps the already allocated file space reserved for future writes to the same collection.
The intention is well-meaning: disk I/O is expensive, and having pre-allocated space ready for reuse is valuable. However, this contradicts our goal of reclaiming space immediately after deletion. Hence, although data is deleted, the available space does not increase.
Why did the used space in the diagram not only fail to decrease but actually grow?
During the sharding process, to enable more efficient sharded queries, an index is created on the shard key. The additional space consumption represents the space occupied by these indexes.
Can we determine how much space WiredTiger has covertly consumed? The answer is yes.
Executing the command `db.collection.stats()` yields an output field, "file bytes available for reuse," that describes exactly the space that has been covertly retained.
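Extracting that figure is a one-liner once you have the stats document. The nested path below is the one WiredTiger reports; the sample numbers are made up for illustration:

```javascript
// Sketch: pull the reusable-space figure out of a db.collection.stats()
// document. This is the space compact can reclaim for the collection.
function bytesAvailableForReuse(stats) {
  return stats.wiredTiger["block-manager"]["file bytes available for reuse"];
}

const sampleStats = {
  wiredTiger: {
    "block-manager": {
      "file size in bytes": 8589934592,          // 8 GiB on disk
      "file bytes available for reuse": 3221225472, // 3 GiB reclaimable
    },
  },
};
const reusable = bytesAvailableForReuse(sampleStats); // → 3221225472
```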
If we can find it, we can certainly reclaim it. Running the `compact` command accomplishes this. It's worth noting, however, that the space regained through `compact` can only be reused by the same collection, so the problem remains only partially resolved.
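Invoking it looks like this in mongosh; "orders" is an example collection name, and the blocking behavior of `compact` varies by server version, so check the documentation for yours before running it in production:

```javascript
// Run against the database that owns the collection.
db.runCommand({ compact: "orders" });
```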
To enable all collections to reuse the occupied space, a complex procedure is required. Let me simplify the explanation.
When we add a new member to a replica set in MongoDB, the new member performs an initial sync to copy the current data. This initial sync copies only the actual data, not the reserved-but-unused space. In other words, the new member's data files carry none of the dead space.
Thus, if we gradually replace all the members of the replica set, we obtain a new cluster (a Ship of Theseus, if you like) holding the same dataset but without the dead space. This process does require a "slight" additional budget for one extra machine to serve as the new member, but that machine is no longer needed once the replacement is complete, making this far cheaper than scaling up or scaling out.
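One round of the rolling replacement can be sketched as follows, run from mongosh while connected to the replica set primary. The hostnames are placeholders; repeat the round once per old member, waiting for the new member to finish its initial sync before removing an old one:

```javascript
// Add the fresh machine; this triggers an initial sync on it.
rs.add("new-member-1.example.net:27017");

// Poll until the new member's stateStr reaches "SECONDARY".
rs.status();

// Only then retire the old member.
rs.remove("old-member-1.example.net:27017");
```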
For any organization, finances are always a significant concern, particularly when it comes to database expenses, which can be quite substantial. While we aim to minimize costs wherever possible, the options available to us are limited. However, let's quickly summarize the three methods to squeeze out available space:
- Remove useless indexes
- Remove useless data
- Shard collections (or rebalance existing shards)
These three methods are listed in order of precedence: the first is the quickest to implement and has the least impact on the production environment, while the last is the slowest and most disruptive.
Moreover, executing solutions 2 and 3 requires additional processes to ensure that the space is genuinely available and not occupied.
Perhaps there are some secret techniques that I haven't thought of yet. Feel free to share them with me.