Fair warning: this post may turn into a rant.
Prologue
In the second half of 2019, my company got an expected notice from our MongoDB provider, Atlas.
The notice was about the usual pain they inflict every now and then: a forced upgrade from old versions. At that time we were running MongoDB v3.4, so we were told to make sure we had a driver that supports v3.6, as all clusters would be upgraded at the end of January 2020.
All is well; we look at these upgrades as a necessary evil that causes pain in the short term but brings benefits in the long run. With newer MongoDB versions, the benefit was performance. We tested some of our heavier queries (ones we had already had issues with in production) and behold, they became 10 times faster. (We were comparing MongoDB v3.4 with v4.2 at the time.)
We thought: cool, 10x the power! Let's do this!
So we started our long journey of upgrades, tests, fixes and further upgrades and tests, tears and cries, laughter and anger.
Once we were satisfied with the upgrade, we deployed our first services that were already in need of that performance boost. Cool, we thought, surely we'd soon have co-workers coming to us saying: Boys, we don't know what happened, but the service is blazing fast!
Man, were we wrong! Sure, the queries looked fast, but there was a tiny issue: some of our calls to the database started to time out. Worst of all, those calls had actually been fast before. As icing on the cake, this didn't come to our attention straight away, but only a week later, when another new service wanted to sync data.
Once we noticed, we jumped into debugging. Looking at the database's real-time operations (`db.currentOp()`), we saw `aggregate` calls being run on our biggest collection. As we didn't recall using such heavy aggregations on that collection, we searched through our code base to find what could be issuing this command.
We managed to find a couple of places where we used aggregation, but none of them matched what we saw in the operations list.
Eventually a team member suggested that the aggregation was the way MongoDB does the count. I couldn't believe it at first, but then we read a bit more about the new `countDocuments` method, which the documentation suggests using instead of `count`, and it turned out that it is indeed slower, as it is more accurate.
From MongoDB’s JIRA ticket NODE-1638:
count has been deprecated for a number of reasons. First, and foremost, the existing count command could not be run within transactions, so an alternative approach needed to be taken. Secondarily, the count command in most cases was not accurate. We have replaced count with countDocuments and estimatedDocumentCount. Please use estimatedDocumentCount for the performance you are looking for, and parity with the original count.
So the reasons against `count`:
- it does not give accurate results, and
- it is not transaction friendly
From my point of view, those two are not really reasons to deprecate a core command that I think is quite needed.
`count` is not accurate
Okay it isn’t, but honestly what was accurate in MongoDB before? Like with iterating a cursor
(with mongoose stream
), you could easily miss documents or see others twice in the process. Unless you set read preference to snapshot, but even then if the process is long running and you have inserts in the mean time,then you won’t see the new documents, so it is still a meh solution.
For processing all the data in the database, even documents that didn't exist when the process started, we used a practice where we sorted by `_id` in ascending order, retrieved the data in batches, and used the last `_id` of each batch in a greater-than filter: `{ _id: { $gt: lastId } }`. This way we could process all the documents without duplicates, and if new documents were created while the process ran, no problem, we still got them.
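The batched greater-than scan described above can be sketched like this. The helper names (`fetchBatch`, `handleDoc`) are mine, not from our code base, and a real driver call would be asynchronous; this is just the shape of the loop:

```javascript
// Batched scan over _id in ascending order: each round asks for documents
// with _id greater than the last one seen, so nothing is processed twice
// and documents inserted mid-run are still picked up.
// fetchBatch(lastId, n) stands in for something like:
//   collection.find(lastId === null ? {} : { _id: { $gt: lastId } })
//             .sort({ _id: 1 }).limit(n)
function processAll(fetchBatch, handleDoc, batchSize) {
  let lastId = null;
  for (;;) {
    const batch = fetchBatch(lastId, batchSize);
    if (batch.length === 0) return; // no more documents: done
    batch.forEach(handleDoc);
    lastId = batch[batch.length - 1]._id; // resume after the last seen _id
  }
}
```

The trick is that the cursor state lives entirely in `lastId`, so the scan is restartable and naturally includes rows inserted after it began (as long as new `_id` values sort after the old ones, which holds for ObjectIds generated over time).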
Now, in the case of count, so far I haven't seen a case where we would have needed pinpoint accuracy. I can imagine there are cases where one needs it, but then, just as with the streaming above, there is a solution for it. The solution in this case is aggregation, and I'm sure that before the `countDocuments` command, developers were using it to get the accurate count they needed.
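For reference, the aggregation in question looks something like the pipeline below, which, as far as I can tell from the driver specification, is essentially what `countDocuments` runs under the hood (a `$match` followed by a `$group` with `$sum`). The helper name is mine:

```javascript
// Builds an aggregation pipeline for an accurate, filtered count.
// Passing the result to collection.aggregate(...) yields
// [{ _id: 1, n: <count> }], or an empty array when nothing matches.
function countPipeline(filter) {
  return [
    { $match: filter },                       // only count matching documents
    { $group: { _id: 1, n: { $sum: 1 } } },   // add 1 per document
  ];
}
```

This also makes it obvious why the accurate count is slow: the server has to visit every matching document (or index entry) instead of reading a number from collection metadata.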
It is nice that there is now a method in MongoDB that can give you an accurate count without fiddling around with aggregation. It is convenient for those who need it. Still, in my view, it is not a reason to deprecate `count`.
Not transaction safe
Well, okay, it isn't. I don't know; I never tried it. As I tend to work with microservices, I have never missed or wanted to use transactions; they are hard to implement across services. My preference for data consistency is to make operations idempotent, so it is safe to put them into job queues that guarantee at-least-once execution, giving eventual consistency.
Just to emphasize: I do respect that in some cases transactions can be the best or only solution, and it is nice that `countDocuments` is transaction safe. It is just still not a reason to deprecate `count`.
Solution
So while `count` was marked as deprecated in MongoDB v4.0, it is still alive and well in v4.2. Since the two suggested replacements:

- `countDocuments` - way too slow for us
- `estimatedDocumentCount` - cannot take a query

are both unsuitable for us, we reverted all our calls to the good old `count` method and accepted that our terminals will be showing deprecation warnings for a while.
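To make the trade-off concrete, here is a sketch of the three APIs against an in-memory stub collection. Everything here is illustrative: a real driver collection talks to the server and these methods return promises, and the exact fast path `count` takes depends on the filter and available indexes:

```javascript
// In-memory stub illustrating the behavioural differences between the
// three counting APIs in the Node.js driver.
const stubCollection = {
  docs: [{ status: 'a' }, { status: 'a' }, { status: 'b' }],
  metadataCount: 3, // the server keeps a (possibly stale) total in collection metadata

  matches(doc, filter) {
    return Object.keys(filter).every(key => doc[key] === filter[key]);
  },

  // count (deprecated): without a filter it can answer from metadata,
  // which is why it is fast but potentially inaccurate.
  count(filter) {
    if (!filter || Object.keys(filter).length === 0) return this.metadataCount;
    return this.docs.filter(doc => this.matches(doc, filter)).length;
  },

  // countDocuments: accurate and filterable, but it always does the work
  // of visiting the matching documents.
  countDocuments(filter = {}) {
    return this.docs.filter(doc => this.matches(doc, filter)).length;
  },

  // estimatedDocumentCount: metadata only; it takes no filter at all.
  estimatedDocumentCount() {
    return this.metadataCount;
  },
};
```

So if you need a filtered count, `estimatedDocumentCount` is simply not an option, and `countDocuments` is the only non-deprecated one left.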
For now, we hope they won't remove it, or that they will improve the performance of the new `countDocuments` method to be on par with `count`.
Finale
Okay, this has indeed become a rant, but you were warned. :D Sorry.