Hans-Peter Grahsl

A slightly closer look at MongoDB 5.0 time series collections - Part 1

(Banner image: charts)

Setting the Scene

Recently, at MongoDB.live 2021, one of the bigger feature announcements was that MongoDB version 5.0 introduces so-called time series collections. The information about them was primarily high-level, and the current documentation doesn't reveal all of the details either. This is why I decided to dig a little deeper to improve my personal understanding of what is going on behind the scenes when storing time series data with this new collection type in MongoDB.

The Past

For several years already, people have been using MongoDB to store their time series data. Some of them struggled initially and had to learn the hard way that one doesn't simply store time series data as is. The biggest mistake I've seen over and over again in the wild was that data wasn't stored in an optimized way. What I mean by that is that people didn't invest any further thought into proper schema design for their documents, but instead just inserted e.g. raw sensor measurements directly into collections. In almost all cases, doing so eventually led to a lot of storage and processing overhead, unnecessarily large index structures and oftentimes poor performance overall.

The way to properly tackle time series data storage with MongoDB in the past was to apply a schema design trick called the bucket pattern. The main idea behind this pattern is to store several measurements which logically belong together - e.g. data from one specific sensor over a certain period of time - in a single document which contains a bucket holding multiple of these measurements. Since it's impractical to grow one document and its bucket indefinitely, the application layer sees to it that a new document is started based on certain thresholds and rules, which depend on the granularity of time and the ingestion frequency / interval of the sensor data. To give a concrete example, there could be a single document and its bucket which stores all measurements happening every second during one specific hour of the day. This single document would then contain up to 3600 measurements ingested at a 1 second interval during that particular hour, before a new document would be created to store all the measurements of the same sensor for the next hour of the day.
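
To make this more tangible, here is a minimal sketch of what such manual bucketing could look like on the write path. The collection name, field names and thresholds are made up purely for illustration:

// Hypothetical manual bucket pattern: append a reading to the current
// hourly bucket of a sensor, or start a new bucket document (upsert)
// if no bucket with free capacity exists yet.
db.sensorbuckets.updateOne(
  {
    sensorId: 31096,
    bucketStart: ISODate("2021-07-10T00:00:00Z"),  // hour this bucket covers
    count: { $lt: 3600 }                           // capacity threshold
  },
  {
    $push: { measurements: { ts: ISODate("2021-07-10T00:00:03Z"), value: 32.5398 } },
    $inc: { count: 1 }
  },
  { upsert: true }
)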

While this approach can work reasonably well, it requires upfront thought regarding schema design and, in addition, places a higher burden on developers. They have to implement, tweak and tune the bucketing logic for such time series ingestion scenarios in the application layer. Also, certain types of queries involve more effort when targeting collections that contain documents structured according to the bucket pattern, because the particular bucketing strategy has to be known and considered accordingly.
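
As a rough sketch of that extra read-side effort, and again based on the hypothetical schema from the sketch above, individual measurements typically have to be unwound from their buckets in an aggregation before they can be consumed as single readings:

// Hypothetical read path for the manual bucket pattern: the query has to
// know about the bucket layout and unwind it to get back single readings.
db.sensorbuckets.aggregate([
  { $match: { sensorId: 31096, bucketStart: ISODate("2021-07-10T00:00:00Z") } },
  { $unwind: "$measurements" },
  { $replaceRoot: { newRoot: { $mergeObjects: [ { sensorId: "$sensorId" }, "$measurements" ] } } }
])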

The Present

Fast forward to MongoDB release 5.0, which now brings "native" support for time series collections. The promise is that developers no longer need to agonize over schema design tricks such as the bucket pattern. Instead, they can simply insert and query their time series data directly, without any further considerations on the application layer. But how exactly does this work, and what does it look like behind the scenes from a document storage perspective?

The following explorations are based on raw sensor measurements. Each measurement document contains 3 fields and looks as follows:

{ ts: 2021-07-10T00:00:03.000Z,
  metadata: { sensorId: 31096, type: 'windspeed' },
  value: 32.53987084180961 }
  • ts represents the timestamp of sensor data
  • metadata stores which sensor and type of data we are dealing with
  • value holds the actual sensor reading, a windspeed value in this case

Note that in general, you can have much more complex measurement documents containing more payload fields with varying data types and nested elements, too. It's kept simple here on purpose.

Step 1: Creating a time series collection

The command to create this new time series collection type is as follows:

db.createCollection("windsensors", { timeseries: { timeField: "ts", metaField: "metadata", granularity: "seconds" } } )

Besides the collection name, we specify time series related settings. The only obligatory option is "timeField", the name of the field which holds the timestamp of the measurements, "ts" in this case. The optional "metaField" names the field containing the descriptive labels for the sensor data, and "granularity" (hours, minutes or seconds, the latter being the default) reflects the expected ingestion interval of the sensor readings in question.
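
In case you want to double-check what was configured, the stored options can be inspected afterwards. MongoDB 5.0 also accepts an optional expireAfterSeconds setting at creation time for automatic data expiry; the collection name and TTL value below are arbitrary examples:

// Inspect the options MongoDB stored for the time series collection.
db.getCollectionInfos({ name: "windsensors" })

// Optionally, documents can be expired automatically after a given time,
// e.g. roughly 30 days here (value chosen arbitrarily for illustration):
db.createCollection("windsensors_ttl", {
  timeseries: { timeField: "ts", metaField: "metadata", granularity: "seconds" },
  expireAfterSeconds: 2592000
})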

Step 2: Insert sample documents

With our empty time series collection in place, let’s ingest the following 10 sample documents, originating from 4 different sensors:

db.windsensors.insertMany([
   {"metadata":{"sensorId":52396,"type":"windspeed"},"ts":ISODate("2021-07-10T00:00:02Z"),"value":18.263742590570686},
   {"metadata":{"sensorId":31096,"type":"windspeed"},"ts":ISODate("2021-07-10T00:00:03Z"),"value":32.53987084180961},
   {"metadata":{"sensorId":52396,"type":"windspeed"},"ts":ISODate("2021-07-10T00:00:03Z"),"value":18.106480571706808},
   {"metadata":{"sensorId":62088,"type":"windspeed"},"ts":ISODate("2021-07-10T00:00:04Z"),"value":20.306831899199864},
   {"metadata":{"sensorId":31096,"type":"windspeed"},"ts":ISODate("2021-07-10T00:00:04Z"),"value":0.6909954039798452},
   {"metadata":{"sensorId":62088,"type":"windspeed"},"ts":ISODate("2021-07-10T00:00:06Z"),"value":0.031065898581725086},
   {"metadata":{"sensorId":27470,"type":"windspeed"},"ts":ISODate("2021-07-10T00:00:07Z"),"value":6.878726412679837},
   {"metadata":{"sensorId":31096,"type":"windspeed"},"ts":ISODate("2021-07-10T00:00:07Z"),"value":3.9089926192773534},
   {"metadata":{"sensorId":52396,"type":"windspeed"},"ts":ISODate("2021-07-10T00:00:07Z"),"value":28.03679268099916},
   {"metadata":{"sensorId":52396,"type":"windspeed"},"ts":ISODate("2021-07-10T00:00:07Z"),"value":1.0575968433736358}
])

Step 3: Run simple find query against time series collection

db.windsensors.find()

The result set shows that all 10 documents are returned separately, which might be surprising at first sight, because this pretty much resembles what we would expect from a "normal" collection, i.e. without any kind of time series optimized storage.

{ ts: 2021-07-10T00:00:02.000Z,
  metadata: { sensorId: 52396, type: 'windspeed' },
  _id: ObjectId("60f3350afbb696c9ace09a19"),
  value: 18.263742590570686 }
{ ts: 2021-07-10T00:00:03.000Z,
  metadata: { sensorId: 52396, type: 'windspeed' },
  _id: ObjectId("60f3350afbb696c9ace09a1b"),
  value: 18.106480571706808 }
{ ts: 2021-07-10T00:00:07.000Z,
  metadata: { sensorId: 52396, type: 'windspeed' },
  _id: ObjectId("60f3350afbb696c9ace09a21"),
  value: 28.03679268099916 }
{ ts: 2021-07-10T00:00:07.000Z,
  metadata: { sensorId: 52396, type: 'windspeed' },
  _id: ObjectId("60f3350afbb696c9ace09a22"),
  value: 1.0575968433736358 }
{ ts: 2021-07-10T00:00:03.000Z,
  metadata: { sensorId: 31096, type: 'windspeed' },
  _id: ObjectId("60f3350afbb696c9ace09a1a"),
  value: 32.53987084180961 }
{ ts: 2021-07-10T00:00:04.000Z,
  metadata: { sensorId: 31096, type: 'windspeed' },
  _id: ObjectId("60f3350afbb696c9ace09a1d"),
  value: 0.6909954039798452 }
{ ts: 2021-07-10T00:00:07.000Z,
  metadata: { sensorId: 31096, type: 'windspeed' },
  _id: ObjectId("60f3350afbb696c9ace09a20"),
  value: 3.9089926192773534 }
{ ts: 2021-07-10T00:00:04.000Z,
  metadata: { sensorId: 62088, type: 'windspeed' },
  _id: ObjectId("60f3350afbb696c9ace09a1c"),
  value: 20.306831899199864 }
{ ts: 2021-07-10T00:00:06.000Z,
  metadata: { sensorId: 62088, type: 'windspeed' },
  _id: ObjectId("60f3350afbb696c9ace09a1e"),
  value: 0.031065898581725086 }
{ ts: 2021-07-10T00:00:07.000Z,
  metadata: { sensorId: 27470, type: 'windspeed' },
  _id: ObjectId("60f3350afbb696c9ace09a1f"),
  value: 6.878726412679837 }
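
Of course, more typical queries simply target the original field names, e.g. filtering one sensor within a time window, without knowing anything about buckets. The following is just an illustrative example (output omitted):

// Fetch all readings of one sensor within a given time window,
// sorted by timestamp - no bucket knowledge required.
db.windsensors.find({
  "metadata.sensorId": 52396,
  ts: { $gte: ISODate("2021-07-10T00:00:00Z"), $lt: ISODate("2021-07-10T01:00:00Z") }
}).sort({ ts: 1 })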

In fact, when we refer to windsensors in our query, we are working with a logical abstraction which is officially described as a "writable, non-materialized view". We can verify this by inspecting the currently existing views. Running

db.getCollection('system.views').find()

shows

{ _id: 'mytsdemo.windsensors',
  viewOn: 'system.buckets.windsensors',
  pipeline: 
   [ { '$_internalUnpackBucket': 
        { timeField: 'ts',
          metaField: 'metadata',
          bucketMaxSpanSeconds: 3600,
          exclude: [] } } ] }

The view definition informs us that it is based on a collection called system.buckets.windsensors. Unlike "normal", user-created views, the pipeline field for this special view shows an internal stage called $_internalUnpackBucket together with the time series related config settings which were used during the creation of the respective collection. Worth noting is the bucketMaxSpanSeconds field, which is 3600 here. It is a value in seconds and depends on the granularity chosen at creation time. For this example it means that a bucket spans at most 3600 seconds, i.e. 1 hour. The takeaway from this is that the actual storage-optimized time series data can be found in a separate, "internal" collection specified in the viewOn field of the logical view abstraction.
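
As a side note, the bucket span appears to be derived directly from the chosen granularity. The mapping below reflects the values documented for MongoDB 5.0 at the time of writing, so treat it as a snapshot rather than a guarantee:

// Granularity chosen at creation time vs. resulting bucketMaxSpanSeconds
// (values as documented for MongoDB 5.0 at the time of writing):
const bucketMaxSpanSecondsByGranularity = {
  seconds: 3600,    // 1 hour
  minutes: 86400,   // 1 day
  hours: 2592000    // 30 days
};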

Step 4: Run simple find query against the native underlying collection

Even though there usually shouldn't be a need to directly access the storage-optimized version of the time series data, let's do it anyway to learn what happens behind the scenes. The following query retrieves just one document from this underlying collection:

db.getCollection('system.buckets.windsensors').findOne()

We get back a single bucket document like this:

{ _id: ObjectId("60e8e30043c83ccb1994f6d5"),
  control: 
   { version: 1,
     min: 
      { ts: 2021-07-10T00:00:00.000Z,
        value: 1.0575968433736358,
        _id: ObjectId("60f3350afbb696c9ace09a19") },
     max: 
      { ts: 2021-07-10T00:00:07.000Z,
        value: 28.03679268099916,
        _id: ObjectId("60f3350afbb696c9ace09a22") } },
  meta: { sensorId: 52396, type: 'windspeed' },
  data: 
   { _id: 
      { '0': ObjectId("60f3350afbb696c9ace09a19"),
        '1': ObjectId("60f3350afbb696c9ace09a1b"),
        '2': ObjectId("60f3350afbb696c9ace09a21"),
        '3': ObjectId("60f3350afbb696c9ace09a22") },
     value: 
      { '0': 18.263742590570686,
        '1': 18.106480571706808,
        '2': 28.03679268099916,
        '3': 1.0575968433736358 },
     ts: 
      { '0': 2021-07-10T00:00:02.000Z,
        '1': 2021-07-10T00:00:03.000Z,
        '2': 2021-07-10T00:00:07.000Z,
        '3': 2021-07-10T00:00:07.000Z } } }

Let’s inspect the document structure by taking a closer look at a subset of the contained fields:

  • control.min holds the bucket's lower bound timestamp (which depends on the chosen granularity), the lowest value measured in this bucket so far, and the ObjectId referring to the first entry stored in this document's bucket.

  • control.max holds the most recent timestamp stored in this bucket, the highest value measured in this bucket so far, and the ObjectId referring to the last entry stored in this document's bucket.

Obviously, the contained data for both control.min and control.max is updated on-the-fly as new sensor readings are ingested into this document and its bucket. In general, those two sub-documents store the min and max values for each field contained in the original measurement's payload. In our case this is only the value field with a single windspeed reading.

  • data is a complex object that holds all the information of every sensor data payload ingested so far. There are just 3 fields in this particular example: the document identifier (_id), the timestamp (ts) and the sensor reading (value). If there were more fields in the original measurement document besides value, they would all be stored here in a similar fashion. The individual measurements are addressed by a field name given by the bucket index, i.e. 0 .. N for every single measurement. In general, the data field holds one such sub-document for every payload field of the original measurement document.

Based on this single document it is possible to reconstruct every original measurement document which was ever ingested into this bucket, simply by combining the meta field with every 3-tuple { _id.0, ts.0, value.0 } … { _id.N, ts.N, value.N } taken from the data field. In general, this would be an N-tuple, since the data field holds sub-documents for all payload fields of the original measurement document. One concrete example for our sample data, resulting in the first original measurement document which was stored in this bucket, is:

{ _id: ObjectId("60f3350afbb696c9ace09a19"),
  ts: 2021-07-10T00:00:02.000Z,
  value: 18.263742590570686,
  metadata: { sensorId: 52396, type: 'windspeed' } }
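
For illustration only, a minimal shell sketch of this reconstruction logic could look like the following. It is purely didactic, since the view's $_internalUnpackBucket stage does exactly this for us anyway:

// Re-assemble the original measurement documents from one bucket document.
// Purely illustrative - the view's unpack stage does this automatically.
function unpackBucket(bucket) {
  return Object.keys(bucket.data._id).map(function (i) {
    return {
      _id: bucket.data._id[i],
      ts: bucket.data.ts[i],
      value: bucket.data.value[i],
      metadata: bucket.meta
    };
  });
}

unpackBucket(db.getCollection('system.buckets.windsensors').findOne())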

If we inspect the other 3 documents in the underlying storage-optimized collection, they all look very similar. The only structural difference is that each bucket currently holds a different number of entries, which is exactly as it should be, because the 10 original documents originated from 4 different sensors, each having had a varying number of readings ingested up to that point.

{ _id: ObjectId("60e8e30043c83ccb1994f6d6"),
  control: 
   { version: 1,
     min: 
      { ts: 2021-07-10T00:00:00.000Z,
        value: 0.6909954039798452,
        _id: ObjectId("60f3350afbb696c9ace09a1a") },
     max: 
      { ts: 2021-07-10T00:00:07.000Z,
        value: 32.53987084180961,
        _id: ObjectId("60f3350afbb696c9ace09a20") } },
  meta: { sensorId: 31096, type: 'windspeed' },
  data: 
   { _id: 
      { '0': ObjectId("60f3350afbb696c9ace09a1a"),
        '1': ObjectId("60f3350afbb696c9ace09a1d"),
        '2': ObjectId("60f3350afbb696c9ace09a20") },
     value: 
      { '0': 32.53987084180961,
        '1': 0.6909954039798452,
        '2': 3.9089926192773534 },
     ts: 
      { '0': 2021-07-10T00:00:03.000Z,
        '1': 2021-07-10T00:00:04.000Z,
        '2': 2021-07-10T00:00:07.000Z } } }
{ _id: ObjectId("60e8e30043c83ccb1994f6d7"),
  control: 
   { version: 1,
     min: 
      { ts: 2021-07-10T00:00:00.000Z,
        value: 0.031065898581725086,
        _id: ObjectId("60f3350afbb696c9ace09a1c") },
     max: 
      { ts: 2021-07-10T00:00:06.000Z,
        value: 20.306831899199864,
        _id: ObjectId("60f3350afbb696c9ace09a1e") } },
  meta: { sensorId: 62088, type: 'windspeed' },
  data: 
   { _id: 
      { '0': ObjectId("60f3350afbb696c9ace09a1c"),
        '1': ObjectId("60f3350afbb696c9ace09a1e") },
     value: { '0': 20.306831899199864, '1': 0.031065898581725086 },
     ts: { '0': 2021-07-10T00:00:04.000Z, '1': 2021-07-10T00:00:06.000Z } } }
{ _id: ObjectId("60e8e30043c83ccb1994f6d8"),
  control: 
   { version: 1,
     min: 
      { ts: 2021-07-10T00:00:00.000Z,
        value: 6.878726412679837,
        _id: ObjectId("60f3350afbb696c9ace09a1f") },
     max: 
      { ts: 2021-07-10T00:00:07.000Z,
        value: 6.878726412679837,
        _id: ObjectId("60f3350afbb696c9ace09a1f") } },
  meta: { sensorId: 27470, type: 'windspeed' },
  data: 
   { _id: { '0': ObjectId("60f3350afbb696c9ace09a1f") },
     value: { '0': 6.878726412679837 },
     ts: { '0': 2021-07-10T00:00:07.000Z } } }
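
As a side note, a quick way to see how many entries each bucket currently holds, without dumping entire documents, is to count the keys of data._id directly on the bucket collection. A small helper aggregation along these lines (output omitted):

// Count how many measurements each bucket document currently holds.
db.getCollection('system.buckets.windsensors').aggregate([
  { $project: {
      meta: 1,
      numEntries: { $size: { $objectToArray: "$data._id" } }
  } }
])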

Step 5: Playing with bucket growth and bucket limits

The bucket document for meta: { sensorId: 52396, type: 'windspeed' } currently holds 4 sensor readings. The question is: how many more measurements can we ingest into this bucket? Obviously, buckets cannot grow indefinitely, so there has to be an upper bound.

Earlier, we inspected the view definition of the logical abstraction and briefly mentioned the bucketMaxSpanSeconds setting. When choosing a granularity of seconds during the creation of a time series collection, the value for bucketMaxSpanSeconds is 3600. In other words, buckets like this can span 1 hour worth of data. If we were to ingest one measurement per second, we might assume that such a bucket can store up to 3600 sensor readings, one every second. However, when trying this we see a different behaviour. It seems that there is some kind of fixed upper bound of 1000 entries per bucket in a time series collection. The example document below shows a "full bucket" for sensorId 52396 with its first and last bucket entries, omitting the rest of the data for brevity.

{ _id: ObjectId("60e8e30043c83ccb19952b3f"),
  control: 
   { version: 1,
     min: 
      { _id: ObjectId("60f2ef27f161e04419383dbb"),
        ts: 2021-07-10T00:00:00.000Z,
        value: 0.13698301036640723 },
     max: 
      { _id: ObjectId("60f2ef3bf161e04419477ec3"),
        ts: 2021-07-10T00:39:23.000Z,
        value: 37.4522108368789 } },
  meta: { sensorId: 52396, type: 'windspeed' },
  data: 
   { value: 
      { '0': 18.106480571706808,

      ...

        '999': 3.6329149494110027 },
     ts: 
      { '0': 2021-07-10T00:00:03.000Z,

      ...

        '999': 2021-07-10T00:39:23.000Z },
     _id: 
      { '0': ObjectId("60f2ef36f161e0441944f2df"),

      ...

      '999': ObjectId("60f2ef2ff161e044193f73ff") 
      } 
   }
}
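
For anyone who wants to reproduce this observation, a simple brute-force ingestion sketch like the following is enough. The sensorId is made up, and the expected bucket count is based on the 1000-entry limit observed above:

// Ingest 3600 readings, one second apart, for a single (made-up) sensor
// and afterwards check how many bucket documents were created for it.
// With the observed limit of 1000 entries per bucket, the 3600 readings
// should end up spread across 4 buckets.
const start = ISODate("2021-07-11T00:00:00Z");
const docs = [];
for (let i = 0; i < 3600; i++) {
  docs.push({
    metadata: { sensorId: 99999, type: "windspeed" },
    ts: new Date(start.getTime() + i * 1000),
    value: Math.random() * 40
  });
}
db.windsensors.insertMany(docs);
db.getCollection('system.buckets.windsensors').countDocuments({ "meta.sensorId": 99999 });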

I haven’t found any indication in the current official documentation about this "magic constant" of limiting buckets to 1000 entries. In this case, it cannot be related e.g. to the document size limit, because storing 1000 entries with this sample data doesn’t come anywhere close to a hard document size limit. Maybe the source code would reveal more about this, but so far I haven’t taken the time to study the implementation itself.

We can also see from the control.min and control.max timestamps that this particular bucket's span is "only" 2363 seconds, which is less than the maximum possible value of 3600. This is because the bucket hit its 1000-entry limit before the maximum span could be reached. In general, a bucket is closed and a new document created if either its bucketMaxSpanSeconds is reached or its maximum number of entries (currently 1000) is exceeded, whichever happens first.
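
Put differently, a rough mental model of this closing rule, explicitly not the actual server logic, could be sketched like this:

// Rough mental model only - NOT the actual server implementation:
// a new bucket is started for a given metadata value once the next
// measurement would either exceed the bucket's maximum time span or
// its maximum number of entries.
function bucketIsFull(bucket, nextTs, maxSpanSeconds = 3600, maxEntries = 1000) {
  const spanSeconds = (nextTs - bucket.control.min.ts) / 1000;
  const numEntries = Object.keys(bucket.data._id).length;
  return spanSeconds >= maxSpanSeconds || numEntries >= maxEntries;
}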

Another learning based on these observations explains the recommendation found in the official docs, namely that the chosen granularity setting should match the actual data ingestion rate as closely as possible. In our example of a time series collection with "seconds" granularity, the bucket span is 1 hour (3600 sec). If, however, we only ingested 2 - 3 values per hour, we would get many new documents in the underlying bucket collection with very small buckets of only 2 - 3 entries each. Clearly, this would impact performance negatively and largely defeat the whole storage optimization mechanism of time series collections. So choose the granularity of your time series collections wisely.
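
So if a sensor only reports a handful of values per hour, it is better to pick a coarser granularity right away when creating the collection, for example:

// For slowly reporting sensors, a coarser granularity leads to
// better-filled buckets spanning a longer time range.
db.createCollection("slowsensors", {
  timeseries: { timeField: "ts", metaField: "metadata", granularity: "hours" }
})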

Conclusion and Outlook

MongoDB 5.0 introduced a new, natively optimized collection type for storing time series data. It makes developers' lives easier because working with time series collections is a whole lot more convenient than in the past, when it was necessary to explicitly implement the bucket pattern. I hope this article contributed a bit to your understanding of what exactly happens behind the scenes of time series collections from a document storage perspective, and of the underlying schema which implicitly reflects the ideas behind the bucket pattern. The most important thing to keep in mind is to take my observations with a grain of salt, because this was my first quick exploration of this new MongoDB 5.0 feature.

I plan to write more parts in this series. The 2nd article will discuss different kinds of aggregation queries over time series collections, focusing on the newly introduced window functions.

Stay tuned!

Image Credits:
(c) lukechesser @ Unsplash - https://unsplash.com/photos/JKUTrJ4vK00

Top comments (5)

Aliaksandr Valialkin

It would be great comparing MongoDB performance and resource utilization (ram, cpu, disk io, disk space usage) for both time series data ingestion and querying to other solutions such as TimescaleDB, InfluxDB and VictoriaMetrics.

Hans-Peter Grahsl

Absolutely! The thing is, doing serious performance measurements is very very difficult. Ideally you know all compared technologies equally well and even then it's hard to produce something which is reasonable and representative. But I'd be happy to see such a benchmark being done by someone - you? ;-)

Alex Bevilacqua

I haven’t found any indication in the current official documentation about this "magic constant" of limiting buckets to 1000 entries.

The limits look to have been set in SERVER-52526. For 5.0 it appears the bucket limits are either 1000 measurements or 125KB (see timeseries.idl)

Hans-Peter Grahsl

Thanks Alex for the links. Really appreciated. I'd think it would be helpful to state the current bucket limits explicitly in the docs, too.

hunghoang-ct

Thanks for sharing. Recently I faced this issue:
{"t":{"$date":"2021-10-30T16:40:21.605+07:00"},"s":"F", "c":"-", "id":23081, "ctx":"conn54","msg":"Invariant failure","attr":{"expr":"batch->getResult().getStatus() == ErrorCodes::TimeseriesBucketCleared","msg":"Got unexpected error (DuplicateKey{ keyPattern: { _id: 1 }, keyValue: { _id: ObjectId('617c60d0980af88b77a3179b') } }: E11000 duplicate key error collection: adstats.system.buckets.AdStatsViewTimeSeries dup key: { _id: ObjectId('617c60d0980af88b77a3179b') }) preparing time-series bucket to be committed for adstats.AdStatsViewTimeSeries: { insert: \"AdStatsViewTimeSeries\", bypassDocumentValidation: false, ordered: true, documents: [ { _id: ObjectId('617d130132f5ffc893af4243'), metadata: { listID: 89121562 }, timestamp: new Date(1635586800000), count: 1 }

I don't know what causes this issue or how to fix it. Could you lend me some help?