
I am using MongoDB 3.6.8.

I have a collection (called states) with an ObjectId field (sensor_id), a date field (date), and a few other fields. I created a compound index on the collection:

db.states.createIndex({ "sensor_id" : 1, "date" : 1 });

I am using the aggregation framework with a $match stage, for example:

{ 
  "$match" : {
    "sensor_id" : { "$oid" : "5b8fd62c4f0cd13c05296df7"},
    "date" : { 
        "$gte" : { "$date" : "2018-10-06T04:19:00.000Z"}, 
         "$lt" : { "$date" : "2018-10-06T10:19:09.000Z"}
    }
  }
}

My problem: as the states collection grows, the aggregation pipeline gets slower and slower, even when the newly added documents fall outside the date range in the match filter. With this index, I expected performance to stay roughly constant as the collection grows.
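
One way to verify the index is actually used (a sketch in the mongo shell, using the same values as the example above) is to run the pipeline with the explain option and look for an IXSCAN on { sensor_id: 1, date: 1 } rather than a COLLSCAN in the queryPlanner output:

// Run the $match stage with the explain option and inspect the
// queryPlanner section of the output: an IXSCAN on the compound
// index means the stage is using it as intended.
db.states.aggregate([
  {
    "$match" : {
      "sensor_id" : ObjectId("5b8fd62c4f0cd13c05296df7"),
      "date" : {
          "$gte" : ISODate("2018-10-06T04:19:00.000Z"),
          "$lt" : ISODate("2018-10-06T10:19:09.000Z")
      }
    }
  }
], { explain: true });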

Other info:

  • The states collection does not have very many documents (about 200,000), of which about 20,000 are matched by the above filter.
  • The indexes in the collection (and other collections) are just a few megabytes and easily fit in memory.
  • The aggregation pipeline does not insert any documents into the database.

Can anyone suggest what I should investigate to explain this drastic drop in performance as the collection grows (with the new documents falling outside the date range in the $match)?

Thank you.

1 Answer


Solved. Another stage in the pipeline was processing ever more data as time went on. There was nothing wrong with the indexes.
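
For illustration (a hypothetical sketch, not the actual pipeline from the question): a stage after the $match that is not bounded by the same date range, such as a $lookup into a growing collection, will touch more and more documents as data accumulates, even though the $match itself stays selective and indexed:

// Hypothetical pipeline: the indexed $match stays cheap, but the
// $lookup below joins against an ever-growing collection ("events"
// is a made-up name) with no date bound, so the pipeline slows
// down as that collection grows.
db.states.aggregate([
  {
    "$match" : {
      "sensor_id" : ObjectId("5b8fd62c4f0cd13c05296df7"),
      "date" : {
          "$gte" : ISODate("2018-10-06T04:19:00.000Z"),
          "$lt" : ISODate("2018-10-06T10:19:09.000Z")
      }
    }
  },
  {
    "$lookup" : {
      "from" : "events",
      "localField" : "sensor_id",
      "foreignField" : "sensor_id",
      "as" : "events"
    }
  }
]);

Restricting such a later stage to the same date window (for example, with the pipeline form of $lookup available since MongoDB 3.6) keeps its cost roughly constant as the collection grows.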