
I have a self-hosted MongoDB deployment on an AWS EKS cluster, version 1.24.

Every time I put some workload on the cluster, the MongoDB shards eat most of the node's RAM. I'm running on t3.medium instances, and every shard uses ~2GB. Since there are multiple shards on each node, this fills the memory and the node becomes unavailable.

I've tried limiting the WiredTiger cache size to 0.25GB, but it doesn't seem to work.
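For context, a back-of-the-envelope check: by default WiredTiger's cache is the larger of 50% of (RAM − 1 GiB) and 256 MiB, so on a 4 GiB t3.medium each mongod defaults to roughly 1.5 GiB of cache, which (plus other per-process overhead) lines up with the ~2GB I'm seeing — as if my 0.25GB limit were never applied:

```shell
# Default WiredTiger cache = max(50% of (RAM - 1 GiB), 256 MiB).
# On a t3.medium (4 GiB RAM) that works out to 1.5 GiB per mongod:
awk 'BEGIN { ram = 4; cache = 0.5 * (ram - 1); if (cache < 0.25) cache = 0.25; print cache " GiB" }'
```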

I've also tried manually clearing the plan cache with `db.collection.getPlanCache().clear()`, but it does nothing.

`db.collection.getPlanCache().list()` returns an empty array.

I've also tried checking the storage engine, but both `db.serverStatus().wiredTiger` and `db.serverStatus().storageEngine` are undefined in mongosh.

I'm using the bitnami mongodb-sharded chart, with the current values:

mongodb-sharded:
  shards: 8
  shardsvr:
    persistence:
      resourcePolicy: "keep"
      enabled: true
      size: 100Gi
  configsvr:
    replicaCount: 2
  mongos:
    replicaCount: 2
    configCM: mongos-configmap

The mongos ConfigMap is this one:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongos-configmap
data:
  mongo.conf: |
    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 0.25
      inMemory:
        engineConfig:
          inMemorySizeGB: 0.25
Stichiboi
  • Why do you have multiple shards on a single node? The main purpose of sharding is to distribute the load over multiple machines, i.e. just the opposite. Parameter `storage.inMemory.engineConfig.inMemorySizeGB` is for the [In-Memory Storage Engine](https://www.mongodb.com/docs/manual/core/inmemory/); actually I am surprised that you don't get an error at startup when you set both `storage.inMemory` and `storage.wiredTiger` parameters. Apart from that, `mongos` does not store any data at all, so `storage` parameters are ignored completely. – Wernfried Domscheit Feb 03 '23 at 07:05
  • I'm surprised I don't get the error as well. That's why I think there is an issue loading the config file. Regarding the `mongos`, I've also tried putting the same config file on the `shardsrv` and `configsrv`, still not working. – Stichiboi Feb 03 '23 at 07:07
  • Loaded configuration you can check with `db.serverCmdLineOpts()` – Wernfried Domscheit Feb 03 '23 at 07:09
  • Ummm interesting: it's loading the `mongos.conf`, whereas if I shell in the shard, I see a second file called `mongo.conf` (without the `s`) that has my configuration. So I guess it's not loading the correct one – Stichiboi Feb 03 '23 at 07:13

1 Answer


Solved the various issues:

  1. I had a typo in the ConfigMap -> `mongo.conf` instead of `mongos.conf`. This meant it was creating a different, unused config file.
  2. mongos instances are not the ones with the storage engine: that lives on the mongod processes (the shards). So the config should go in `shardsvr.dataNode.configCM`.
  3. Setting a custom config overwrites the default one deployed by Bitnami -> you would need to copy it all and then modify what you need. A much better option is to just add flags at `shardsvr.dataNode.mongodbExtraFlags`.

In my case, this is how I set up the `values.yaml`:

  shardsvr:
    dataNode:
      mongodbExtraFlags:
        - "--wiredTigerCacheSizeGB .3"

Another note: the reason `db.serverStatus().storageEngine` and `db.serverStatus().wiredTiger` were undefined is that I was running `mongosh` from MongoDB Compass, which actually connects to the mongos (which does not have a storage engine).

If instead you shell into one of the shards and run `mongosh` (in my case it's at `/opt/bitnami/mongodb/bin/`), the commands work properly.
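As a sketch of that verification (the pod name here is an assumption — use whatever your release names the shard data pods), you can check the applied cache limit and which config/flags were actually loaded:

```shell
# Check the configured WiredTiger cache limit on a shard data pod.
# "mongodb-sharded-shard0-data-0" is a hypothetical pod name for this chart.
kubectl exec -it mongodb-sharded-shard0-data-0 -- \
  /opt/bitnami/mongodb/bin/mongosh --eval \
  'db.serverStatus().wiredTiger.cache["maximum bytes configured"]'

# db.serverCmdLineOpts().parsed shows the config file and flags mongod started with:
kubectl exec -it mongodb-sharded-shard0-data-0 -- \
  /opt/bitnami/mongodb/bin/mongosh --eval 'db.serverCmdLineOpts().parsed'
```

With `--wiredTigerCacheSizeGB .3`, the first command should report a value around 0.3 GiB in bytes.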

Stichiboi