I have a self-hosted MongoDB deployment on an AWS EKS cluster, version 1.24.
Every time I put some workload on the cluster, the MongoDB shards consume most of the node's RAM. I'm running on t3.medium instances (4 GiB of RAM each), and every shard uses ~2GB. Since there are multiple shards on each node, memory fills up and the node becomes unavailable.
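For context, this is roughly how I'm measuring the usage (assumes metrics-server is installed; the label selector is based on the chart's default labels):

    # Node-level memory pressure
    kubectl top nodes

    # Per-pod usage; the shard data pods are the heavy ones
    kubectl top pods -l app.kubernetes.io/name=mongodb-sharded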
I've tried limiting the WiredTiger cache size to 0.25GB, but it doesn't seem to have any effect.
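To verify whether the limit is applied, I've been reading the configured cache ceiling straight from a shard's mongod (the pod name is a placeholder for my release; add auth flags if your deployment needs them):

    # Ask one shard's mongod for its configured WiredTiger cache ceiling, in bytes
    kubectl exec -it my-release-mongodb-sharded-shard0-data-0 -- \
      mongosh --quiet --eval 'db.serverStatus().wiredTiger.cache["maximum bytes configured"]'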
I've also tried manually clearing the cache with db.collection.getPlanCache().clear(), but it does nothing; db.collection.getPlanCache().list() returns an empty array.
I've also tried checking the storage engine, but both db.serverStatus().wiredTiger and db.serverStatus().storageEngine are undefined in the mongo shell.
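For reference, those serverStatus() calls were made through the mongos router, roughly like this (service name assumed from the chart's defaults):

    # Port-forward the mongos service and connect to it
    kubectl port-forward svc/my-release-mongodb-sharded 27017:27017 &
    mongosh mongodb://localhost:27017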
I'm using the Bitnami mongodb-sharded chart, with the following values:
mongodb-sharded:
  shards: 8
  shardsvr:
    persistence:
      resourcePolicy: "keep"
      enabled: true
      size: 100Gi
  configsvr:
    replicaCount: 2
  mongos:
    replicaCount: 2
    configCM: mongos-configmap
The mongos ConfigMap is this one:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongos-configmap
data:
  mongo.conf: |
    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 0.25
      inMemory:
        engineConfig:
          inMemorySizeGB: 0.25
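For completeness, this is roughly how I roll out the ConfigMap and values (mongodb-sharded is a dependency of an umbrella chart, hence the top-level key in values.yaml; release and chart paths are placeholders):

    # Create/update the ConfigMap referenced by mongos.configCM,
    # then upgrade the release with the values above
    kubectl apply -f mongos-configmap.yaml
    helm upgrade --install my-release ./umbrella-chart -f values.yaml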