We are deploying multiple Bitnami MongoDB charts via Helm on an AKS cluster running Kubernetes 1.19.7 on 4 nodes (DS2_v2).
Each MongoDB instance is installed separately via `helm install`; however, completely at random, one of them fails and times out, reporting:
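For reference, each instance is installed with a command along these lines (the release name, namespace, and values file below are placeholders, not our actual names):

```shell
# Illustrative sketch only: release name, namespace, and values file
# are placeholders standing in for our per-database install.
helm install xxxxxx-api-db bitnami/mongodb \
  --namespace tp-xxxxxx \
  --values values.yaml \
  --wait
```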
Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[datadir-db-token-p55m7]: timed out waiting for the condition
This is totally random and has been occurring since last week. As the StorageClass we use the AKS default, based on the Azure disk provisioner. We have tried changing the volume binding mode from WaitForFirstConsumer to Immediate, without success.
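For clarity, this is roughly what the modified StorageClass looked like (a minimal sketch assuming the default AKS azure-disk provisioner; the name and disk SKU are illustrative):

```yaml
# Hypothetical copy of the default AKS StorageClass with the binding
# mode switched to Immediate -- the change we tried without success.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default-immediate          # illustrative name
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed
  storageaccounttype: StandardSSD_LRS   # assumed SKU, not verified
reclaimPolicy: Delete
volumeBindingMode: Immediate       # was WaitForFirstConsumer
```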
The full list of events is:
- Successfully assigned tp-xxxxxx/xxxxxx-api-db-0 to aks-default-26759343-vmss000003
- AttachVolume.Attach succeeded for volume "pvc-adf7e23f-2264-4eb5-bc29-9e1707a3f5a9"
- Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[datadir xxxxx-api-db-token-p55m7]: timed out waiting for the condition
- MountVolume.WaitForAttach failed for volume "pvc-adf7e23f-2264-4eb5-bc29-9e1707a3f5a9" : timed out waiting for the condition
- Unable to attach or mount volumes: unmounted volumes=[datadir], unattached volumes=[xxxx-api-db-token-p55m7 datadir]: timed out waiting for the condition
This leads `helm install` to reach its timeout, and the installation fails.
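The events above were collected with commands along these lines (pod and namespace names are the placeholders used throughout this question):

```shell
# Sketch of how we gathered the events shown above; names are placeholders.
kubectl describe pod xxxxxx-api-db-0 --namespace tp-xxxxxx
kubectl get events --namespace tp-xxxxxx --sort-by=.metadata.creationTimestamp
```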