
I tried to follow this guide about running MongoDB on Kubernetes with a persistent volume (I followed it exactly),

but after I deployed everything and inspected the pod, I got these errors:

Name:           mongodb-standalone-0
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=database
                controller-revision-hash=mongodb-standalone-7688499856
                selector=mongodb-standalone
                statefulset.kubernetes.io/pod-name=mongodb-standalone-0
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/mongodb-standalone
Containers:
  mongodb-standalone:
    Image:      mongo:4.0.8
    Port:       <none>
    Host Port:  <none>
    Environment:
      MONGO_INITDB_ROOT_USERNAME_FILE:  /etc/k8-training/admin/MONGO_ROOT_USERNAME
      MONGO_INITDB_ROOT_PASSWORD_FILE:  /etc/k8-training/admin/MONGO_ROOT_PASSWORD
    Mounts:
      /config from mongodb-conf (ro)
      /data/db from mongodb-data (rw)
      /docker-entrypoint-initdb.d from mongodb-scripts (ro)
      /etc/k8-training from k8-training (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xbl5z (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  k8-training:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  k8-training
    Optional:    false
  mongodb-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mongodb-standalone
    Optional:  false
  mongodb-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mongodb-standalone
    Optional:  false
  mongodb-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongodb-standalone
    ReadOnly:   false
  default-token-xbl5z:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xbl5z
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/hostname=mongodb-node
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't match node selector.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't match node selector.

Result of kubectl get nodes --show-labels:

minikube   Ready    master   43h   v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=minikube,kubernetes.io/os=linux,node-role.kubernetes.io/master=

I've tried to find a way to debug this issue, but got nowhere.

I can successfully run MongoDB with my apps in Kubernetes, but I want a persistent volume for my data, and so far I haven't found the right approach to make it work. I appreciate any help, thank you.


UPDATE

I changed the nodeSelector as suggested, but the pod still fails to schedule (now with a different error):

Name:           mongodb-standalone-0
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=database
                controller-revision-hash=mongodb-standalone-74895d955f
                selector=mongodb-standalone
                statefulset.kubernetes.io/pod-name=mongodb-standalone-0
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/mongodb-standalone
Containers:
  mongodb-standalone:
    Image:      mongo:4.0.8
    Port:       <none>
    Host Port:  <none>
    Environment:
      MONGO_INITDB_ROOT_USERNAME_FILE:  /etc/k8-training/admin/MONGO_ROOT_USERNAME
      MONGO_INITDB_ROOT_PASSWORD_FILE:  /etc/k8-training/admin/MONGO_ROOT_PASSWORD
    Mounts:
      /config from mongodb-conf (ro)
      /data/db from mongodb-data (rw)
      /docker-entrypoint-initdb.d from mongodb-scripts (ro)
      /etc/k8-training from k8-training (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xbl5z (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  k8-training:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  k8-training
    Optional:    false
  mongodb-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mongodb-standalone
    Optional:  false
  mongodb-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mongodb-standalone
    Optional:  false
  mongodb-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongodb-standalone
    ReadOnly:   false
  default-token-xbl5z:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xbl5z
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/hostname=minikube
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
– Darryl RN

2 Answers


I guess you are missing the label kubernetes.io/hostname: mongodb-node on your node. You have a few options:

  • You can remove the nodeSelector from your YAML:
      nodeSelector:
        kubernetes.io/hostname: mongodb-node
  • You can label your node with kubectl label node <your_node_name> kubernetes.io/hostname=mongodb-node --overwrite, but I do not recommend this approach.
  • You can change the nodeSelector to a proper value; just check your node's kubernetes.io/hostname label with kubectl get no --show-labels.
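For the third option, a minimal sketch of the relevant part of the StatefulSet pod template, assuming your node's label is kubernetes.io/hostname=minikube as your kubectl get nodes output shows (the surrounding fields are omitted):

```
# Fragment of the StatefulSet spec (names taken from the question).
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: minikube   # must match a label on an existing node
```

After editing, re-apply the manifest and check scheduling again with kubectl describe pod mongodb-standalone-0.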
– FL3SH
  • This is one of the default Kubernetes labels; I don't know what consequences overwriting it might have - it may break your cluster. – FL3SH Feb 09 '20 at 19:00

Change the nodeSelector in your deployment to kubernetes.io/hostname=minikube.

Edit:

In your PersistentVolume you have a nodeAffinity section, which you need to modify with the correct value:

nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - minikube
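For context, a complete local PersistentVolume with this nodeAffinity might look like the sketch below. The capacity, storageClassName, and path are assumptions, since the original manifests aren't shown; they must match what your PVC requests and what exists on the node:

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-standalone
spec:
  capacity:
    storage: 1Gi                    # assumed size; must cover the PVC's request
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage   # assumed; must match the PVC's storageClassName
  local:
    path: /mnt/disks/mongodb        # assumed path; the directory must exist on the node
  nodeAffinity:                     # pins the volume to the node that has the data
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube
```

The "didn't find available persistent volumes to bind" error usually means the PV's nodeAffinity, storageClassName, capacity, or accessModes don't match the PVC.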
– Arghya Sadhu