
We have successfully created the pods, services, and replication controllers according to our project requirements. Now we are planning to set up persistent storage in AWS using Kubernetes. I have created a YAML file that provisions an EBS volume in AWS, and it works as expected: I am able to claim the volume and mount it to my pod (with a single replica only).

The files below apply successfully and the volume is also created, but my Pods stay in Pending state while the volume still shows as "available" in AWS. I cannot find any error logs for this.
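For reference, this is how I have been checking the state (the pod name web2-0 comes from the StatefulSet below):

  kubectl get pods --selector="app=mongodb"   # web2-0 shows 0/1 Pending
  kubectl describe pod web2-0                 # the Events section is where scheduling messages appear
  kubectl get pvc                             # claims created by the volumeClaimTemplates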

StorageClass file:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: mongo-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

StatefulSet file:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web2
spec:
  selector:
    matchLabels:
      app: mongodb
  serviceName: "mongodb"
  replicas: 2
  template:
    metadata:
      labels:
        app: mongodb
      annotations:
         pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - image: mongo
        name: mongodb
        ports:
        - name: web2
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - mountPath: "/opt/couchbase/var"
          name: mypd1
  volumeClaimTemplates:
  - metadata:
      name: mypd1
      annotations:
        volume.alpha.kubernetes.io/storage-class: mongo-ssd
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Kubectl version:

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:23:29Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

1 Answer


I can see you have used hostPort in your container. In this case, if you do not have more than one node in your cluster, one Pod will remain Pending, because it will not fit on any node.

  containers:
  - image: mongo
    name: mongodb
    ports:
    - name: web2
      containerPort: 27017
      hostPort: 27017

This is the error I get when I describe the pending Pod:

  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  27s (x7 over 58s)  default-scheduler  No nodes are available that match all of the predicates: PodFitsHostPorts (1). 

A hostPort in your container is bound on the node itself. Suppose you use hostPort 10733, but another pod on that node is already using that port; your pod then cannot use it, so it stays Pending. The same applies with replicas: 2: if both pods are scheduled onto the same node, they cannot both bind the same hostPort, so one of them cannot start.

So if you use a hostPort, you need to pick a port that you can be sure no one else on the node is using.
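If you do not actually need the port exposed on the node, the simplest alternative is to drop hostPort and keep only containerPort, so the scheduler is free to place both replicas; clients inside the cluster can still reach the pods on port 27017 (for example via the "mongodb" service referenced by serviceName, if you have created it). A minimal sketch of the container section:

  containers:
  - image: mongo
    name: mongodb
    ports:
    - name: web2
      containerPort: 27017   # no hostPort, so nothing is bound on the node itself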

  • Thank you for the response. Output of kubectl get pods --selector="app=mongodb": web2-0 0/1 Pending 0 12h – Raju Jan 27 '18 at 16:28
  • If I understand your reply correctly, with hostPort we should not run more than one replica per node, right? Can I remove it? I want to use hostPort for multiple pods; how can we do that? – Raju Jan 27 '18 at 16:30
  • Thank you, sir, for taking the time to understand my issue so clearly. I ran my file without hostPort and it is running successfully now. May I know how we can use hostPort for multiple containers? – Raju Jan 27 '18 at 16:42
  • If you set a hostPort, that port needs to be free on the node. Suppose pod 1 is scheduled on node 1, but on node 1 your hostPort is not free; in that case your pod will be Pending. You can try a different hostPort. Let me know – Shahriar Jan 27 '18 at 16:48
  • Hi sir, now that I have removed hostPort I am able to run the pods successfully irrespective of zones. But in the AWS console the volume shows an in-use state, and when I describe the pod I see this message: "Warning FailedMount 6m attachdetach AttachVolume.Attach failed for volume "pvc-6a07a6cf-0384-11e8-aaf8-1227c2722234" : Error attaching EBS volume "vol-01f88af8973b5ba8b" to instance "i-0538a658950d85f45": "IncorrectState: vol-01f88af8973b5ba8b is not 'available'.\n\tstatus code: 400, request id: b023f856-5a92-44ef-ad0b-952b62b1362f" – Raju Jan 27 '18 at 17:16
  • May I know if there is any reason why it is showing a message like this? – Raju Jan 27 '18 at 17:16
  • `vol-01f88af8973b5ba8b` is already attached to your node, so it is in the in-use state. If there is no valuable data on it, you can clean up the PVC, the PV, and the volume in AWS (see the sketch after these comments). – Shahriar Jan 27 '18 at 17:20
  • Thank you for your valuable time and for the reply. I understand the problem now. – Raju Jan 27 '18 at 18:34
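A minimal sketch of that cleanup, assuming the data can be discarded; the PVC name is derived from the StatefulSet naming convention (<claimTemplate>-<podName>), and the PV name and EBS volume ID are the ones from the error message quoted above:

  kubectl delete pvc mypd1-web2-0                              # claim created by the volumeClaimTemplates for web2-0
  kubectl delete pv pvc-6a07a6cf-0384-11e8-aaf8-1227c2722234   # the PersistentVolume backing that claim
  aws ec2 detach-volume --volume-id vol-01f88af8973b5ba8b      # only needed if the volume is still attached
  aws ec2 delete-volume --volume-id vol-01f88af8973b5ba8b      # then remove the EBS volume itself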