
In my docker image I have a directory /opt/myapp/etc which contains some files and directories. I want to create a statefulset for my app. In that statefulset I am creating a persistent volume claim and attaching it to /opt/myapp/etc. The statefulset yaml is attached below. Can anyone tell me how to attach the volume to the container in this case?

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
  labels:
    app: myapp
spec:
  serviceName: myapp
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: 10.1.23.5:5000/redis
        name: redis
        ports:
        - containerPort: 6379
          name: redis-port
      - image: 10.1.23.5:5000/myapp:18.1
        name: myapp
        ports:
        - containerPort: 8181
          name: port
        volumeMounts:
        - name: data
          mountPath: /opt/myapp/etc
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 5Gi

Here is the output of `kubectl describe pod`:

Events:
  Type     Reason                  Age              From                     Message
  ----     ------                  ----             ----                     -------
  Warning  FailedScheduling        3m (x4 over 3m)  default-scheduler        pod has unbound PersistentVolumeClaims
  Normal   Scheduled               3m               default-scheduler        Successfully assigned controller-statefulset-0 to dev-k8s-2
  Normal   SuccessfulMountVolume   3m               kubelet, dev-k8s-2       MountVolume.SetUp succeeded for volume "default-token-xpskd"
  Normal   SuccessfulAttachVolume  3m               attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-77d2cef8-a674-11e8-9358-fa163e3294c1"
  Normal   SuccessfulMountVolume   3m               kubelet, dev-k8s-2       MountVolume.SetUp succeeded for volume "pvc-77d2cef8-a674-11e8-9358-fa163e3294c1"
  Normal   Pulling                 2m               kubelet, dev-k8s-2       pulling image "10.1.23.5:5000/redis"
  Normal   Pulled                  2m               kubelet, dev-k8s-2       Successfully pulled image "10.1.23.5:5000/redis"
  Normal   Created                 2m               kubelet, dev-k8s-2       Created container
  Normal   Started                 2m               kubelet, dev-k8s-2       Started container
  Normal   Pulled                  1m (x4 over 2m)  kubelet, dev-k8s-2       Container image "10.1.23.5:5000/myapp:18.1" already present on machine
  Normal   Created                 1m (x4 over 2m)  kubelet, dev-k8s-2       Created container
  Normal   Started                 1m (x4 over 2m)  kubelet, dev-k8s-2       Started container
  Warning  BackOff                 1m (x7 over 2m)  kubelet, dev-k8s-2       Back-off restarting failed container

Storage class definition:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  namespace: controller
provisioner: kubernetes.io/cinder
reclaimPolicy: Retain
parameters:
  availability: nova
Karthik
  • Are you getting any errors or is it not getting attached? – not 0x12 Aug 23 '18 at 04:08
  • I am not getting any errors. `kubectl describe pod` says Normal SuccessfulAttachVolume 3m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-77d2cef8-a674-11e8-9358-fa163e3294c1". Starting the container is failing. I attached the `kubectl describe pod` results in the question. – Karthik Aug 23 '18 at 05:13

2 Answers


Check whether you have a storage class defined in your cluster: `kubectl get storageclass`. If you are using the default storage class host-path (as in minikube), then you do not need to include a storage class in your template:

volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 5Gi

By specifying no storage class, k8s will go ahead and schedule the persistent volume with the default storage class, which would be host-path in the case of minikube. Also make sure /opt/myapp/etc exists on the node where the pod is going to be scheduled.

captainchhala
  • I have the storage class `standard`, which is OpenStack. The `kubectl describe pod` results show it has created the PV successfully. The problem is with attaching it to the container: after attaching, the container crashes. You can see the `kubectl describe pod` result in the question: Warning BackOff 1m (x7 over 2m) kubelet, dev-k8s-2 Back-off restarting failed container. – Karthik Aug 23 '18 at 05:08
  • I don't think it has anything to do with the PV. Can you check the logs of the container with `kubectl logs your-pod-name` to get more insight? – captainchhala Aug 23 '18 at 05:15
  • I have a doubt: say /opt/etc is a directory in the container, and it has some files and subdirectories. Is it possible to attach a PV to the container at the /opt/etc directory during startup? My intention is that any changes done to /opt/etc should always be available in the volume. – Karthik Aug 23 '18 at 05:23
  • Apparently they should be, as long as container has read-write access to that volume. – captainchhala Aug 23 '18 at 05:49

Kubernetes will not allow mounting two volumes to the same directory; the second mount overwrites the files created by the first. In my case the docker image had some files in the etc directory, and they were hidden after mounting the volume over it. I solved the problem using subPath.
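As a sketch of what that can look like (the `data` volume and mount path follow the question's statefulset, but the `etc-data` subdirectory and the choice to mount inside /opt/myapp/etc are illustrative assumptions, not the exact layout used here), `subPath` mounts only a subdirectory of the volume, so the rest of the image's /opt/myapp/etc contents stay visible:

```yaml
# Hedged sketch of the subPath approach, not the answer's exact manifest.
containers:
- image: 10.1.23.5:5000/myapp:18.1
  name: myapp
  volumeMounts:
  - name: data
    # Mount inside the directory instead of over all of /opt/myapp/etc,
    # so the files baked into the image are not shadowed by the volume.
    mountPath: /opt/myapp/etc/data   # assumed target subdirectory
    subPath: etc-data                # assumed directory inside the PV
```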

Karthik