
I'm trying to attach the dummy-attachable FlexVolume sample for Kubernetes, which, according to my logs, seems to initialize normally on both the nodes and the master:

Loaded volume plugin "flexvolume-k8s/dummy-attachable"

But when I try to attach the volume to a pod, the attach method never gets called on the master. The logs from the node read:

flexVolume driver k8s/dummy-attachable: using default GetVolumeName for volume dummy-attachable
operationExecutor.VerifyControllerAttachedVolume started for volume "dummy-attachable"
Operation for "\"flexvolume-k8s/dummy-attachable/dummy-attachable\"" failed. No retries permitted until 2019-04-22 13:42:51.21390334 +0000 UTC m=+4814.674525788 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"dummy-attachable\" (UniqueName: \"flexvolume-k8s/dummy-attachable/dummy-attachable\") pod \"nginx-dummy-attachable\"
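
(For reference, per the FlexVolume spec the controller-side attach is invoked as `<driver executable> attach <json options> <node name>`; the exact arguments vary across Kubernetes versions, so treat that call signature as an approximation.)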

Here's how I'm attempting to mount the volume:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-dummy-attachable
  namespace: default
spec:
  containers:
    - name: nginx-dummy-attachable
      image: nginx
      volumeMounts:
        - name: dummy-attachable
          mountPath: /data
      ports:
        - containerPort: 80
  volumes:
    - name: dummy-attachable
      flexVolume:
        driver: "k8s/dummy-attachable"

Here is the output of `kubectl describe pod nginx-dummy-attachable`:

Name:               nginx-dummy-attachable
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               [node id]
Start Time:         Wed, 24 Apr 2019 08:03:21 -0400
Labels:             <none>
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container nginx-dummy-attachable
Status:             Pending
IP:                 
Containers:
  nginx-dummy-attachable:
    Container ID:   
    Image:          nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /data from dummy-attachable (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hcnhj (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  dummy-attachable:
    Type:       FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
    Driver:     k8s/dummy-attachable
    FSType:     
    SecretRef:  nil
    ReadOnly:   false
    Options:    map[]
  default-token-hcnhj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hcnhj
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                From                                    Message
  ----     ------       ----               ----                                    -------
  Warning  FailedMount  41s (x6 over 11m)  kubelet, [node id]  Unable to mount volumes for pod "nginx-dummy-attachable_default([id])": timeout expired waiting for volumes to attach or mount for pod "default"/"nginx-dummy-attachable". list of unmounted volumes=[dummy-attachable]. list of unattached volumes=[dummy-attachable default-token-hcnhj]

I added debug logging to the FlexVolume, so I was able to verify that the attach method was never called on the master node. I'm not sure what I'm missing here.
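
The driver itself is installed in the default FlexVolume plugin directory on the master and on every node. Assuming the standard `<vendor>~<driver>` layout for a driver named `k8s/dummy-attachable`, that path is:

/usr/libexec/kubernetes/kubelet-plugins/volume/exec/k8s~dummy-attachable/dummy-attachable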

I don't know if this matters, but the cluster is launched with kOps. I've tried both k8s 1.11 and 1.14 with no success.

  • Is the pod scheduled on the master and is it available? – krjw Apr 24 '19 at 08:15
  • no, the pod is scheduled on one of the nodes. Isn't the idea behind flexVolumes that the master is supposed to attach the volumes? I've added the output from `describe pod` – kellanburket Apr 24 '19 at 12:21
  • I think that is not necessarily correct. From the documentation: `Flexvolume enables users to write their own drivers and add support for their volumes in Kubernetes. Vendor drivers should be installed in the volume plugin path on every node, and on master if the driver requires attach capability (unless --enable-controller-attach-detach Kubelet option is set to false, but this is highly discouraged because it is a legacy mode of operation).` https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md – krjw Apr 24 '19 at 12:45
  • Also in the Design Spec: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md - it says that the drivers need to be present on all nodes. VolumeManager is part of kubelet and watches the directory and then calls the installed driver (the example in your case). That's how I understood it at least. The question is, have you installed the driver on both your nodes? volume manager: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/volumemanager/volume_manager.go – krjw Apr 24 '19 at 12:53
  • I'm sorry, I misunderstood your question. The FlexVolume plugin IS installed on the master and nodes--I thought you meant the pod trying to mount the FlexVolume – kellanburket Apr 24 '19 at 13:02
  • No, no I did ask that and thanks for the `describe pod`! :) I just didn't know how you expected FlexVolume to work. I guess you did everything correct then. – krjw Apr 24 '19 at 13:06
  • There must be some reason why the master doesn't see it, even though the master has initialized the plugin. Maybe it's a configuration parameter? Is there any way to verify, other than the logs, that the master has actually installed the plugin? – kellanburket Apr 24 '19 at 13:12
  • In my understanding there isn't really an 'installation process', because kubelet (via its VolumeManager) just watches the directory where the plugins reside and then calls the driver's functions. I think pods scheduled on a node call the plugins installed on that particular node; that's why you need to 'install' the plugin on every node, otherwise kubelet won't find it. Maybe you could check the kubelet logs on the node, or are these the logs from the question? – krjw Apr 24 '19 at 13:24
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/192321/discussion-between-kellanburket-and-krjw). – kellanburket Apr 24 '19 at 13:26

1 Answer


So this is a fun one.

Even though kubelet initializes the FlexVolume plugin on the master, it's kube-controller-manager, which kOps runs as a container, that actually performs the attach operation. kOps doesn't mount the default plugin directory /usr/libexec/kubernetes/kubelet-plugins/volume/exec into the kube-controller-manager pod, so it knows nothing about the FlexVolume plugins installed on the master.
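
For illustration only, the missing piece would be roughly a hostPath mount like the one below in the kube-controller-manager static pod manifest. This is a hypothetical sketch, not something kOps generates or supports out of the box; the volume name and paths are assumptions.

# Hypothetical additions to the kube-controller-manager static pod spec
# (not generated by kOps; shown only to illustrate the missing mount)
spec:
  containers:
    - name: kube-controller-manager
      # ...existing container fields...
      volumeMounts:
        - name: flexvolume-plugin-dir
          mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
          readOnly: true
  volumes:
    - name: flexvolume-plugin-dir
      hostPath:
        path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec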

There doesn't appear to be a non-hacky way around this other than using a different Kubernetes deployment tool until kOps addresses the problem.
