
I have followed the steps from this guide to deploy the efs-provisioner for Kubernetes and bind an EFS filesystem, but I have not succeeded.

I am running Kubernetes on Amazon EKS with EC2 instances as worker nodes; everything is deployed using eksctl.

After I applied this adjusted manifest file, the result is:

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS
efs-provisioner-#########-#####   1/1     Running   0       

$ kubectl get pvc
NAME       STATUS    VOLUME
test-pvc   Pending   efs-storage

No matter how long I wait, the status of my PVC stays stuck in Pending.

After creating the Kubernetes cluster and worker nodes and configuring the EFS filesystem, I apply the efs-provisioner manifest with all the variables pointing to the EFS filesystem. In the StorageClass configuration file, the spec.accessModes field is specified as ReadWriteMany.
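
For reference, here is a rough sketch of the claim from that adjusted manifest, which is where the accessModes field ends up (the claim name and the requested size are placeholders; the storage-class annotation matches the one shown in the PVC description further below):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc                # placeholder name
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany             # shared read-write access across pods
  resources:
    requests:
      storage: 1Mi              # placeholder size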

At this point the efs-provisioner pod is running without errors, yet the status of the PVC remains Pending. What could be wrong? How should I configure the efs-provisioner to use the EFS filesystem? How long should I have to wait for the PVC status to become Bound?


Update

Regarding the AWS configuration, this is what I have done:

  • After creating the EFS filesystem, I created a mount target in each subnet where my nodes run.
  • Each mount target has a security group attached with an inbound rule that grants access to the NFS port (2049) from the security group of each nodegroup (a rough CLI sketch follows right after this list).
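
Expressed as AWS CLI calls, that is roughly the equivalent of the following (a sketch; all IDs are redacted placeholders):

# One mount target per subnet where the worker nodes run (IDs redacted).
aws efs create-mount-target \
    --file-system-id fs-######### \
    --subnet-id subnet-######### \
    --security-groups sg-##################

# Inbound rule on the mount target security group allowing NFS (TCP 2049)
# from the security group of the nodegroup.
aws ec2 authorize-security-group-ingress \
    --group-id sg-################## \
    --protocol tcp \
    --port 2049 \
    --source-group sg-##################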

The description of my EFS security group is:

{
    "Description": "Communication between the control plane and worker nodes in cluster",
    "GroupName": "##################",
    "IpPermissions": [
        {
        "FromPort": 2049,
        "IpProtocol": "tcp",
        "IpRanges": [],
        "Ipv6Ranges": [],
        "PrefixListIds": [],
        "ToPort": 2049,
        "UserIdGroupPairs": [
            {
            "GroupId": "sg-##################",
            "UserId": "##################"
            }
        ]
        }
    ],
    "OwnerId": "##################",
    "GroupId": "sg-##################",
    "IpPermissionsEgress": [
        {
        "IpProtocol": "-1",
        "IpRanges": [
            {
            "CidrIp": "0.0.0.0/0"
            }
        ],
        "Ipv6Ranges": [],
        "PrefixListIds": [],
        "UserIdGroupPairs": []
        }
    ],
    "VpcId": "vpc-##################"
}

Deployment

The output of the kubectl describe deploy ${DEPLOY_NAME} command is:

$ DEPLOY_NAME=efs-provisioner; \
> kubectl describe deploy ${DEPLOY_NAME}
Name:               efs-provisioner
Namespace:          default
CreationTimestamp:  ####################
Labels:             app=efs-provisioner
Annotations:        deployment.kubernetes.io/revision: 1
                    kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"efs-provisioner","namespace":"default"},"spec"...
Selector:           app=efs-provisioner
Replicas:           1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:           app=efs-provisioner
  Service Account:  efs-provisioner
  Containers:
   efs-provisioner:
    Image:      quay.io/external_storage/efs-provisioner:latest
    Port:       <none>
    Host Port:  <none>
    Environment:
      FILE_SYSTEM_ID:    <set to the key 'file.system.id' of config map 'efs-provisioner'>    Optional: false
      AWS_REGION:        <set to the key 'aws.region' of config map 'efs-provisioner'>        Optional: false
      DNS_NAME:          <set to the key 'dns.name' of config map 'efs-provisioner'>          Optional: true
      PROVISIONER_NAME:  <set to the key 'provisioner.name' of config map 'efs-provisioner'>  Optional: false
    Mounts:
      /persistentvolumes from pv-volume (rw)
  Volumes:
   pv-volume:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    fs-#########.efs.##########.amazonaws.com
    Path:      /
    ReadOnly:  false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   efs-provisioner-576c67cf7b (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  106s  deployment-controller  Scaled up replica set efs-provisioner-576c67cf7b to 1
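
For completeness, the ConfigMap those environment variables are read from looks roughly like this (region and filesystem id redacted; the provisioner.name value shown here is the one reflected in the pod log below):

apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: fs-#########
  aws.region: "##########"
  dns.name: ""                              # optional, left empty
  provisioner.name: kubernetes.io/aws-efs   # value at the time; see the answer below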

Pod Logs

The output of the kubectl logs ${POD_NAME} command is:

$ POD_NAME=efs-provisioner-576c67cf7b-5jm95; \
> kubectl logs ${POD_NAME}
E0708 16:03:46.841229       1 efs-provisioner.go:69] fs-#########.efs.##########.amazonaws.com
I0708 16:03:47.049194       1 leaderelection.go:187] attempting to acquire leader lease  default/kubernetes.io-aws-efs...
I0708 16:03:47.061830       1 leaderelection.go:196] successfully acquired lease default/kubernetes.io-aws-efs
I0708 16:03:47.062791       1 controller.go:571] Starting provisioner controller kubernetes.io/aws-efs_efs-provisioner-576c67cf7b-5jm95_f7c5689f-a199-11e9-a152-def1285e1be5!
I0708 16:03:47.062877       1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"default", Name:"kubernetes.io-aws-efs", UID:"f7c682cd-a199-11e9-80bd-1640944916e4", APIVersion:"v1", ResourceVersion:"3914", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' efs-provisioner-576c67cf7b-5jm95_f7c5689f-a199-11e9-a152-def1285e1be5 became leader
I0708 16:03:47.162998       1 controller.go:620] Started provisioner controller kubernetes.io/aws-efs_efs-provisioner-576c67cf7b-5jm95_f7c5689f-a199-11e9-a152-def1285e1be5!

StorageClass

The output of the kubectl describe sc ${STORAGE_CLASS_NAME} command is:

$ STORAGE_CLASS_NAME=aws-efs; \
> kubectl describe sc ${STORAGE_CLASS_NAME}
Name:            aws-efs
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"aws-efs"},"provisioner":"aws-efs"}
Provisioner:           aws-efs
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

PersistentVolumeClaim

The output of the kubectl describe pvc ${PVC_NAME} command is:

$ PVC_NAME=efs; \
> kubectl describe pvc ${PVC_NAME}
Name:          efs
Namespace:     default
StorageClass:  aws-efs
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"aws-efs"},"name":"...
               volume.beta.kubernetes.io/storage-class: aws-efs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type       Reason              Age                 From                         Message
  ----       ------              ----                ----                         -------
  Warning    ProvisioningFailed  43s (x12 over 11m)  persistentvolume-controller  no volume plugin matched
Mounted By:  <none>

About the questions

  1. Do you have the EFS filesystem id properly configured for your efs-provisioner?

    • Yes, both values (the one from the filesystem and the configured one) match.
  2. Do you have the proper IAM credentials to access this EFS?

    • Yes, my user has them, and the eksctl tool also configures them.
  3. Does that EFS path specified for your provisioner exist?

    • Yes, it is only the root (/) path.
  4. Did you add an EFS endpoint to the subnet that your worker node(s) are running on, or ensure that your EFS subnets have an Internet Gateway attached?

    • Yes, I have added the EFS endpoints to the subnet that my worker node(s) are running on.
  5. Did you set your security group to allow the Inbound for NFS port(s)?

    • Yes (a quick way to double-check this is sketched right after this list).
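
Regarding points 4 and 5, this is roughly how they can be double-checked from the AWS CLI (a sketch; IDs redacted):

# List the mount targets of the filesystem and the subnets they live in.
aws efs describe-mount-targets --file-system-id fs-#########

# Show the security group attached to the mount targets and confirm
# the inbound rule for TCP 2049 from the nodegroup security group.
aws ec2 describe-security-groups --group-ids sg-##################
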
    We need more information. Please update the question by providing the `kubectl describe` of the pvc and deployment. Also, provide the log output from one of the pods. It looks like there are security groups required to allow the nodes to connect to EFS. It would also be good to show the security group output and that any nodes are associated to it. – Andy Shinn Jul 06 '19 at 00:28
  • A few things you may want to check: 1. Do you have the EFS filesystem id properly configured for your efs-provisioner? 2. Do you have the proper IAM credentials to access this EFS? 3. Does that EFS path specified for your provisioner exist? 4. Did you add an EFS endpoint to the subnet that your worker node(s) are running on, or ensure that your EFS subnets have an Internet Gateway attached? 5. Did you set your security group to allow the Inbound for NFS port(s)? – Frank Yucheng Gu Jul 08 '19 at 03:23
  • Ok! I have updated my question with the information you request. – CryogenicNeo Jul 08 '19 at 16:41

1 Answer


I have solved my issue by changing the provisioner name of my StorageClass from kubernetes.io/aws-efs to just aws-efs.

As we can read in this issue comment on GitHub posted by wongma7:

The issue is that provisioner is kubernetes.io/aws-efs. It can't begin with kubernetes.io as that is reserved by kubernetes.

That resolves the ProvisioningFailed events produced on the PersistentVolumeClaim by the persistentvolume-controller.
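
For reference, the corrected StorageClass ends up looking roughly like this (as far as I understand, the same name must also be what the provisioner announces via PROVISIONER_NAME in its ConfigMap):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: aws-efs   # must not begin with kubernetes.io/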
