While mounting my EBS volume in the Kubernetes cluster, I was getting this error:
Warning FailedMount 64s kubelet Unable to attach or mount volumes: unmounted volumes=[ebs-volume], unattached volumes=[ebs-volume kube-api-access-rq86p]: timed out waiting for the condition
Below are my SC, PV, PVC, and Deployment files:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: ebs-pv
  labels:
    type: ebs-pv
spec:
  storageClassName: standard
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0221ed06914dbc8fd
    fsType: ext4
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ebs-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: gitea
  name: gitea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      volumes:
        - name: ebs-volume
          persistentVolumeClaim:
            claimName: ebs-pvc
      containers:
        - image: gitea/gitea:latest
          name: gitea
          volumeMounts:
            - mountPath: "/data"
              name: ebs-volume
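I applied all of the above with kubectl apply (the filename here is just what I happen to call the file locally):

kubectl apply -f gitea-storage.yaml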
This is my PV and PVC (output of kubectl get pv,pvc), which I believe are bound correctly:
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
persistentvolume/ebs-pv   1Gi        RWO            Retain           Bound    default/ebs-pvc   standard                18m

NAME                            STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/ebs-pvc   Bound    ebs-pv   1Gi        RWO            standard       18m
This is my storage class (kubectl get storageclass):
NAME       PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard   kubernetes.io/aws-ebs   Retain          Immediate           false                  145m
This is my pod description (kubectl describe pod gitea-bb86dd6b8-6264h):
Name:           gitea-bb86dd6b8-6264h
Namespace:      default
Priority:       0
Node:           worker01/172.31.91.105
Start Time:     Fri, 04 Feb 2022 12:36:15 +0000
Labels:         app=gitea
                pod-template-hash=bb86dd6b8
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/gitea-bb86dd6b8
Containers:
  gitea:
    Container ID:
    Image:          gitea/gitea:latest
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from ebs-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rq86p (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  ebs-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  ebs-pvc
    ReadOnly:   false
  kube-api-access-rq86p:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    20m                  default-scheduler  Successfully assigned default/gitea-bb86dd6b8-6264h to worker01
  Warning  FailedMount  4m47s (x2 over 16m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[ebs-volume], unattached volumes=[kube-api-access-rq86p ebs-volume]: timed out waiting for the condition
  Warning  FailedMount  19s (x7 over 18m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[ebs-volume], unattached volumes=[ebs-volume kube-api-access-rq86p]: timed out waiting for the condition
This is my EBS volume, the last entry (xvdf) in the lsblk -f output below. It is attached to the master node, where I am working right now; note that it is formatted ext4 but has no mountpoint:
NAME      FSTYPE   LABEL            UUID                                  MOUNTPOINT
loop0     squashfs                                                        /snap/core18/2253
loop1     squashfs                                                        /snap/snapd/14066
loop2     squashfs                                                        /snap/amazon-ssm-agent/4046
xvda
└─xvda1   ext4     cloudimg-rootfs  c1ce24a2-4987-4450-ae15-62eb028ff1cd  /
xvdf      ext4                      36609bbf-3248-41f1-84c3-777eb1d6f364
I created the cluster manually on AWS: one master node and two worker nodes, all Ubuntu 18.04 instances.
Below are the commands I used to create, attach, and format the EBS volume:
aws ec2 create-volume --availability-zone=us-east-1c --size=10 --volume-type=gp2
aws ec2 attach-volume --device /dev/xvdf --instance-id <MASTER INSTANCE ID> --volume-id <MY VOLUME ID>
sudo mkfs -t ext4 /dev/xvdf
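In case it is useful, I believe the attachment can also be double-checked from the CLI like this (<MY VOLUME ID> is the same placeholder as above):

aws ec2 describe-volumes --volume-ids <MY VOLUME ID> --query 'Volumes[0].Attachments'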
After this the volume was successfully created, attached, and formatted, so I don't think the problem is in this part.
One thing I have not done, and I don't know whether it is necessary, is the following:
The cluster also needs to have the flag --cloud-provider=aws enabled on the kubelet, api-server, and the controller-manager during the cluster’s creation
I found this on a blog, but my cluster was already set up at that point, so I didn't do it. If this is the problem, please let me know, and please also give some guidance on how to do it.
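From what I could piece together, setting the flag would look roughly like this (just my sketch, assuming a kubeadm-style setup; I have not actually tried it, so please correct me if it is wrong):

# On every node: put the flag in KUBELET_EXTRA_ARGS (on Ubuntu, kubeadm's
# kubelet unit reads this from /etc/default/kubelet), then restart the kubelet:
#   KUBELET_EXTRA_ARGS=--cloud-provider=aws
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# On the master: add "- --cloud-provider=aws" to the command: list of the
# static pod manifests; the kubelet notices the change and recreates the pods:
#   /etc/kubernetes/manifests/kube-apiserver.yaml
#   /etc/kubernetes/manifests/kube-controller-manager.yaml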
I used Flannel as the network plugin when creating the cluster.
I don't think I left out any information, but if there is anything else you need to know, please ask.
Thank you in advance!