I am getting the following error when deploying my pod on GKE: Unable to attach or mount volumes: unmounted volumes=[file-store], unattached volumes=[file-store kube-api-access-p7btw]: timed out waiting for the condition.
kubectl describe pod kickstar-backend-5577cf96cf-rlpht
Name:             kickstar-backend-5577cf96cf-rlpht
Namespace:        default
Priority:         0
Service Account:  default
Node:             gke-kickstar-prod-workloads-clus-main-ab940238-1ktc/10.128.0.5
Start Time:       Tue, 08 Aug 2023 03:31:56 +0000
Labels:           app=kickstar-backend
                  pod-template-hash=5577cf96cf
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/kickstar-backend-5577cf96cf
Containers:
  kickstar-backend:
    Container ID:
    Image:          asia-southeast1-docker.pkg.dev/kickstar-prod/kickstar/kickstar.backend:13fb26c
    Image ID:
    Port:           1026/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /app/wwwroot from file-store (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7btw (ro)
Readiness Gates:
  Type                                       Status
  cloud.google.com/load-balancer-neg-ready   True
Conditions:
  Type                                       Status
  cloud.google.com/load-balancer-neg-ready   True
  Initialized                                True
  Ready                                      False
  ContainersReady                            False
  PodScheduled                               True
Volumes:
  file-store:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  file-store-pvc
    ReadOnly:   false
  kube-api-access-p7btw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                   Age                  From                     Message
  ----     ------                   ----                 ----                     -------
  Normal   LoadBalancerNegNotReady  27m (x2 over 27m)    neg-readiness-reflector  Waiting for pod to become healthy in at least one of the NEG(s): [k8s1-0912e870-default-kickstar-backend-80-4d4b640a]
  Normal   NotTriggerScaleUp        27m                  cluster-autoscaler       pod didn't trigger scale-up:
  Warning  FailedScheduling         24m (x2 over 27m)    default-scheduler        0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Normal   Scheduled                24m                  default-scheduler        Successfully assigned default/kickstar-backend-5577cf96cf-rlpht to gke-kickstar-prod-workloads-clus-main-ab940238-1ktc
  Warning  FailedMount              21m (x6 over 22m)    kubelet                  MountVolume.MountDevice failed for volume "pvc-9cd5a129-d237-4074-95ae-de42446fc75c" : rpc error: code = Aborted desc = An operation with the given volume key modeInstance/asia-southeast1-a/pvc-9cd5a129-d237-4074-95ae-de42446fc75c/vol1 already exists. Most likely a long process is still running to completion. Retrying.
  Normal   LoadBalancerNegTimeout   12m                  neg-readiness-reflector  Timeout waiting for pod to become healthy in at least one of the NEG(s): [k8s1-0912e870-default-kickstar-backend-80-4d4b640a]. Marking condition "cloud.google.com/load-balancer-neg-ready" to True.
  Warning  FailedMount              4m12s (x2 over 11m)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[file-store], unattached volumes=[kube-api-access-p7btw file-store]: timed out waiting for the condition
  Warning  FailedMount              118s (x8 over 22m)   kubelet                  Unable to attach or mount volumes: unmounted volumes=[file-store], unattached volumes=[file-store kube-api-access-p7btw]: timed out waiting for the condition
  Warning  FailedMount              5s (x7 over 22m)     kubelet                  MountVolume.MountDevice failed for volume "pvc-9cd5a129-d237-4074-95ae-de42446fc75c" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
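Since the FailedMount events come from the Filestore CSI node driver, I assume its pods in kube-system could give more detail. A sketch of what I would run next (the grep pattern and pod name below are guesses; the managed driver's pod names may differ per cluster):

# Find the Filestore CSI driver pods (naming is an assumption for the managed GKE driver)
kubectl get pods -n kube-system -o wide | grep -i filestore

# Tail the driver pod that runs on the affected node (<filestore-node-pod> is a placeholder)
kubectl logs -n kube-system <filestore-node-pod> --all-containers --tail=100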
The PV and PVC both appear to have been provisioned and bound successfully.
kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
persistentvolume/pvc-9cd5a129-d237-4074-95ae-de42446fc75c   1Ti        RWX            Delete           Bound    default/file-store-pvc   file-store              31m

NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/file-store-pvc   Bound    pvc-9cd5a129-d237-4074-95ae-de42446fc75c   1Ti        RWX            file-store     34m
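For more detail on the claim and the dynamically provisioned volume, standard commands like these can be used as well (the PV name is taken from the output above):

# Events and binding details for the claim and the PV
kubectl describe pvc file-store-pvc
kubectl describe pv pvc-9cd5a129-d237-4074-95ae-de42446fc75c

# Confirm the backing Filestore instance exists and is in READY state
gcloud filestore instances list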
Following are the configs for the StorageClass, PVC, and Deployment (the PV is provisioned dynamically):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: file-store
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  tier: standard
  network: default
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: file-store-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: file-store
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kickstar-backend
spec:
  replicas: 1
  revisionHistoryLimit: 4
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - image: backend:latest
          name: backend
          ports:
            - containerPort: 1026
          volumeMounts:
            - mountPath: /app/wwwroot
              name: file-store
      volumes:
        - name: file-store
          persistentVolumeClaim:
            claimName: file-store-pvc
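Since the MountVolume.MountDevice call eventually fails with DeadlineExceeded, one thing I still want to rule out is that the node cannot reach the Filestore instance over NFS. A sketch of how I would list the firewall rules on the default network that the StorageClass points at (filter syntax from memory, may need adjusting):

# List firewall rules on the "default" VPC network used by the StorageClass
gcloud compute firewall-rules list --filter="network:default"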
Can anyone help me figure out where exactly I'm going wrong? Thanks in advance and best regards.