Consider the PersistentVolumeClaim below, along with the Deployment that uses it.
Being ReadWriteOnce, the PVC can only be mounted on one node at a time. Since there should only ever be one replica of my Deployment, I figured this would be fine. However, during restarts/redeploys, two Pods co-exist during the switchover.
If Kubernetes decides to start the successor Pod on the same node as the original Pod, both can access the volume and the switchover goes fine. But if it decides to start it on a different node, which it seems to prefer, my Deployment ends up deadlocked:
Multi-Attach error for volume "pvc-c474dfa2-9531-4168-8195-6d0a08f5df34" Volume is already used by pod(s) test-cache-5bb9b5d568-d9pmd
The successor Pod can't start because the volume is still attached to another node, while the original Pod/node, of course, won't release the volume until the Pod is taken out of service. Which it won't be until the successor is up.
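The overlap during switchover follows from the Deployment's update strategy: when none is specified, Kubernetes defaults to RollingUpdate, which (with the default maxSurge of 25%, rounded up to 1 for a single replica) starts the successor Pod before terminating the original. Spelled out, the implicit default is equivalent to:

```yaml
# Implicit default for a Deployment with no explicit strategy:
# the new Pod is surged up alongside the old one, so for a
# single-replica Deployment two Pods briefly co-exist.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # rounds up to 1 extra Pod
      maxUnavailable: 25%  # rounds down to 0 for replicas: 1
```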
What am I missing here?
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vol-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: do-block-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cache
spec:
  selector:
    matchLabels:
      app: test-cache-deployment
  replicas: 1
  template:
    metadata:
      labels:
        app: test-cache-deployment
    spec:
      containers:
        - name: test-cache
          image: myrepo/test-cache:1.0-SNAPSHOT
          volumeMounts:
            - mountPath: "/test"
              name: vol-mount
          ports:
            - containerPort: 8080
          imagePullPolicy: Always
      volumes:
        - name: vol-mount
          persistentVolumeClaim:
            claimName: vol-test
      imagePullSecrets:
        - name: regcred
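For contrast, a minimal sketch of an explicit Recreate strategy, assuming brief downtime during rollouts is acceptable; with Recreate the old Pod is terminated (releasing the RWO volume) before the successor is created:

```yaml
# Sketch: Recreate terminates the old Pod and lets the volume
# detach before the new Pod is scheduled, at the cost of
# downtime during each rollout.
spec:
  strategy:
    type: Recreate
```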