When I force my Pod to run on a new Node, the persistent volume data (FileSystem) is left behind. How can I move it along with my Pod?
I am deploying Portainer with the following YAML manifests:
---
# Source: portainer/templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
# Source: portainer/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
---
# Source: portainer/templates/pvc.yaml
kind: "PersistentVolume"
apiVersion: "v1"
metadata:
  name: "portainer-pv"
  namespace: "portainer"
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
spec:
  capacity:
    storage: "10Gi"
  volumeMode: Filesystem
  accessModes:
    - 'ReadWriteOnce' # Only 1 pod can access at the same time
  persistentVolumeReclaimPolicy: "Retain"
  hostPath:
    path: "/opt/kubernetes/volumes/portainer"
---
# Source: portainer/templates/pvc.yaml
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
  name: portainer-pv-claim
  namespace: portainer
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "10Gi"
---
# Source: portainer/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    namespace: portainer
    name: portainer-sa-clusteradmin
---
# Source: portainer/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: portainer
  namespace: portainer
  labels:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
spec:
  type: NodePort
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
      name: http
      nodePort: 30777
  selector:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
---
# Source: portainer/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer
  namespace: portainer
  labels:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
spec:
  replicas: 1
  strategy:
    type: "Recreate"
  selector:
    matchLabels:
      app.kubernetes.io/name: portainer
      app.kubernetes.io/instance: portainer
  template:
    metadata:
      labels:
        app.kubernetes.io/name: portainer
        app.kubernetes.io/instance: portainer
    spec:
      nodeSelector: {}
      serviceAccountName: portainer-sa-clusteradmin
      volumes:
        - name: "data"
          persistentVolumeClaim:
            claimName: portainer-pv-claim
      containers:
        - name: portainer
          image: "portainer/portainer:2.13.1"
          imagePullPolicy: Always
          volumeMounts:
            - name: data
              mountPath: /data # Mount inside the container
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
          resources: {}
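For context, whether portainer-pv-claim actually binds to the hostPath PV defined above can be checked with standard kubectl commands (the names are taken from the manifests above):
# Check the PV/PVC binding (both should show STATUS Bound)
kubectl get pv portainer-pv
kubectl get pvc portainer-pv-claim -n portainer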
On first deployment everything works, but when I tested migrating my Pod to another Node, it just started a fresh Portainer Pod without the retained persistent volume data.
I was expecting the persistent volume data to move with it to the new Node, but it didn't.
To migrate my Pod, I did the following:
kubectl cordon {nodeName}
kubectl delete pod {podName} -n portainer
The Pod was then rescheduled onto a new Node, but the persistent volume data was left behind.
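For reference, where the Pod ended up and what the PV refers to can be inspected with something like this ({nodeName} is the same placeholder as above):
# See which Node the replacement Pod was scheduled onto
kubectl get pods -n portainer -o wide
# Inspect the PV definition; hostPath refers to a path on whichever Node the Pod runs on
kubectl describe pv portainer-pv
# Re-enable scheduling on the old Node afterwards
kubectl uncordon {nodeName}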
How can I make the (FileSystem) persistent volume data migrate along with my Pods in case such a migration to a new Node happens?
Edit:
As suggested, I also tried the 'local' type of PersistentVolume:
kind: "PersistentVolume"
apiVersion: "v1"
metadata:
name: portainer
namespace: portainer
labels:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
spec:
capacity:
storage: "10Gi"
volumeMode: Filesystem
accessModes:
- 'ReadWriteOnce' # Only 1 pod can access at the same time
persistentVolumeReclaimPolicy: "Retain"
local:
path: "/opt/kubernetes/volumes/portainer"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: node-role.kubernetes.io/worker
operator: In
values:
- "true"
But the results were the same.
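For context, the nodeAffinity above only selects Nodes that actually carry the node-role.kubernetes.io/worker=true label, which can be verified with something like:
# List Nodes matching the label used in the nodeAffinity
kubectl get nodes -l node-role.kubernetes.io/worker=true
# Or compare against all Node labels
kubectl get nodes --show-labels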