
When I force my Pod to run on a new Node, the persistent volume data (filesystem) is left behind. How can I move it along with my Pod?

I am deploying Portainer with the following YAML manifests:

---
# Source: portainer/templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
# Source: portainer/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
---
# Source: portainer/templates/pvc.yaml
kind: "PersistentVolume"
apiVersion: "v1"
metadata:
  name: "portainer-pv"
  namespace: "portainer"
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
spec:
  capacity:
    storage: "10Gi"
  volumeMode: Filesystem
  accessModes:
    - 'ReadWriteOnce'  # Only 1 pod can access at the same time
  persistentVolumeReclaimPolicy: "Retain"
  hostPath:
    path: "/opt/kubernetes/volumes/portainer"
---
# Source: portainer/templates/pvc.yaml
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
  name: portainer-pv-claim
  namespace: portainer  
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "10Gi"
---
# Source: portainer/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  namespace: portainer
  name: portainer-sa-clusteradmin
---
# Source: portainer/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: portainer
  namespace: portainer
  labels:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
spec:
  type: NodePort
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
      name: http
      nodePort: 30777  
  selector:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
---
# Source: portainer/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer
  namespace: portainer
  labels:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
spec:
  replicas: 1
  strategy:
    type: "Recreate"
  selector:
    matchLabels:
      app.kubernetes.io/name: portainer
      app.kubernetes.io/instance: portainer
  template:
    metadata:
      labels:
        app.kubernetes.io/name: portainer
        app.kubernetes.io/instance: portainer
    spec:
      nodeSelector:
        {}
      serviceAccountName: portainer-sa-clusteradmin
      volumes:
        - name: "data"
          persistentVolumeClaim:
            claimName: portainer-pv-claim
      containers:
        - name: portainer
          image: "portainer/portainer:2.13.1"
          imagePullPolicy: Always
          volumeMounts:
            - name: data
              mountPath: /data    # Mount inside the container            
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
          resources:
            {}

On first deployment everything works, but when I tested migrating my Pod to another Node, it just started a fresh Portainer Pod without the retained persistent volume data.
I was expecting the persistent volume data to move with it to the new Node, but it didn't.

What I did to migrate my pod was:

  1. kubectl cordon {nodeName}
  2. kubectl delete pod {podName} -n portainer

Then my pod was moved to a new Node, but the persistent volume data got left behind.

How can I make the (filesystem) persistent volumes migrate along with my Pods in case such an event, a Pod migration to a new Node, happens?

Edit:
As suggested, I also tried the 'local' type of PersistentVolume:

kind: "PersistentVolume"
apiVersion: "v1"
metadata:
  name: portainer
  namespace: portainer
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
spec:
  capacity:
    storage: "10Gi"
  volumeMode: Filesystem
  accessModes:
    - 'ReadWriteOnce'  # Only 1 pod can access at the same time
  persistentVolumeReclaimPolicy: "Retain"
  local:
    path: "/opt/kubernetes/volumes/portainer"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/worker
          operator: In
          values:
          - "true"  

But the result was the same.

Davis8988

2 Answers


The PV you created uses the hostPath volume type.

hostPath - HostPath volume (for single node testing only; WILL NOT WORK in a multi-node cluster; consider using local volume instead)

https://kubernetes.io/docs/concepts/storage/persistent-volumes/

You need to create the PV with a different persistent volume type. The link above lists the available PV types; based on your requirement you can choose one.
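
For example, a network-backed volume such as NFS keeps the same data reachable from every node, so the Pod can land anywhere and still see its files. The sketch below is only an illustration; the server address and export path are placeholders, not values from your cluster:

---
# Hypothetical NFS-backed PV; server and path are placeholders
apiVersion: v1
kind: PersistentVolume
metadata:
  name: portainer-nfs-pv
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
spec:
  capacity:
    storage: "10Gi"
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany            # NFS can be mounted by many nodes
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com    # placeholder NFS server
    path: /exports/portainer   # placeholder export path
---
# Claim that binds to the statically provisioned PV above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: portainer-pv-claim
  namespace: portainer
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # empty string disables dynamic provisioning
  resources:
    requests:
      storage: "10Gi"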

Nataraj Medayhal
  • That didn't work for me. I updated the PersistentVolume type to 'local' as suggested: local: path: "/opt/kubernetes/volumes/portainer" with: nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: node-role.kubernetes.io/worker operator: In values: - "true" to match any worker node. After cordoning the Node and killing the Pod, it got re-assigned to a new Node and the volume got mounted, but the files from the previous Node did not come with it; the Pod just got a new empty volume instead. – Davis8988 Jun 16 '22 at 17:00
  • What you are looking for is sharing files over the network. You need to choose a solution such as NFS, EFS, etc. – Nataraj Medayhal Jun 18 '22 at 05:30

The problem is the Access Mode defined for your PersistentVolume and PersistentVolumeClaim objects.

The ReadWriteOnce mode does not restrict access to a single Pod; it actually allows multiple Pods at the same time, but it only allows a single node to mount the volume:

ReadWriteOnce

the volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node.

Hence the loss of the data when the Pod is recreated on another node.

The access mode needed in this situation is ReadWriteMany:

ReadWriteMany

the volume can be mounted as read-write by many nodes.

If your cluster is hosted in Google Kubernetes Engine (GKE), the PersistentVolumeClaim will fail because GKE does not support ReadWriteMany natively. In that case, the option is to use Cloud Filestore as described in this question.
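
For illustration, a PVC requesting ReadWriteMany could look like the sketch below. It assumes the cluster has a StorageClass whose backend actually supports ReadWriteMany (for example an NFS provisioner); the class name here is a placeholder:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: portainer-pv-claim
  namespace: portainer
spec:
  accessModes:
    - ReadWriteMany            # volume can be mounted read-write by many nodes
  storageClassName: nfs-client # placeholder; must be a class that supports ReadWriteMany
  resources:
    requests:
      storage: "10Gi"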

  • Thanks for the reply, but sorry, that didn't work either. I updated the Access Mode like you suggested, but the result is the same: when the Pod moves, it doesn't take the mounted volume with it; it just creates a new empty volume on the new Node and starts writing to it. – Davis8988 Jun 21 '22 at 12:27
  • Take a look at the edited answer. – Gabriel Robledo Ahumada Jun 21 '22 at 15:10