The issue in your YAML is the access mode used; you should change the access mode to ReadWriteMany.
The allowed access modes are as below (refer to the Kubernetes documentation on access modes):
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
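For example, the accessModes field in your PersistentVolumeClaim (and the matching PersistentVolume) would be changed as in this minimal sketch; the claim name and storage size are placeholders, keep your own values:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: your-pvc            # placeholder; keep your existing claim name
spec:
  accessModes:
    - ReadWriteMany         # allows read-write mounts from many nodes/pods
  resources:
    requests:
      storage: 1Gi          # placeholder; keep your existing size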
Check this very basic example of how to share file content between containers in a pod created via a deployment using a PV/PVC, and how the content is shared between replicas when scaling the deployment.
First, create a persistent volume. Refer to the below YAML example with a hostPath configuration:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv-1
  labels:
    pv: my-pv-1
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/log/mypath
$ kubectl create -f pv.yaml
persistentvolume/my-pv-1 created
Second, create a persistent volume claim using the below YAML example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-claim-1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv: my-pv-1
$ kubectl create -f pvc.yaml
persistentvolumeclaim/my-pvc-claim-1 created
Verify that the PV and PVC STATUS is set to Bound:
$ kubectl get persistentvolume
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv-1 1Gi RWX Retain Bound default/my-pvc-claim-1 62s
$ kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc-claim-1 Bound my-pv-1 1Gi RWX 58
Third, consume the PVC in the pods of the deployment. Refer to the below example YAML, where the volume is mounted in two containers, busy1 and busy2, of a multi-container pod; a file written in the first container is readable in the second.
multi-pod-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: multipod
  name: multipod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: multipod
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: multipod
    spec:
      containers:
      - command:
        - sleep
        - "3600"
        image: busybox
        name: busy1
        volumeMounts:
        - name: vol
          mountPath: /var/log/mypath
      - command:
        - sleep
        - "3600"
        image: busybox
        name: busy2
        volumeMounts:
        - name: vol
          mountPath: /var/log/mypath
      volumes:
      - name: vol
        persistentVolumeClaim:
          claimName: my-pvc-claim-1
$ kubectl create -f multi-pod-deploy.yaml
deployment.apps/multipod created
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/multipod-5758475c69-fkl57 2/2 Running 0 36s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 37h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/multipod 1/1 1 1 36s
NAME DESIRED CURRENT READY AGE
replicaset.apps/multipod-5758475c69 1 1 1 36s
Test by connecting to the first container and writing to a file on the mount path:
$ kubectl exec -it multipod-5758475c69-fkl57 -c busy1 /bin/sh
/ # df -kh
Filesystem Size Used Available Use% Mounted on
overlay 38.7G 4.1G 34.6G 11% /
tmpfs 64.0M 0 64.0M 0% /dev
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/vda1 38.7G 4.1G 34.6G 11% /dev/termination-log
/dev/vda1 38.7G 4.1G 34.6G 11% /etc/resolv.conf
/dev/vda1 38.7G 4.1G 34.6G 11% /etc/hostname
/dev/vda1 38.7G 4.1G 34.6G 11% /etc/hosts
shm 64.0M 0 64.0M 0% /dev/shm
/dev/vda1 38.7G 4.1G 34.6G 11% /var/log/mypath
tmpfs 7.8G 12.0K 7.8G 0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs 7.8G 0 7.8G 0% /proc/acpi
tmpfs 64.0M 0 64.0M 0% /proc/kcore
tmpfs 64.0M 0 64.0M 0% /proc/keys
tmpfs 64.0M 0 64.0M 0% /proc/timer_list
tmpfs 64.0M 0 64.0M 0% /proc/timer_stats
tmpfs 64.0M 0 64.0M 0% /proc/sched_debug
tmpfs 7.8G 0 7.8G 0% /proc/scsi
tmpfs 7.8G 0 7.8G 0% /sys/firmware
# cd /var/log/mypath/
/var/log/mypath # date >> file_in_container1.txt
/var/log/mypath # date >> file_in_container1.txt
/var/log/mypath # cat file_in_container1.txt
Tue Feb 4 10:25:32 UTC 2020
Tue Feb 4 10:25:34 UTC 2020
Now connect to the second container in the deployment; it should see the file written from the first, as below:
$ kubectl exec -it multipod-5758475c69-fkl57 -c busy2 /bin/sh
/ # cd /var/log/mypath/
/var/log/mypath # ls
date file_in_container1.txt
/var/log/mypath # cat file_in_container1.txt
Tue Feb 4 10:25:32 UTC 2020
Tue Feb 4 10:25:34 UTC 2020
Now scale this deployment to create more replicas (since I have used hostPath and not NFS, I have to make sure all replica pods run on the same node; one way to do that is sketched below).
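A minimal sketch of pinning all replicas to one node is to add a nodeSelector under the pod template of the deployment above; the node name k8s-node02-calico is taken from the kubectl output below, adjust it to your own node:

      # added under spec.template.spec of the deployment above
      nodeSelector:
        kubernetes.io/hostname: k8s-node02-calico   # built-in node label; value assumed from the node shown below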
$ kubectl scale deployment --replicas=2 multipod
deployment.apps/multipod scaled
$ kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/multipod-5758475c69-7xl9j 2/2 Running 0 47s 192.168.58.112 k8s-node02-calico <none> <none>
pod/multipod-5758475c69-fkl57 2/2 Running 0 21m 192.168.58.111 k8s-node02-calico <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 38h <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/multipod 2/2 2 2 21m busy1,busy2 busybox,busybox app=multipod
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/multipod-5758475c69 2 2 2 21m busy1,busy2 busybox,busybox app=multipod,pod-template-hash=5758475c69
The new replica is also able to read the file as expected.
$ kubectl exec -it multipod-5758475c69-7xl9j /bin/sh
Defaulting container name to busy1.
Use 'kubectl describe pod/multipod-5758475c69-7xl9j -n default' to see all of the containers in this pod.
/ # cd /var/log/mypath/
/var/log/mypath # ls
file_in_container1.txt
/var/log/mypath # cat file_in_container1.txt
Tue Feb 4 10:25:32 UTC 2020
Tue Feb 4 10:25:34 UTC 2020
/var/log/mypath #
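If the replicas have to run on different nodes, hostPath will not work across nodes; an NFS-backed PV is one common option for a true ReadWriteMany volume. A minimal sketch, assuming an NFS server reachable at 10.0.0.10 exporting /exports/mypath (both are placeholder values):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv-nfs
  labels:
    pv: my-pv-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10        # placeholder NFS server address
    path: /exports/mypath    # placeholder exported path

The PVC and deployment above stay the same; only the PV backing changes (adjust the PVC selector's matchLabels to match this PV's label).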