I manage a Kubernetes deployment on OpenStack. My user pods mount, as their home folder, a PersistentVolume dynamically provisioned through OpenStack Cinder.
What is strange is that if I create an (empty) file with permissions 600:
```
bash-4.2$ ls -l
total 16
-rw------- 1 jovyan users 0 Jul 16 17:55 id_rsa
```
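For anyone reproducing, the file above can be created with something like this (run inside the mounted home folder; the filename just matches the listing):

```bash
# Create an empty file and restrict it to owner read/write (mode 600)
touch id_rsa
chmod 600 id_rsa
```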
Then, if I kill the container and it is restarted, the volume gets mounted again, but the file permissions now include rw for the group:
```
bash-4.2$ ls -l
total 16
-rw-rw---- 1 jovyan users 0 Jul 16 17:55 id_rsa
```
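The restart step itself is nothing exotic; a sketch of it, with a hypothetical pod name and mount path:

```bash
# Delete the user pod so it is recreated and the Cinder volume is remounted
# (pod name and home path are placeholders for my deployment's values)
kubectl delete pod jupyter-someuser
# Once the pod is back, check the file permissions from inside the container
kubectl exec -it jupyter-someuser -- ls -l /home/jovyan
```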
Any suggestions on how to debug this further?
Details on the Kubernetes configuration
- The volume has `accessModes: ReadWriteOnce` and `volumeMode: Filesystem` (see the check after this list)
- The volume filesystem is `ext4`:

  ```
  /dev/sdf: Linux rev 1.0 ext4 filesystem data, UUID=c627887b-0ff0-4310-b91d-37fe5ca9564d (needs journal recovery) (extents) (64bit) (large files) (huge files)
  ```
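Both items can be re-checked with something like the following (the PVC name is a placeholder; the device is the one shown above):

```bash
# Print access modes and volume mode from the bound claim (PVC name is a placeholder)
kubectl get pvc claim-jovyan -o jsonpath='{.spec.accessModes}{" "}{.spec.volumeMode}{"\n"}'
# On the node where the volume is attached, identify the filesystem on the block device
sudo file -s /dev/sdf
```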
Check on OpenStack
I first thought it was an OpenStack issue, but if I detach the volume from the OpenStack instance, attach it again using OpenStack commands, and mount it from a terminal on a node, the permissions are intact. So I think it is Kubernetes messing with the permissions somehow.
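This is roughly the manual check I did, with placeholder server and volume names:

```bash
# Detach the volume from the instance, then attach it again
openstack server remove volume <server> <volume>
openstack server add volume <server> <volume> --device /dev/sdf
# On the node: mount the device manually and inspect the permissions
sudo mount /dev/sdf /mnt
ls -l /mnt
sudo umount /mnt
```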
YAML resources
I pasted the YAML files for the pod, the PV, and the PVC in a gist, see https://gist.github.com/zonca/21b81f735d0cc9a06cb85ae0fa0285e5. I also added the output of `kubectl describe` for those resources.
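For reference, the describe output was gathered along these lines (resource names are placeholders):

```bash
# Collect the spec and events for the pod, the PV, and the PVC
kubectl describe pod <pod-name>
kubectl describe pv <pv-name>
kubectl describe pvc <pvc-name>
```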
It is a deployment of the JupyterHub 0.9.0 Helm chart.