In my k8s pod, I want to give a container access to an S3 bucket, mounted with rclone. The container running rclone needs to run with --privileged, which is a problem for me, since my main-container will run user code over which I have no control and which could potentially be harmful to my Pod.
The solution I'm trying now is to have a sidecar-container just for the task of running rclone, mounting S3 in a /shared_storage folder, and sharing this folder with the main-container through a Volume shared-storage. This is a simplified pod.yml file:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
    - name: shared-storage
      emptyDir: {}
  containers:
    - name: main-container
      image: busybox
      command: ["sh", "-c", "sleep 1h"]
      volumeMounts:
        - name: shared-storage
          mountPath: /shared_storage
          # mountPropagation: HostToContainer
    - name: sidecar-container
      image: mycustomsidecarimage
      securityContext:
        privileged: true
      command: ["/bin/bash"]
      args: ["-c", "python mount_source.py"]
      env:
        - name: credentials
          value: XXXXXXXXXXX
      volumeMounts:
        - name: shared-storage
          mountPath: /shared_storage
          mountPropagation: Bidirectional
The pod runs fine, and from sidecar-container I can read, create, and delete files in my S3 bucket. But from main-container no files are listed inside /shared_storage. I can create files there (if I set readOnly: false), but those do not appear in sidecar-container.
If I don't run the rclone mount into that folder, the containers are able to share files again. So that tells me it's something about the rclone mount that prevents main-container from reading it.
In mount_source.py I am running rclone with --allow-other, and I have edited /etc/fuse.conf as suggested here.
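For context, this is roughly what mount_source.py does (a minimal sketch; the remote name s3remote:my-bucket and the exact flags are placeholders, not my real config):

import subprocess

def mount_bucket():
    # --allow-other lets processes other than the one that created the FUSE
    # mount (e.g. main-container's user) access it; this is why /etc/fuse.conf
    # is edited to enable user_allow_other.
    subprocess.run(
        [
            "rclone", "mount",
            "s3remote:my-bucket",   # placeholder remote and bucket
            "/shared_storage",
            "--allow-other",
        ],
        check=True,
    )

if __name__ == "__main__":
    mount_bucket()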
Does anyone have an idea on how to solve this problem?