I am trying to start a Postgres pod on a microk8s Kubernetes cluster. At the moment the Postgres container, with all its data, runs locally on the host machine.
The question is: is it possible to map the current volume (a local Docker volume) into the Kubernetes pod deployment?
I have used kompose to convert the docker-compose.yml to the appropriate .yaml files for Kubernetes deployment.
The above-mentioned kompose command creates postgres-deployment.yaml, postgres-service.yaml, and 2 PersistentVolumeClaims (from the volumes mapped in the docker-compose file: one for the pg_data and the other for the init_db script).
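For reference, a kompose-generated claim typically looks like the sketch below (the name and size here are illustrative assumptions based on the pg_data volume mentioned above; kompose derives the actual names from the docker-compose volume names):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pg-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi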
Do I need to generate PersistentVolume mappings alongside the PersistentVolumeClaims that were automatically generated by kompose, and how would they look?
EDIT: Using the yaml below I made 2 PersistentVolumes and 2 PersistentVolumeClaims for the postgres container: one for the data and one for the init_db script. Running that and then exposing the service endpoints worked.
WARNING: Because the database was running in the Docker container on the host machine and in the Kubernetes pod at the same time, data corruption occurred.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/docker/volumes/dummy_pgdata/_data"
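A claim that binds to this volume needs a matching storageClassName and a storage request no larger than the volume's capacity. A minimal sketch (the claim name is an assumption; adjust it to whatever name your deployment's volume section references):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi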