I have a minimal cookiecutter-django project inside a VM instance; let's call it VM_D. Here is the production.yml I am trying to run:
```yaml
version: '2'

volumes:
  caddy: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    depends_on:
      - redis
    env_file: .env
    command: /gunicorn.sh

  caddy:
    build:
      context: .
      dockerfile: ./compose/production/caddy/Dockerfile
    depends_on:
      - django
    volumes:
      - caddy:/root/.caddy
    env_file: .env
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"

  redis:
    image: redis:3.0

  celeryworker:
    <<: *django
    depends_on:
      - redis
    command: /start-celeryworker.sh

  celerybeat:
    <<: *django
    depends_on:
      - redis
    command: /start-celerybeat.sh
```
Besides, I have another VM instance running PostgreSQL, but not inside Docker. Let's call it VM_P.
These two VMs are connected to each other (I can ping each from the other and even use psql against VM_P from VM_D), and VM_P has no Internet access (for security reasons). So VM_D has two network interfaces (en0 for external Internet access, en1 for the connection to VM_P), and VM_P has just one interface (for the connection to VM_D).
I want to be able to reach VM_P from the django container. I understand I have to connect the default "bridge" network to the en1 interface, but I haven't yet managed to do that without breaking the connectivity through en0.
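To make the goal concrete, here is roughly the kind of thing I mean by attaching the containers to en1 (a sketch only — the network name, subnet, and gateway are made-up placeholders for en1's actual addressing, and I haven't gotten a variant of this working):

```yaml
# Hypothetical sketch: a macvlan network bound to en1, plus the
# default bridge so django keeps talking to redis/caddy/celery.
networks:
  vmp_net:
    driver: macvlan
    driver_opts:
      parent: en1          # host interface facing VM_P
    ipam:
      config:
        - subnet: 10.0.1.0/24   # placeholder for en1's subnet
          gateway: 10.0.1.1     # placeholder

services:
  django:
    networks:
      - default
      - vmp_net
```
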
What I've tried
- I tried assigning both the "bridge" and "host" networks to django in production.yml (and just "bridge" to the others), but it failed, saying that only one "host" network is allowed.
- I tried assigning just "host" to the django container; the connection to VM_P works, but the other containers can no longer reach django.
- I looked at the Pipework solution, but the recommendation is to exhaust the "native" options before resorting to it.
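For reference, the second attempt was roughly this (a reconstructed sketch, not my exact file):

```yaml
# Attempt 2 (sketch): run django on the host's network stack.
# django can then reach VM_P over en1, but it leaves the compose
# bridge network, so caddy and the celery containers can no longer
# resolve the "django" hostname.
services:
  django:
    network_mode: host
```
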
Thanks for your help!