
Please assist.

I found a blog post (https://blog.ssdnodes.com/blog/host-multiple-ssl-websites-docker-nginx/) about deploying multiple docker-compose applications behind the same nginx-proxy, each with a different VIRTUAL_HOST name.

For some reason, though, both applications return a 502 Bad Gateway error.

The following is the error I see when I run `docker-compose logs nginx`:

2019/05/29 20:52:26 [error] 8#8: *15 connect() failed (111: Connection refused) while connecting to upstream, client: 52.209.30.187, server: gregsithole.com, request: "GET / HTTP/1.1", upstream: "http://172.20.0.5:80/", host: "gregsithole.com"

I believe the upstream is an internal Docker network IP, because that's not the IP of my server. The upstream is generated from the following template: https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl

But I'm not too familiar with how it works.
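One way to see what docker-gen actually rendered from that template is to dump the generated config inside the proxy container and list the containers attached to the shared network (the container and network names here are the ones from my compose files below):

```shell
# Dump the nginx config that docker-gen rendered from nginx.tmpl;
# the upstream blocks in here show which IP each VIRTUAL_HOST maps to
docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf

# List the containers attached to the shared proxy network;
# the upstream IP from the 502 error should appear among them
docker network inspect nginx-proxy
```

If the IP from the error log doesn't appear in the network inspect output, the target container isn't on the proxy's network.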

The following is an example of my docker-compose files:

nginx-proxy/docker-compose.yaml

version: "3.6"
services:
  nginx:
    image: nginx
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"

  dockergen:
    image: jwilder/docker-gen
    container_name: nginx-proxy-gen
    restart: always
    depends_on:
      - nginx
    command: -notify-sighup nginx-proxy -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-proxy-le
    restart: always
    depends_on:
      - nginx
      - dockergen
    environment:
      NGINX_PROXY_CONTAINER: nginx-proxy
      NGINX_DOCKER_GEN_CONTAINER: nginx-proxy-gen
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  conf:
  vhost:
  html:
  certs:

networks:
  default:
    external:
      name: nginx-proxy
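Since the `nginx-proxy` network is declared as external, docker-compose will not create it; it has to exist before either stack is brought up:

```shell
# Create the shared proxy network once, before starting either stack
docker network create nginx-proxy
```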

dockerized-ghost/docker-compose.yaml

version: "3.6"
services:

  ghost:
    image: ghost
    restart: always
    expose:
      - 80
    volumes:
      - ../../ghost:/var/lib/ghost/content
    environment:
      NODE_ENV: production
      url: https://blog.gregsithole.com
      VIRTUAL_HOST: blog.gregsithole.com
      LETSENCRYPT_HOST: blog.gregsithole.com
      LETSENCRYPT_EMAIL: hidden-email

networks:
  default:
    external:
      name: nginx-proxy


Greg Sithole
  • Could you post your solution? I need to know why my answer did not apply to this question. – filipe Jun 11 '19 at 10:13
  • @filipe, your solution was to assign the network (which I created) to each of the services, but as we discussed, that didn't fix it. I initially began with `nginx`, `docker-gen` & `letsencrypt-nginx-proxy-companion`. My solution was based on an updated article from the website I first listed: instead of using all three of those services, it just uses `jwilder/nginx-proxy` for the network, to which I then added `letsencrypt-nginx-proxy-companion` after verifying that it works. Please see my answer below, as I've added the solutions. – Greg Sithole Jun 11 '19 at 13:15

3 Answers


You should assign the network nginx-proxy to the service ghost:

  ghost:
    networks:
      - nginx-proxy
    ...

Also assign the network to nginx:

  nginx:
    networks:
      - nginx-proxy
    ...

And declare the networks section like this:

networks:
  nginx-proxy:
    external: true
  default:

And that's all you need. Remember that in the docker-compose file you have to declare the network as external, but that's not enough: you also have to assign it individually to each service that should be part of the network.
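Put together, the ghost side would look roughly like this (a sketch assembled from the fragments above, not a complete file):

```yaml
services:
  ghost:
    image: ghost
    networks:
      - nginx-proxy    # join the pre-existing proxy network explicitly

networks:
  nginx-proxy:
    external: true     # created outside this compose file
  default:
```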

I also recommend looking at Træfik or Envoy; Nginx is limited in terms of scalability unless you pay for it.

filipe
  • Thanks for responding. I tried this out and changed both nginx and ghost, which resulted in `Service nginx uses an undefined network "nginx-proxy"`. I then changed it to `networks: nginx-proxy: external: true`, and the upstream now points to the virtual host, but why would I still be getting a 502 Bad Gateway error? – Greg Sithole May 29 '19 at 23:08
  • It means nginx can't connect to the upstream. Check the ghost container's access logs with `docker ps` and `docker logs -f container_name_or_id`. Is the container running? – filipe May 29 '19 at 23:14
  • Also check the letsencrypt and dockergen logs. Is the certificate being generated successfully? – filipe May 29 '19 at 23:18
  • I also suggest you take a look here: https://metamost.com/ghost-docker-setup/ or here: https://github.com/LuisArteaga/docker-ghost-mariadb-letsencrypt – filipe May 29 '19 at 23:21
  • So I checked the containers; they all seem to be working, with no issues that I can see. I also checked Let's Encrypt, which is working fine, because both my applications are secure even though they return 502 Bad Gateway. Thanks, I'll have a look at the links and get back to you. – Greg Sithole May 29 '19 at 23:36
  • I had a look at those pages, but none of them bind the network, and it seems they all have letsencrypt, nginx-proxy etc. built into a single docker-compose file, whereas I have split it up. The error I still get is `[error] 20#20: *152 no live upstreams while connecting to upstream, client: 169.0.129.120, server: gregsithole.com, request: "GET /favicon.ico HTTP/2.0", upstream: "http://gregsithole.com/favicon.ico", host: "gregsithole.com", referrer: "https://gregsithole.com/"` – Greg Sithole May 30 '19 at 07:33

After spending days on this issue, trying various solutions and updating my repo several times, I managed to fix it.

The blog article I used was out of date, as it was written back in 2017. On the same blog I found a newer article (https://blog.ssdnodes.com/blog/host-multiple-websites-docker-nginx/). Comparing the two, my nginx-proxy setup used separate `nginx`, `jwilder/docker-gen`, and `jrcs/letsencrypt-nginx-proxy-companion` containers.

The latest article only uses `jwilder/nginx-proxy`, but I modified it to also include `jrcs/letsencrypt-nginx-proxy-companion`. See my solution below:

version: "3.6"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam
      - certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro

  letsencrypt-nginx-proxy-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    environment:
      NGINX_PROXY_CONTAINER: nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs

volumes:
  conf:
  vhost:
  html:
  dhparam:
  certs:

networks:
  default:
    external:
      name: nginx-proxy

Another one of my issues was that Ghost listens on port 2368 by default, so I had to bind it to port 80. Instead of exposing port 80 on the ghost service, I created an nginx service in front of it which exposes port 80.

The following is my ghost setup:

version: "3.6"
services:

  ghost:
    image: ghost
    restart: always
    volumes:
      - ../../ghost:/var/lib/ghost/content
    environment:
      - VIRTUAL_HOST=blog.domain.com
      - LETSENCRYPT_HOST=blog.domain.com
      - LETSENCRYPT_EMAIL=name@domain.com
      - NODE_ENV=production
      - url=https://blog.domain.com

  nginx:
    image: nginx
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    expose:
      - 80
    depends_on:
      - ghost
    links:
      - ghost

networks:
  default:
    external:
      name: nginx-proxy
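The compose file above mounts `./nginx/nginx.conf`, which isn't shown. A minimal sketch of what that file would need to do (this is an assumption, not my exact file) is listen on port 80 and forward everything to the ghost service on its default port 2368:

```nginx
# Sketch of ./nginx/nginx.conf (assumed, not the exact file):
# this replaces the container's main nginx.conf, so it needs
# top-level events and http blocks
events {}

http {
  server {
    listen 80;

    location / {
      # preserve the original host and scheme for Ghost
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-Proto $scheme;
      # "ghost" resolves via Docker's DNS on the shared network
      proxy_pass http://ghost:2368;
    }
  }
}
```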

With that, both my website (https://gregsithole.com) and my blog (https://blog.gregsithole.com) work under the same proxy.

Greg Sithole

I was able to use Greg's answer to resolve this problem, with one tweak: rather than add an nginx service to the ghost docker-compose.yml file, I added `VIRTUAL_PORT=2368`, and at long last things started right up for me.
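In other words, the ghost service's environment becomes something like this (a sketch using the placeholder domains from Greg's answer), with no extra nginx service needed:

```yaml
  ghost:
    image: ghost
    restart: always
    environment:
      - VIRTUAL_HOST=blog.domain.com
      - VIRTUAL_PORT=2368   # tell nginx-proxy to target Ghost's default port
      - LETSENCRYPT_HOST=blog.domain.com
      - LETSENCRYPT_EMAIL=name@domain.com
      - NODE_ENV=production
      - url=https://blog.domain.com
```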