
Surfing the net, I've found many tutorials about proxying to different Docker containers running on the same host, using Nginx/confd (or HAProxy, or Vulcand). However, what I need to do is different. Here is an overview of my infrastructure:

  • An online CoreOS cluster with 5 nodes, all running etcd
  • Each node runs several Docker containers (Nginx web servers serving WordPress apps), launched via fleet, without exposing any ports; their IPs (taken from docker inspect) are written to etcd.
  • If a node goes down, my services are automatically moved to another available node

Now, what I need is, let's say, one Nginx proxy that routes my traffic to the various containers depending on the vhost. For example:

Nginx (with a public IP) receives a request for xxx.domain.com --> node-1 --> container with an auto-assigned IP (listening on port 80)

Nginx (with a public IP) receives a request for yyy.domain.com --> node-2 --> container with an auto-assigned IP (listening on port 80)

Here are my questions:

  • Is my scenario correct? Am I getting something wrong?
  • Must my Nginx proxy be outside the CoreOS cluster, or should I run it on every CoreOS node?
  • How can I achieve this configuration? What's the best way?

Thank you in advance!

AcidCrash

3 Answers


You need some type of service discovery so nginx can "find" the containers running on the nodes. You could write a record into etcd when a container starts, remove it on exit, and have nginx check those entries.
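The start/stop registration can be done with a fleet "sidekick" unit that runs alongside each service. A minimal sketch follows, assuming a service named webapp@.service and an etcd key layout of my own choosing; the TTL loop ensures a stale record expires if the node dies before ExecStop can run:

    # webapp-discovery@.service (names and paths are illustrative)
    [Unit]
    Description=Announce webapp@%i in etcd
    BindsTo=webapp@%i.service
    After=webapp@%i.service

    [Service]
    # Re-publish the container's Docker IP every 45s with a 60s TTL.
    ExecStart=/bin/sh -c "while true; do \
      etcdctl set /services/web/webapp/%i \
        \"$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' webapp-%i)\" --ttl 60; \
      sleep 45; done"
    ExecStop=/usr/bin/etcdctl rm /services/web/webapp/%i

    [X-Fleet]
    MachineOf=webapp@%i.service

The MachineOf option pins the sidekick to whichever node fleet schedules the main unit on, so the announced IP is always local to the container.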

For moving services around, you could take a look at fleet for simple scheduling.

bakins

I don't know if I understand you correctly, but here's how I'd do it:

Put a load balancer (HAProxy, Nginx, or Amazon ELB if you're on EC2) outside the cluster, routing all traffic into it.
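If that external balancer is nginx, a minimal sketch could look like this (node IPs are placeholders): it just spreads traffic across the cluster nodes and leaves the per-vhost routing to the proxy running inside the cluster.

    # External load balancer: fan traffic out to the CoreOS nodes.
    upstream coreos_nodes {
        server 10.0.0.11:80;  # node-1
        server 10.0.0.12:80;  # node-2
        server 10.0.0.13:80;  # node-3
    }

    server {
        listen 80;
        server_name *.domain.com;

        location / {
            proxy_pass http://coreos_nodes;
            # Preserve the original Host header so the inner
            # proxy can route by vhost.
            proxy_set_header Host $host;
        }
    }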

Inside it you could give Gogeta a try: https://github.com/arkenio/gogeta

It's a reverse proxy that runs globally (on every node) and routes traffic, based on domain entries in etcd, to the specific containers. You could then set up your service files to add and remove their presence in etcd, which Gogeta monitors.

ExecStart=<do something>
ExecStartPost=/usr/bin/etcdctl set /services/<your_service>/%i/location '{"host": "%H", "port": <your container's exposed port>}'

ExecStop=/usr/bin/docker stop <your service>
ExecStopPost=/usr/bin/etcdctl rm --recursive /services/<your_service>/%i

It works and load-balances requests with a round-robin strategy, though there is an issue I filed: https://github.com/arkenio/gogeta/issues/10

Does that help you?

Julian Kaffke

You could use a trio of nginx, etcd & confd to get this done. There is a great blog post titled "Load balancing with CoreOS, confd and nginx" that walks you through running three containers.

  1. You need a shared "data" container, where you can store the dynamically generated nginx configuration
  2. You need a container running confd, which will read values from etcd and dynamically generate the nginx configuration for you (saved into a volume from the shared "data" container)
  3. Lastly, you'll need nginx, which simply uses that shared "data" volume for its configuration.
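The three steps above could be wired together roughly like this (image names and the etcd endpoint are placeholders; the blog post uses its own images):

    # 1. A data-only container holding the generated nginx config
    docker run --name nginx-data -v /etc/nginx busybox true

    # 2. confd, watching etcd and writing configs into the shared volume
    docker run -d --name confd --volumes-from nginx-data \
        my/confd -backend etcd -node http://172.17.42.1:4001 -interval 10

    # 3. nginx, reading its config from the same shared volume
    docker run -d --name nginx --volumes-from nginx-data -p 80:80 nginx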

The key then is to have each HTTP backend announce itself via etcd; confd will pick up the changes and reconfigure nginx on the fly. This process is very similar to what @Julian described in the previous answer:

ExecStart=<do something>
ExecStartPost=/usr/bin/etcdctl set /services/<your_service>/%i/location '{"host": "%H", "port": <your container's exposed port>}'

ExecStop=/usr/bin/docker stop <your service>
ExecStopPost=/usr/bin/etcdctl rm --recursive /services/<your_service>/%i

Check out the confd template docs for more examples, but you'll have something roughly like this:

{{range $dir := lsdir "/services/web"}}
upstream {{base $dir}} {
    {{$custdir := printf "/services/web/%s/*" $dir}}{{range gets $custdir}}
    server {{$data := json .Value}}{{$data.IP}}:80;
    {{end}}
}

server {
    server_name {{base $dir}}.example.com;
    location / {
        proxy_pass http://{{base $dir}};
    }
}
{{end}}
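For confd to render that template, it also needs a template resource file. A sketch, assuming the file names and reload command (adjust to your layout):

    # /etc/confd/conf.d/nginx.toml (names are assumptions)
    [template]
    src        = "nginx.tmpl"
    dest       = "/etc/nginx/conf.d/services.conf"
    keys       = [ "/services/web" ]
    check_cmd  = "/usr/sbin/nginx -t -c /etc/nginx/nginx.conf"
    reload_cmd = "/usr/sbin/nginx -s reload"

The check_cmd guards against confd installing a broken configuration: nginx is only reloaded if the rendered file passes nginx -t.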

Just to note, you'll only need one of these "trios" running, unless you want higher availability, in which case you'll need two or more. When you go HA, you'd probably want to front them with an ELB instance.

Kenneth Kalmer