
I have a TCP server and haproxy running in Docker containers on Debian, using Docker's bridged network mode. I have increased the ephemeral port range so that I can connect around 50k clients per IP. To get past 50k clients, I exec into the haproxy container and create 4 virtual network interfaces with different IPs using these commands:
ifconfig eth0:1 172.17.0.100
ifconfig eth0:2 172.17.0.101
ifconfig eth0:3 172.17.0.102
ifconfig eth0:4 172.17.0.103
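(On newer Debian images `ifconfig` may be absent; if so, the same aliases can be added with `ip` from iproute2. A sketch assuming the same addresses and the bridge's /24 mask:)

```shell
# Add secondary IPv4 addresses to eth0, labelled like the ifconfig aliases
ip addr add 172.17.0.100/24 dev eth0 label eth0:1
ip addr add 172.17.0.101/24 dev eth0 label eth0:2
ip addr add 172.17.0.102/24 dev eth0 label eth0:3
ip addr add 172.17.0.103/24 dev eth0 label eth0:4
```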
eth0 is already available in the haproxy container. This way I can get around 200k clients connected. And here's my haproxy configuration:

global
  ulimit-n 999999
  maxconn 500000
  maxpipes 200000
  tune.ssl.default-dh-param 2048
  nbproc 8
  cpu-map 1 0
  cpu-map 2 1
  cpu-map 3 2
  cpu-map 4 3
  cpu-map 5 4
  cpu-map 6 5
  cpu-map 7 6
  cpu-map 8 7

defaults
  timeout connect 5000
  timeout client 50000
  timeout server 50000

listen mqtt
  bind *:1883
  bind *:1884 ssl crt /etc/ssl/myapp.pem
  mode tcp
  maxconn 500000
  balance roundrobin
  server broker1 myapp:1883 source 172.17.0.100
  server broker2 myapp:1883 source 172.17.0.101
  server broker3 myapp:1883 source 172.17.0.102
  server broker4 myapp:1883 source 172.17.0.103

I have linked the myapp container to haproxy in the docker run command. So, is there a way to create the virtual network interfaces automatically when I run the haproxy docker container, or in a Dockerfile or using docker networks?
Please advise. Thanks

1 Answer


You can manage this with OpenSVC (https://www.opensvc.com):

  • install the opensvc agent (https://repo.opensvc.com)
  • create a service (svcmgr -s xxxxxx create)
  • fill in the service configuration file (svcmgr -s xxxxxx edit config)
  • start/stop service to test your application stack (svcmgr -s xxxxxx start --local)
  • query status at the service level (svcmgr -s xxxxxx print status)
  • query status at the agent level (svcmon)
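The steps above can be sketched as a shell session (the service name `xxxxxx` is a placeholder for your own):

```shell
# Create the service, fill in its configuration, then start and inspect it
svcmgr -s xxxxxx create
svcmgr -s xxxxxx edit config
svcmgr -s xxxxxx start --local
svcmgr -s xxxxxx print status
# Cluster-wide view from the agent
svcmon
```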

Please find below a sample OpenSVC service configuration file that matches your needs:

[DEFAULT]                                                                                                                                                                  
id = 68ec6a49-d3ee-42ea-831d-78db92bab972                                                                                                                                  

[ip#0]                                                                                                                                                                     
type = docker                                                                                                                                                              
ipname = 172.17.0.100                                                                                                                                                      
ipdev = {env.bridge}                                                                                                                                                       
netmask = 255.255.255.0                                                                                                                                                    
container_rid = {env.networkcontainer}                                                                                                                                     
mode = bridge                                                                                                                                                              

[ip#1]                                                                                                                                                                     
type = docker                                                                                                                                                              
ipname = 172.17.0.101                                                                                                                                                      
ipdev = {env.bridge}                                                                                                                                                       
netmask = 255.255.255.0                                                                                                                                                    
container_rid = {env.networkcontainer}                                                                                                                                     
mode = bridge                                                                                                                                                              

[ip#2]                                                                                                                                                                     
type = docker                                                                                                                                                              
ipname = 172.17.0.102                                                                                                                                                      
ipdev = {env.bridge}
netmask = 255.255.255.0
container_rid = {env.networkcontainer}
mode = bridge

[ip#3]
type = docker
ipname = 172.17.0.103                                                                                                                                                      
ipdev = {env.bridge}                                                                                                                                                       
netmask = 255.255.255.0                                                                                                                                                    
container_rid = {env.networkcontainer}                                                                                                                                     
mode = bridge                                                                                                                                                              

[container#0]
type = docker
run_image = busybox:latest
run_args = -i -t --rm --net=none
    -v /etc/localtime:/etc/localtime:ro
run_command = /bin/sh

[container#1]
type = docker
run_image = toke/mosquitto
run_args = --rm --net=container:{svcname}.container.0
    -v {env.base_dir}/data/mqtt/config:/mqtt/config:ro
    -v {env.base_dir}/data/mqtt/log:/mqtt/log:ro
    -v {env.base_dir}/data/mqtt/data:/mqtt/data:ro
    -v /etc/localtime:/etc/localtime:ro
disable = true

[container#2]
type = docker
run_image = haproxy:latest
run_args = --rm --net=container:{svcname}.container.0
    -v {env.base_dir}/data/haproxy:/usr/local/etc/haproxy:ro
    -v /etc/localtime:/etc/localtime:ro
disable = true

[env]
networkcontainer = container#0
bridge = docker0
base_dir = /srv/{svcname}

Here are some logs from this setup; the OpenSVC service is named "demovnic" and runs on my "xps13" laptop:

[root@xps13 tmp]# svcmgr -s demovnic start --local
xps13.demovnic.ip#3        checking 172.17.0.103 availability                                                                                                 
xps13.demovnic.ip#2        checking 172.17.0.102 availability                                                                                                 
xps13.demovnic.ip#1        checking 172.17.0.101 availability                                                                                                 
xps13.demovnic.ip#0        checking 172.17.0.100 availability                                                                                                 
xps13.demovnic.container   container#2,container#1 disabled                                                                                                   
xps13.demovnic.container   container#2,container#1 disabled                                                                                                   
xps13.demovnic.container#0   docker run -d --name=demovnic.container.0 -i -t --rm --net=none -v /etc/localtime:/etc/localtime:ro busybox:latest /bin/sh       
xps13.demovnic.container#0   output:                                                                                                                          
xps13.demovnic.container#0   5da0f43b5b6eba14f0b04a240403237735a5ae0a97a88f54626e45c24024e245                                                                 
xps13.demovnic.container#0   wait for up status                                                                                                               
xps13.demovnic.container#0   wait for container operational
xps13.demovnic.ip#0        bridge mode
xps13.demovnic.ip#0        /sbin/ip link add name veth0pl30562 mtu 1500 type veth peer name veth0pg30562 mtu 1500
xps13.demovnic.ip#0        /sbin/ip link set veth0pl30562 master docker0
xps13.demovnic.ip#0        /sbin/ip link set veth0pl30562 up
xps13.demovnic.ip#0        /sbin/ip link set veth0pg30562 netns 30562
xps13.demovnic.ip#0        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip link set veth0pg30562 name eth0
xps13.demovnic.ip#0        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip addr add 172.17.0.100/24 dev eth0
xps13.demovnic.ip#0        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip link set eth0 up
xps13.demovnic.ip#0        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip route replace default dev eth0
xps13.demovnic.ip#0        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 /opt/opensvc/lib/arp.py eth0 172.17.0.100
xps13.demovnic.ip#1        bridge mode
xps13.demovnic.ip#1        /sbin/ip link add name veth1pl30562 mtu 1500 type veth peer name veth1pg30562 mtu 1500
xps13.demovnic.ip#1        /sbin/ip link set veth1pl30562 master docker0
xps13.demovnic.ip#1        /sbin/ip link set veth1pl30562 up
xps13.demovnic.ip#1        /sbin/ip link set veth1pg30562 netns 30562
xps13.demovnic.ip#1        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip link set veth1pg30562 name eth1
xps13.demovnic.ip#1        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip addr add 172.17.0.101/24 dev eth1
xps13.demovnic.ip#1        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip link set eth1 up
xps13.demovnic.ip#1        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip route replace default dev eth1
xps13.demovnic.ip#1        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 /opt/opensvc/lib/arp.py eth1 172.17.0.101
xps13.demovnic.ip#2        bridge mode
xps13.demovnic.ip#2        /sbin/ip link add name veth2pl30562 mtu 1500 type veth peer name veth2pg30562 mtu 1500
xps13.demovnic.ip#2        /sbin/ip link set veth2pl30562 master docker0
xps13.demovnic.ip#2        /sbin/ip link set veth2pl30562 up
xps13.demovnic.ip#2        /sbin/ip link set veth2pg30562 netns 30562
xps13.demovnic.ip#2        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip link set veth2pg30562 name eth2
xps13.demovnic.ip#2        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip addr add 172.17.0.102/24 dev eth2
xps13.demovnic.ip#2        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip link set eth2 up
xps13.demovnic.ip#2        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip route replace default dev eth2
xps13.demovnic.ip#2        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 /opt/opensvc/lib/arp.py eth2 172.17.0.102
xps13.demovnic.ip#3        bridge mode
xps13.demovnic.ip#3        /sbin/ip link add name veth3pl30562 mtu 1500 type veth peer name veth3pg30562 mtu 1500
xps13.demovnic.ip#3        /sbin/ip link set veth3pl30562 master docker0
xps13.demovnic.ip#3        /sbin/ip link set veth3pl30562 up
xps13.demovnic.ip#3        /sbin/ip link set veth3pg30562 netns 30562
xps13.demovnic.ip#3        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip link set veth3pg30562 name eth3
xps13.demovnic.ip#3        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip addr add 172.17.0.103/24 dev eth3
xps13.demovnic.ip#3        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip link set eth3 up
xps13.demovnic.ip#3        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 ip route replace default dev eth3
xps13.demovnic.ip#3        /usr/bin/nsenter --net=/var/run/docker/netns/9860d0aa3267 /opt/opensvc/lib/arp.py eth3 172.17.0.103
xps13.demovnic.container   container#2,container#1 disabled

[root@xps13 tmp]# svcmgr -s demovnic print status 
demovnic                            up                                                              
`- instances                
   `- xps13            up         idle, started    
      |- ip#0               ....... up         172.17.0.100@docker0@container#0                     
      |- ip#1               ....... up         172.17.0.101@docker0@container#0                     
      |- ip#2               ....... up         172.17.0.102@docker0@container#0                     
      |- ip#3               ....... up         172.17.0.103@docker0@container#0                     
      |- container#0        ....... up         docker container demovnic.container.0@busybox:latest 
      |- container#1        ..D..P. n/a        docker container demovnic.container.1@toke/mosquitto 
      `- container#2        ..D..P. n/a        docker container demovnic.container.2@haproxy:latest 

[root@xps13 tmp]# svcmgr -s demovnic docker exec -it demovnic.container.0 ip a | grep -E "eth[0-9]|inet "
    inet 127.0.0.1/8 scope host lo
515: eth0@if516: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
    inet 172.17.0.100/24 scope global eth0
517: eth1@if518: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
    inet 172.17.0.101/24 scope global eth1
519: eth2@if520: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
    inet 172.17.0.102/24 scope global eth2
521: eth3@if522: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
    inet 172.17.0.103/24 scope global eth3

[root@xps13 tmp]# svcmgr -s demovnic stop --local
xps13.demovnic.container   container#2,container#1 disabled
xps13.demovnic.container   container#2,container#1 disabled
xps13.demovnic.ip#3        /usr/bin/nsenter --net=/var/run/docker/netns/6b0d93d7bded ip addr del 172.17.0.103/24 dev eth3
xps13.demovnic.ip#3        checking 172.17.0.103 availability
xps13.demovnic.ip#2        /usr/bin/nsenter --net=/var/run/docker/netns/6b0d93d7bded ip addr del 172.17.0.102/24 dev eth2
xps13.demovnic.ip#2        checking 172.17.0.102 availability
xps13.demovnic.ip#1        /usr/bin/nsenter --net=/var/run/docker/netns/6b0d93d7bded ip addr del 172.17.0.101/24 dev eth1
xps13.demovnic.ip#1        checking 172.17.0.101 availability
xps13.demovnic.ip#0        /usr/bin/nsenter --net=/var/run/docker/netns/6b0d93d7bded ip addr del 172.17.0.100/24 dev eth0
xps13.demovnic.ip#0        checking 172.17.0.100 availability
xps13.demovnic.container#0   docker stop a1714ac9ae3f41170e38fd925c929c1c812787cd38c0ad75cb6bfb505857d551
xps13.demovnic.container#0   output:
xps13.demovnic.container#0   a1714ac9ae3f41170e38fd925c929c1c812787cd38c0ad75cb6bfb505857d551
xps13.demovnic.container#0   wait for down status
xps13.demovnic.container   container#2,container#1 disabled

[root@xps13 opensvc]# svcmgr -s demovnic print status 
demovnic                            down                                                            
`- instances                
   `- xps13                         down       idle                               
      |- ip#0               ....... down       172.17.0.100@docker0@container#0                     
      |- ip#1               ....... down       172.17.0.101@docker0@container#0                     
      |- ip#2               ....... down       172.17.0.102@docker0@container#0                     
      |- ip#3               ....... down       172.17.0.103@docker0@container#0                     
      |- container#0        ....... down       docker container demovnic.container.0@busybox:latest 
      |                                        info: can not find container id                      
      |- container#1        ..D..P. n/a        docker container demovnic.container.1@toke/mosquitto 
      `- container#2        ..D..P. n/a        docker container demovnic.container.2@haproxy:latest 

As you can see, the OpenSVC agent handles the network configuration inside the container's network namespace, and your haproxy and mqtt containers can then inherit that same namespace (thanks to --net=container:{svcname}.container.0).

The mqtt and haproxy containers are disabled in the example because I have no configuration files to provide them. Once you have the configuration files on your side, just edit the service configuration file and remove the "disable = true" lines, or use the command line (svcmgr -s xxxxxx enable --rid container#1,container#2).

alexander.polomodov