
I realize I may be asking the impossible here --

At the moment I use too many Elastic IPs, and they cost money. Not only that, but in a disaster scenario it takes time to reattach 25-30 Elastic IPs to a machine. Those Elastic IPs are all bound to one machine and have a 1-to-1 mapping to internal machines. I wanted to decrease the number of IPs and consolidate all my usage onto one or two Elastic IPs, then route and forward all these hosts to their respective machines from that origin IP. I haven't gotten this to work, though, because of the nature of HTTP vs. TCP -- and the lack of extra headers. I'm looking for suggestions on what I'm missing, or how else I could solve this.

I have many different hosts that operate on TCP ports -- for example:

mydbserver.mycompany.com:3306
myseparatedb.mycompany.com:3306
anotherdb.mycompany.com:3306
myrdppc.mycompany.com:3389
mywebserver.mycompany.com:80/:443
myelasticsearch.mycompany.com:9200
mybamboo.mycompany.com:88
mybitbucket.mycompany.com:9090

etc. etc.

Example consolidated IP: 200.31.21.11
HAProxy bound IP: 3.2.44.1

I'm using a combination of iptables and HAProxy: iptables on the gateway machine, and HAProxy behind it on another machine. I came up with the idea of assigning a different DNS entry to each of these machines, each pointing at the same "gateway" IP. iptables is set up to listen for any request going to 200.31.21.11 on any port and forward it to HAProxy on port 999 (a random port I picked), meaning that HAProxy has a frontend that binds to 3.2.44.1:999.
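Concretely, the gateway rules look roughly like this (a sketch: it assumes the gateway is a Linux box with `ip_forward` enabled, and the MASQUERADE step is only needed if HAProxy's return traffic doesn't already route back through the gateway):

```
# Gateway: send every TCP port arriving for the consolidated
# Elastic IP on to the HAProxy machine on port 999.
iptables -t nat -A PREROUTING -d 200.31.21.11 -p tcp \
    -j DNAT --to-destination 3.2.44.1:999

# Rewrite the source so replies flow back through the gateway
# (skip this if HAProxy's default route already points here).
iptables -t nat -A POSTROUTING -d 3.2.44.1 -p tcp --dport 999 \
    -j MASQUERADE
```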

The frontend would then point a request to one of the many backends, thereby routing the request to the appropriate machine.

First I tried to match headers on the request using ACLs -- if the Host header matched mydbserver.mycompany.com, route it to the appropriate backend, which would then forward the request to that machine on port 3306.
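A minimal sketch of that attempt (the backend name and internal address are placeholders):

```
frontend consolidated
    bind 3.2.44.1:999
    mode http                         # HTTP-only: this is the catch
    acl is_dbserver hdr(host) -i mydbserver.mycompany.com
    use_backend be_dbserver if is_dbserver

backend be_dbserver
    mode http
    server db1 10.0.0.10:3306         # placeholder internal machine
```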

This, however, does not work: Host headers exist only in HTTP, so they aren't present on raw TCP connections, and HAProxy can't speak HTTP to a database. That's the problem -- at the TCP level there's no way to tell which hostname a connection was aimed at.

I like this solution because I'd only be charged for the one or two Elastic IPs I'm using, I'd be able to quickly assign them to any new machine, and, given the nature of Elastic IPs, they'd exist forever.

Is there an advanced concept or process I can use to look at the TCP request itself and somehow link the two together? Or maybe something in HAProxy that I'm missing?

  • haproxy can use [SNI SSL passthrough](https://serverfault.com/questions/625362/can-a-reverse-proxy-use-sni-with-ssl-pass-through). So if *everything* is configured to use SSL/TLS+SNI, you could use haproxy to differentiate correctly (see the sketch after these comments). Of course that means getting certs and reconfiguring every client and server ([mariadb](https://mariadb.com/kb/en/library/secure-connections/), [rdp](https://smallbusiness.chron.com/secure-remote-desktop-connections-using-tls-ssl-based-authentication-47378.html), [elastic](https://www.elastic.co/blog/tls-elastic-stack-elasticsearch-kibana-logstash-filebeat), etc.) – A.B Sep 23 '18 at 14:29
  • btw: I don't know if SNI can be used by every protocol client, and SNI is required for the differentiation to work. The SNI field happens to be unencrypted, "leaking" the requested hostname -- that's why it can be used. – A.B Sep 23 '18 at 14:37
  • Precisely - the task of reconfiguring every server, app, and service that a developer *might* use is way beyond the scope of what I can do here. Thank you for the comment though – Muradin007 Sep 24 '18 at 15:50
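A minimal sketch of the SNI passthrough routing A.B suggests, assuming every client opens the connection with TLS and sends SNI (the backend addresses are placeholders):

```
frontend tls_mux
    bind 3.2.44.1:999
    mode tcp
    # Hold the connection until the TLS ClientHello arrives,
    # so the (cleartext) SNI field can be inspected.
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    use_backend be_web     if { req_ssl_sni -i mywebserver.mycompany.com }
    use_backend be_elastic if { req_ssl_sni -i myelasticsearch.mycompany.com }

backend be_web
    mode tcp
    server web1 10.0.0.20:443      # placeholder internal address

backend be_elastic
    mode tcp
    server es1 10.0.0.30:9200      # placeholder internal address
```

One caveat on top of A.B's: this only works for protocols that begin the connection with a TLS ClientHello. Protocols that upgrade to TLS inside their own handshake (MySQL/MariaDB, for example) never show HAProxy a ClientHello to inspect, so they can't be routed this way.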

0 Answers