0

It appears I've been way too focused on getting the multi-master replicating database system running, and I haven't looked much into the web server cluster, the load balancer in front of it, or SSL certificates.

Planned setup:

  1. DigitalOcean Floating IP: the floating IP is reassigned programmatically based on load balancer state.

  2. 2x HAProxy servers: these perform the load balancing, and the floating IP is assigned to the active one.

  3. 3x application servers: these run Nginx and Apache2, managed through ISPConfig as the web hosting panel.

I want to enable HTTPS / SSL certificates, and I want my users to be able to use Let's Encrypt. At the same time, I want the system to be able to serve any domain from any backend, without having to direct traffic to a specific backend depending on which SSL certificate it holds. I read a few guides online but only got more confused, so I'm asking here.

I've read that HAProxy can terminate SSL and then send the internal traffic unencrypted over the private network. The problem is that I couldn't find any in-depth guide on how to handle domain routing, SSL certificates and so on when you have multiple domains. I have about 5 personal project websites, and I'm also helping a few non-profits with their websites, so multi-domain support is crucial.
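From the guides I've skimmed, my rough understanding of the HAProxy side is something like the sketch below (the certificate directory and the 10.0.0.x backend addresses are just placeholders, not my actual setup):

    frontend https_in
        # Terminate SSL here; HAProxy picks the matching certificate per domain
        # (via SNI) from the combined cert+key .pem files in this directory.
        bind *:443 ssl crt /etc/haproxy/certs/
        mode http
        # Let the backends know the original request came in over HTTPS
        http-request set-header X-Forwarded-Proto https
        default_backend app_servers

    backend app_servers
        mode http
        balance roundrobin
        # Any domain can go to any application server, plain HTTP on the private network
        server app1 10.0.0.11:80 check
        server app2 10.0.0.12:80 check
        server app3 10.0.0.13:80 check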

ISPConfig supports multi-webserver setups and can store its configuration in an external database that other ISPConfig installs read from.

Is it easy to implement HTTPS / SSL certificates in this setup, and how would I go about doing it?

I'm sorry if this question is very noob-ish or self-explanatory, but I have never handled SSL certificate configuration on web servers before, and I find it important to learn.

StianM
  • 75
  • 1
  • 8
  • If you were capable of installing the application servers, you'll manage what's called "SSL offloading" on the haproxies too. Plenty of examples to be found. – Gerard H. Pille Apr 20 '18 at 11:16
  • Hi, thank you for the answer. In most of the examples I have found, haproxy routes users to a specific backend depending on what application the user is browsing for. Mind sharing some of the examples you are thinking of? – StianM Apr 20 '18 at 11:59
  • Since all your backends run all your applications, I'd use some kind of load-balancing, perhaps with sticky sessions. – Gerard H. Pille Apr 20 '18 at 12:10
  • Yes. As I mentioned in my setup, I will be running a load balancer with HAproxy. The two are set up with corosync and pacemaker, and can programmatically switch a floating IP over, they are enabled with auto_tie_breaker to prevent split-brain. I'm just unsure of how to handle the SSL and multi tenant/domain part of it. – StianM Apr 20 '18 at 12:18
  • There's no need to "handle" the ssl. Just tell HaProxy where you store your certificates (on the "bind :443" line), and you can deal with the requests as if they were plain http. – Gerard H. Pille Apr 20 '18 at 12:24
  • You'd need a lot of traffic to need two front end servers, or a need for high availability. Not sure how the DO "floating IP" works, generally you use a reverse proxy or load balancer. Nginx can terminate as many SSL websites as you like and pass the traffic to as many servers as you like, encrypted or unencrypted, and I expect HAProxy can do the same. – Tim Apr 21 '18 at 02:30
  • @Tim This is mainly a future-oriented system, and I'm expecting lots of traffic. It is also a project related to my exam coming up next summer, where I will present the results of this research as part of a practical project, as well as full documentation of the setup and the uptime and stats I have been tracking. It's also a big part of the self-teaching I'm doing alongside my apprenticeship! ^^ – StianM Apr 21 '18 at 18:40
  • @Tim All the DO floating IP does is point an anchor IP at the HAProxy setup, so I don't think it is any different from keepalived/virtual IP. I believe HAProxy is sufficient as well, as it shares the status of "industry standard" together with Nginx! ^^ – StianM Apr 21 '18 at 18:43

1 Answer

0

This is not at all easy or straightforward because you are using Let's Encrypt. With Let's Encrypt, the certificates and private keys are created and stored as files on the filesystem in response to an HTTP validation callback from Let's Encrypt. But because you are load balancing, exactly one of your webservers will handle that callback and will therefore create the necessary files on its filesystem. In essence, you'd have to share your keystore across all of your webservers so that every webserver sees every SSL certificate and every pending Let's Encrypt challenge. After that, you'd have to make sure each webserver or load balancer has its configuration pointing to the correct certs.
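One common way to make that callback land somewhere predictable, rather than on whichever webserver the balancer happens to pick, is to route the ACME challenge path at the load balancer itself. A minimal sketch, assuming HAProxy and a certbot standalone listener; the port, names, and addresses are placeholders:

    frontend http_in
        bind *:80
        mode http
        # Send Let's Encrypt HTTP-01 validation requests to one known place
        acl is_acme path_beg /.well-known/acme-challenge/
        use_backend letsencrypt if is_acme
        default_backend app_servers

    backend letsencrypt
        mode http
        # e.g. certbot running on the load balancer in standalone mode:
        #   certbot certonly --standalone --http-01-port 8888 -d example.org
        # or point this at a single designated webserver that writes the
        # resulting certs to the shared keystore mentioned above.
        server acme 127.0.0.1:8888

The certificates still have to end up wherever SSL is actually terminated, so the shared-keystore point above applies either way.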

2ps
  • 1,106
  • 8
  • 14
  • I will be running a GlusterFS cluster for storing files that need to be available on multiple hosts, including site-specific files. Would it be ideal to store the Let's Encrypt files there? – StianM Apr 21 '18 at 16:27
  • Your last sentence is not finished. Also the Let's Encrypt termination can be handled by the load balancers, and webservers can have different certificates, without the need to share them. – Patrick Mevzek Apr 21 '18 at 17:57
  • @PatrickMevzek: re termination at the load balancers, since the web servers would respond to the Let's Encrypt callback, the Let's Encrypt certs *would have to be stored in a location the webservers have access to*. If that is shared with the load balancers, that is okay, but I don't see how haproxy could be set up to handle the Let's Encrypt callback without damaging the haproxy => webserver traffic flow. Re: different certs on different webservers, since the webservers are load balanced, there is no guarantee that each of them would get a cert, which is necessary. – 2ps Apr 22 '18 at 18:42
  • @StianM: it only matters if you are trying to have security isolation between different tenants. Is there any worry that tenant A might maliciously access tenant B's certs? If the answer is no, then yes. If the answer is yes, and security between tenants is important, then you will have to consider something similar but separate, with admin-only permissions. – 2ps Apr 22 '18 at 18:43
  • @PatrickMevzek: good catch re the last sentence--updated above. – 2ps Apr 22 '18 at 18:44
  • See for example https://fxdata.cloud/tutorials/secure-haproxy-with-let-s-encrypt-on-ubuntu-14-04#! to show how to let the load balancer handle Let's encrypt HTTP validation itself. – Patrick Mevzek Apr 22 '18 at 18:50
  • Thank you @Patrick for this configuration guide! It makes things clearer. Is there any way to also secure the access without breaking simplicity? How is a multi-tenant setup with SSL done in production? I don't believe, and hope, that my tenants will try anything malicious, but I stand firmly by the principle that if anything can go wrong, it will, and that's what I'd like to prepare for. The guide does well describing the path from the user to the load balancer, but how do I secure the path from the load balancer to the web server while still maintaining load balancing and functionality? – StianM Apr 23 '18 at 09:52
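Regarding that last question about protecting the hop between the load balancer and the webservers: one common HAProxy pattern (an assumption here, not something covered by the linked guide) is to re-encrypt traffic to the backends and verify their certificates against an internal CA. A minimal sketch; the addresses and CA path are placeholders:

    backend app_servers
        mode http
        balance roundrobin
        # Re-encrypt on the private network: HAProxy connects to each backend
        # over TLS and verifies its certificate against an internal CA.
        server app1 10.0.0.11:443 ssl verify required ca-file /etc/haproxy/internal-ca.pem check
        server app2 10.0.0.12:443 ssl verify required ca-file /etc/haproxy/internal-ca.pem check
        server app3 10.0.0.13:443 ssl verify required ca-file /etc/haproxy/internal-ca.pem check

The backends' internal certificates are independent of the per-tenant Let's Encrypt certificates that HAProxy presents to the outside world.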