Among other things, NGINX's documentation describes a configuration for "Active-Active HA for NGINX Plus on AWS Using AWS Network Load Balancer". The AWS NLB (Network Load Balancer) balances connections at OSI layer 4 to nginx servers behind it, which do layer 7 application load balancing. We have a similar setup (using open source nginx, not NGINX Plus) and want to use the Network Load Balancer because we have a wide variety of traffic coming in and currently need 4 nginx servers to handle it all. We need everything from UDP load balancing to HTTP/2 and WebSockets.
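For concreteness, each nginx server handles a mix along these lines (a trimmed, untested sketch; the upstream names, addresses, and ports are placeholders, not our real config):

    # stream context: plain UDP load balancing to a hypothetical backend
    stream {
        upstream udp_backend {
            server 10.0.1.10:5514;
            server 10.0.1.11:5514;
        }
        server {
            listen 5514 udp;
            proxy_pass udp_backend;
        }
    }

    # http context: HTTP/2 toward clients, WebSocket upgrades proxied through
    http {
        upstream app_backend {
            server 10.0.2.10:8080;
        }
        server {
            listen 443 ssl http2;   # TLS directives omitted here; see below
            location /ws/ {
                proxy_pass http://app_backend;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
            }
        }
    }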
The obvious missing feature in the NGINX example is TLS, which we need to support. Ideally we would have TLS terminated by the NLB itself. Amazon recently added that capability to the NLB, but we are on Kubernetes 1.12, and integrating NLB TLS termination into Kubernetes was complicated enough that it has been deferred until Kubernetes 1.15, with even a backport to 1.14 rejected as too hard. So while we wait for kops and EKS to catch up, and given how difficult the developers who implemented it felt it was to do correctly, we are going to continue terminating TLS on the nginx servers.
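Concretely, terminating on nginx looks like the usual server-block setup (the certificate paths and names below are placeholders):

    server {
        listen 443 ssl http2;
        server_name example.com;                            # placeholder

        ssl_certificate     /etc/nginx/tls/example.com.crt; # placeholder paths
        ssl_certificate_key /etc/nginx/tls/example.com.key;
        ssl_protocols       TLSv1.2;

        location / {
            proxy_pass http://app_backend;                  # hypothetical upstream
        }
    }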
This mostly works, but as you may know, the SSL handshake is very expensive, and we would like to cut down on that expense. nginx provides two ways to reduce the number of handshakes. The first, sending multiple requests over a single TCP/TLS connection, works fine with the NLB, and we use it as much as our customers' clients support it. Unfortunately the second, "reusing SSL session parameters to avoid SSL handshakes for parallel and subsequent connections", does not work. With session reuse, each full SSL handshake creates an SSL session, and future connections can reuse those session parameters by presenting the SSL session ID. This lets a web browser open 6 parallel TCP connections while doing only one full SSL handshake (for the first connection); the remaining 5 resume the same SSL session.
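For reference, the first mechanism is just connection reuse (HTTP keep-alive, plus HTTP/2 multiplexing), which we tune with directives like these (values illustrative, not a recommendation):

    http {
        keepalive_timeout  65s;    # keep client TCP/TLS connections open for reuse
        keepalive_requests 1000;   # allow many requests per connection
    }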
nginx provides an ssl_session_cache to keep track of these sessions on the server side, but, as documented, it is at most a single cache shared among all the workers on a single server. That is excellent when using only 1 server, and better than nothing when using more than one, but in practice with 4 servers it does not help much: unless all 6 browser connections land on the same server, one connection will hit a server that rejects the session ID, forcing it to redo the full handshake and invalidating the previous session ID for new connections, even though the session would still be valid if appropriately routed.
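What we run today is the standard per-server shared cache, e.g.:

    http {
        # one cache shared by all workers on *this* server only;
        # per the nginx docs, 1 MB holds roughly 4000 sessions
        ssl_session_cache   shared:SSL:10m;
        ssl_session_timeout 10m;
    }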
The solution that comes to mind is to use something like memcached to back the ssl_session_cache so that all the nginx servers share one cache. (For various reasons we would prefer not to use SSL session tickets, the most compelling being that in our specific situation a lot of our clients will not use them.) I see that kubernetes/ingress-nginx switched from nginx to OpenResty, which takes nginx and adds Lua support and Lua modules, and that there is a Lua module for memcached, but beyond the rough sketch below I don't know how to navigate all that Lua module integration. The documentation I found for the memcached module suggests it is intended for content caching, not for something internal like SSL session caching.
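From skimming the OpenResty documentation, the pieces that look relevant are the ssl_session_fetch_by_lua_block and ssl_session_store_by_lua_block hooks from lua-nginx-module, the ngx.ssl.session API from lua-resty-core, and lua-resty-memcached. The following is the rough, untested shape of what I imagine; the memcached hostname, key prefix, timeouts, and expiry are all made-up placeholders, and I am assuming the session ID string is safe to use as part of a memcached key:

    http {
        # both hooks must be declared at http{} level: they run before
        # any server{} is selected for the connection

        ssl_session_fetch_by_lua_block {
            local ssl_sess = require "ngx.ssl.session"
            local memcached = require "resty.memcached"

            local sess_id, err = ssl_sess.get_session_id()
            if not sess_id then
                return  -- client offered no session ID; do the full handshake
            end

            local memc = memcached:new()
            memc:set_timeout(200)  -- ms; fail fast so handshakes don't stall
            local ok, err = memc:connect("memcached.internal", 11211)  -- placeholder
            if not ok then
                ngx.log(ngx.ERR, "memcached connect failed: ", err)
                return
            end

            local sess, flags, err = memc:get("sslsess:" .. sess_id)
            memc:set_keepalive(10000, 100)
            if sess then
                ssl_sess.set_serialized_session(sess)
            end
        }

        ssl_session_store_by_lua_block {
            local ssl_sess = require "ngx.ssl.session"

            local sess_id, err = ssl_sess.get_session_id()
            local sess, err2 = ssl_sess.get_serialized_session()
            if not sess_id or not sess then
                return
            end

            -- per the docs, this context must not yield, so cosockets are
            -- unavailable here; hand the write off to a zero-delay timer
            local ok, err = ngx.timer.at(0, function(premature, id, data)
                if premature then
                    return
                end
                local memcached = require "resty.memcached"
                local memc = memcached:new()
                memc:set_timeout(200)
                local ok, err = memc:connect("memcached.internal", 11211)  -- placeholder
                if not ok then
                    ngx.log(ngx.ERR, "memcached connect failed: ", err)
                    return
                end
                memc:set("sslsess:" .. id, data, 600)  -- expiry matching ssl_session_timeout
                memc:set_keepalive(10000, 100)
            end, sess_id, sess)
            if not ok then
                ngx.log(ngx.ERR, "failed to create timer: ", err)
            end
        }
    }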
So: (1) is there some pre-built way for open source nginx or OpenResty to share an ssl_session_cache across multiple servers? And (2) does it seem reasonable to try to build a shared cache out of the Lua modules, and if so, how do I actually go about doing that?