
RFC 7540, § 9.1.1 states that

Connections [...] MAY be reused for requests with multiple different URI authority components [...] as long as the origin server is authoritative [...].

So, for example, if the same origin server can serve `foo.example.com` and `bar.example.com`, then the client may reuse a connection to issue requests to both destinations. When that's not desired, the same section says

A server that does not wish clients to reuse connections can indicate that it is not authoritative for a request by sending a 421 (Misdirected Request) status code in response to the request [...].

This can arise in some unexpected situations, such as when virtual servers are used and their TLS configuration differs but they share a certificate using wildcards or subject alternative names.

Unfortunately, the end result is one or more extra round trips: the client optimistically reuses a connection, the server rejects the request with a 421, and the client has to open a new connection and try again. In the worst case, this can be as bad as, or even worse than, plain HTTP/1.1 without connection reuse. It is especially bad when the same origin server serves many destinations sought by the same client: each new connection opened in response to a 421 still looks reusable to the client, so 421s can arrive almost as often as useful responses.

Assuming the underlying problem is intractable, or at least that the conditions under which a 421 is sent are beyond the server administrator's control even though it is known in advance that they will occur: is there a way to tell clients up front not to reuse connections across domains? This would preserve the primary benefit of HTTP/2 connection reuse, namely multiplexing many requests to the same domain on a single connection, while avoiding foreseeable 421 responses.

kbolino
  • @anx `foo.example.com` requires the client to supply a certificate for client authentication, `bar.example.com` does not, and the cert is issued for `*.example.com` – kbolino Aug 05 '19 at 18:21

2 Answers


The nuclear option is simply to put the server needing the special configuration on a separate IP address, so that the browser can't reuse the connection. If the site is meant to be accessible from the Internet, it must be a separate global IP address, not a separate RFC 1918 address on your local network.
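A sketch of that separate-address setup, assuming nginx, with placeholder addresses from the 192.0.2.0/24 documentation range and hypothetical certificate paths:

```nginx
# foo requires client certificates, so it gets its own address;
# a browser will not coalesce connections across different IPs,
# even though both names are covered by *.example.com.
server {
    listen 192.0.2.10:443 ssl http2;
    server_name foo.example.com;
    ssl_certificate     /etc/ssl/wildcard.example.com.pem;  # placeholder path
    ssl_certificate_key /etc/ssl/wildcard.example.com.key;
    ssl_verify_client   on;   # the "special" TLS configuration
}

server {
    listen 192.0.2.11:443 ssl http2;
    server_name bar.example.com;
    ssl_certificate     /etc/ssl/wildcard.example.com.pem;
    ssl_certificate_key /etc/ssl/wildcard.example.com.key;
}
```

DNS for each name must also point at its own address, of course, or the separation has no effect.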

Michael Hampton

You seem to think that the 421 error is outside the server administrator's control. This is not true: it occurs entirely because of the server administrator's configuration choices.

If you do not wish it to occur, use different TLS certificates, in addition to the different TLS configurations, for the names that should not share an HTTP/2 connection. Because a connection can only be reused for names covered by its TLS certificate, a non-matching name forces the client to open a new connection.
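As a sketch, assuming nginx and hypothetical certificate paths, the per-name certificate setup might look like this: each name that needs a different TLS configuration presents its own certificate instead of the shared wildcard.

```nginx
# Each name presents its own certificate, so a connection opened for
# bar.example.com is never authoritative for foo.example.com; the
# client opens a fresh connection instead of triggering a 421.
server {
    listen 443 ssl http2;
    server_name foo.example.com;
    ssl_certificate     /etc/ssl/foo.example.com.pem;  # name-specific cert
    ssl_certificate_key /etc/ssl/foo.example.com.key;
    ssl_verify_client   on;   # the differing TLS configuration
}

server {
    listen 443 ssl http2;
    server_name bar.example.com;
    ssl_certificate     /etc/ssl/bar.example.com.pem;
    ssl_certificate_key /etc/ssl/bar.example.com.key;
}
```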

Michael Hampton
  • These configuration choices may be driven by factors outside the administrator's control, such as a requirement for legacy SSL compatibility or client certificate authentication. Your response seems to assume that the server administrator is a certificate authority administrator; it may not be feasible to get new certificates on the fly for every subdomain. – kbolino Aug 05 '19 at 18:25
  • @kbolino The reasons for doing so may not be in the administrator's control, but actually configuring the server is the administrator's doing. – Michael Hampton Aug 05 '19 at 18:44
  • Yes, that is true. But the server admin, in this case, is not the one deciding when to send 421. It is the server software (nginx, httpd) that makes that decision. As far as I can tell, there is no way to override that (and it may even be impossible; TLS does not allow the server to request a client certificate after the initial handshake as far as I know). – kbolino Aug 05 '19 at 19:00
  • @kbolino The point is, you already know, before you go to configure the server, which hostnames need different TLS configurations. You can then obtain certificates on that basis. – Michael Hampton Aug 05 '19 at 19:01
  • Yes, but that multiplies the initial effort and the maintenance burden (certs expire). Ten subdomains means ten certs, a hundred subdomains is a hundred certs, and so on. This is an HTTP protocol behavior/limitation; it should IMO be resolved at the HTTP layer, not by PKI or DNS. I was hoping there was a better way. – kbolino Aug 05 '19 at 19:07
  • @kbolino If you have other reasons for making them all separate certs, then that's unrelated to this question. But for this specific HTTP/2 issue that you have raised, they only need to be separate certs if they need separate TLS configurations. – Michael Hampton Aug 05 '19 at 19:08
  • There is also a way to do this for the specific case of client certificates from the same CA; the server can be configured to ask but not require a cert on all subdomains, then just 403 if the cert is required but not supplied on a specific subdomain. But, again, this is pushing an HTTP problem into the TLS layer (so does CORS preflight FWIW, so this pain is there already). – kbolino Aug 05 '19 at 19:11
  • Once *one* subdomain needs a separate cert, then *all* subdomains will too. Unless there is an X.509 extension for "this wildcard cert does not apply to these specific subdomains" (I only know of CA restrictions). – kbolino Aug 05 '19 at 19:13
  • @kbolino Hm, I missed the bit about using wildcard certs. in that case this won't work anyway. You'll have to go for the nuclear option. – Michael Hampton Aug 05 '19 at 19:15
  • IPv4 addresses are rare and costly. Recommending to eat up all remaining addresses for shared web hosting is just unrealistic. And not everybody has IPv6 connectivity, where separate IP addresses are no problem. So I conclude that HTTP/2 cannot be used with wildcard certificates. That's sad. So HTTP/2 doesn't seem to be the future. Or wildcard certificates are to disappear. Either one. – ygoe Dec 17 '20 at 13:19
  • I concur with both kbolino and ygoe. SAN and wildcard certs have been a very valid part of PKI and HTTP for many years now, and suddenly HTTP/2 has pulled the rug from under our feet. We're also finding extra IPv4 addresses unreasonably expensive to work around HTTP/2's shortfall. Annoyingly this whole problem only seems to be happening with Apple web browsers... – Adambean Apr 27 '22 at 15:09