
I have a working Nginx setup with OCSP stapling configured. Now I want to add client certificate authentication for a number of URLs.

So I added an ssl_client_certificate directive that points to the CA certificate we use for the restricted URLs (it's a private CA certificate, not signed by any publicly-known CA), and because most of the server should remain publicly available, I set ssl_verify_client to optional.
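
For illustration, here is a stripped-down sketch of what the relevant part of such a configuration looks like; the paths, server name and the /secure/ location are placeholders, not the actual values from my setup:

```nginx
server {
    listen 443 ssl;
    server_name example.com;                          # placeholder

    ssl_certificate        /etc/nginx/ssl/server.crt; # placeholder paths
    ssl_certificate_key    /etc/nginx/ssl/server.key;

    # private CA that signs the client certificates
    ssl_client_certificate /etc/nginx/ssl/client-ca.crt;
    # "optional" so public URLs stay reachable without a certificate
    ssl_verify_client      optional;

    # publicly available part, no certificate check
    location / {
        # ...
    }

    # restricted URLs: reject requests whose client certificate
    # was not presented or did not verify against the private CA
    location /secure/ {
        if ($ssl_client_verify != SUCCESS) {
            return 403;
        }
        # ... protected content ...
    }
}
```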

This works only half-way: I can run requests with cURL(*), both with and without passing a certificate, and receive the expected responses on public URLs as well as on protected ones that check for the presence of the certificate.
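
The tests look roughly like this (URLs and certificate file names are again placeholders):

```sh
# without a client certificate: the public URL should work,
# the protected one should be rejected by the location block
curl -v https://example.com/
curl -v https://example.com/secure/

# with a client certificate/key signed by the private CA:
# the protected URL should now respond normally
curl -v --cert client.crt --key client.key https://example.com/secure/
```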

But now for my question: when accessing the same URLs with a browser (not presenting a certificate), Nginx responds with error 400. What baffles me is that when I use the Firefox Developer Tools to create a cURL command out of any of the failing requests and run it from the command line, it works flawlessly. What could be the problem?

Also, even cURL's -v and --trace-ascii don't show anything that would explain to me why it could fail from within a browser. I'm not pasting the whole (long) configuration here; if you think something elementary is missing, feel free to comment.

Edit: I checked and confirmed that cURL sends the Host, User-Agent, Accept, Accept-Language, Referer and DNT headers as well as the cookies and session ID, and enables compression, just like Firefox does.

Also, neither Firefox nor cURL has a client certificate it could offer to the server, and Firefox is configured to ask before presenting a certificate instead of offering one automatically.

Another edit: after coming back from lunch (no configuration changes in the meantime), Firefox could load the first page and its associated resources once. Now, a few minutes later, after nothing but trying various requests, i.e. no changes, it doesn't work anymore. Chrome also reports error 400, and using "Copy as cURL" (which again includes all headers) from its developer tools shows that the request works in cURL once more. I also tried every request multiple times to make sure there's no inconsistency in the behaviour shown towards a single user agent. I'm stumped; it all seems very random to me.

Marcus Ilgner
  • 400 is a "Bad Request" so, if it works from curl, you must be sending *something* different in the curl request from the Firefox one. Are you including *all* the headers in the curl request - the Firefox User Agent, Accept-Language, Accept-Encoding etc? Look at the Raw Headers in developer tools and try including them all. Alternatively, do "Edit and Resend" in developer tools and remove all but the ones you're sending from curl and see if that works. – seumasmac Oct 04 '15 at 17:57
  • That was my reasoning, too. But it certainly looks like all the headers have been included. Also it worked for a time on my machine but not on a colleague's machine. I'll do some additional investigation tomorrow. – Marcus Ilgner Oct 04 '15 at 17:59
  • Are you using a proxy with Firefox? – seumasmac Oct 04 '15 at 17:59
  • No, networking is the same. My only guess was that there's something going wrong during the TLS handshake. That would explain why it only fails on some platform/client combinations and nothing's visible in the HTTP headers. – Marcus Ilgner Oct 04 '15 at 20:39
  • Getting a 400 most likely means the TLS negotiation succeeded and a secure tunnel was established, but something else failed down the line. If TLS was the culprit you'd get a TLS-related error before any application data (such as the 400 response) could be transmitted. – André Borie Oct 05 '15 at 07:51
  • I just re-checked and cURL transmits the same user agent, all `Accept`-related headers, referrer, DNT header, cookies and session ID. Yet it fails in Firefox but works with cURL. – Marcus Ilgner Oct 05 '15 at 10:58
  • It sounds like there is an element of caching involved, most likely to do with certificates. Browsers are known to be annoyingly strict with certificates, so I think it's possible that the browser is seeing the CA certificates from the server ("if you want, give me a client certificate signed by one of these (unknown) CAs"), perhaps deciding that it's dodgy, possibly due to some [lack of?] data in the certificate, and negatively caching that for a period. – Cameron Kerr Oct 05 '15 at 12:46
  • Please post your Nginx configuration and the exact command lines you used to generate the TLS certificates so we can try to replicate the issue. – André Borie Oct 06 '15 at 00:15

0 Answers