
I'm trying to figure out why my Python code is throwing an SSLCertVerificationError for valid Let's Encrypt certificates on a virtual host serving multiple domains and certificates at the same IP. If I delete all certificates except one, it's fine. But with more than one certificate, requests ignores the domain to which the request was sent and pulls the most recent Let's Encrypt certificate, which is incorrect and causes the SSLCertVerificationError.

My understanding was that under SNI (Server Name Indication), a request should only pull the certificate for the domain to which the request is being made, not simply the most recent one. I have checked, and I'm running Python 3.8 and requests 2.5 against a version of Nginx compiled with SNI support. I can suppress the error by turning off SSL validation, but that seems a poor workaround.

Any idea what is going on? Why does SNI work fine when a browser requests a page from Nginx, pulling the proper certificate, but fail when the same request is made with Python's requests package? I have read everything I can find, and the docs say it should just work under current builds of nginx, requests, OpenSSL, etc., but it clearly isn't here.


To replicate: I can do `requests.get('https://kedrosky.org')` error-free from a local machine. But in scripts run at that server -- a hosted domain -- a newer certificate for the wrong domain is returned, causing an SSLCertVerificationError.

  • I can assure you that it works with a simple `requests.get(...)` against a server with nginx, a single IP, multiple domains, and multiple certificates. So the question is what you are doing differently. Unfortunately, you don't provide any way to reproduce your problem. It might be a problem in your code, in your setup, or simply that you use a similar but not exactly the same domain as configured (i.e. example.com configured, www.example.com used, or similar). So please provide sufficient details so that others can reproduce your problem. – Steffen Ullrich Nov 28 '20 at 15:22
  • @SteffenUllrich I just added how to replicate to the post. There are two valid domains at the IP, and requests pulls the certificate for the newest one, not the one associated with the request. – Tim Benzedrine Nov 28 '20 at 15:36
  • @SteffenUllrich Note that requests works fine from a local machine, but fails on scripts run at that hosted server. – Tim Benzedrine Nov 28 '20 at 16:03
  • Can you check that the domain resolves to the same IP address on both systems? – Steffen Ullrich Nov 28 '20 at 17:18
  • @SteffenUllrich Yes, both domains resolve to the same (correct) IP address on both machines, local and remote, but requests pulls the wrong certificate when run at the hosted server. – Tim Benzedrine Nov 28 '20 at 17:21

1 Answer


The problem is likely that the server configuration is only properly done for IPv4, even though the domain also resolves to an IPv6 address. With IPv4 it returns the correct certificate:

 $ openssl s_client -connect kedrosky.org:443 -4
 ...
 subject=CN = kedrosky.com

But with IPv6 it returns a different certificate (this needs IPv6 connectivity to the internet on your local machine):

 $ openssl s_client -connect kedrosky.org:443 -6
 ...
 subject=CN = paulandhoward.com

Likely this is because there is only a `listen 443` directive but no `listen [::]:443`, the latter being needed for IPv6. In this case virtual hosts only work properly over IPv4; over IPv6 the server just returns the default, i.e. usually the first certificate configured.
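A minimal sketch of the fix, assuming a standard nginx server block (the certificate paths are illustrative, not taken from the actual server):

```
server {
    listen 443 ssl;
    listen [::]:443 ssl;   # without this line, IPv6 clients fall through to the default vhost
    server_name kedrosky.org;

    ssl_certificate     /etc/letsencrypt/live/kedrosky.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/kedrosky.org/privkey.pem;
}
```

Each virtual host that should be reachable over IPv6 needs its own `listen [::]:443 ssl;` line; otherwise SNI-based selection only happens for IPv4 connections.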

And the reason that you are seeing different results from different hosts is that one has only IPv4 connectivity while the other can do IPv6 too.
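To check which address families a given client machine will actually resolve for the host, here is a quick sketch using Python's standard socket module (the helper name `resolved_families` is mine, not part of any library):

```python
import socket

def resolved_families(host, port=443):
    """Return the set of address families (AF_INET / AF_INET6) that `host` resolves to."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return {info[0] for info in infos}

# e.g. resolved_families("kedrosky.org") -- if socket.AF_INET6 is in the result
# and the machine has IPv6 connectivity, connections may hit the misconfigured
# IPv6 listener and get the wrong certificate.
```

Running this on both the local machine and the server would show whether one of them sees (and prefers) an AAAA record.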

Steffen Ullrich
  • Thanks so much for that. That is apparently exactly what was going on. I now have listen directives in the host's config file to handle both IPv4 and IPv6, and it's working. – Tim Benzedrine Nov 28 '20 at 17:49