
I am using a signed SSL certificate generated by our internal CA. I have added subject alternative names (SANs) so that both myserver.example.net and myserver are valid for the site. This works correctly in both Firefox and IE, but in Chrome users still get a [1]warning message ("The identity of this site has not been verified.") when they use the short name myserver. The CA is installed, and Chrome finds it just fine when using the FQDN; it is only when using the bare hostname ("uname -n"), which is part of the SAN, that the certificate becomes unverified. As indicated, the error produced is very generic.

According to what I've read, if there is a SAN the common name should be ignored, and this seems to be the case. FQDNs listed in the SAN work; it is only the node names in the SAN that cause this problem. The CA in question belongs to a large corporation (multiple class A networks) with thousands of clients and hundreds of servers. The prevailing browser here is IE, and my point is this: if we don't see an issue with the way we do things in a deployment this large, then Chrome is not [2]behaving like IE, and that alone is cause for concern.
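For reference, one way to confirm exactly which DNS names are present in the certificate's SAN extension is a quick check like the following (a minimal sketch using the third-party Python "cryptography" package; server.pem is a hypothetical file name for the exported certificate):

```python
# Minimal sketch: print the DNS names in a certificate's SAN extension.
# Assumes the cert was saved to the hypothetical file server.pem and that
# the third-party "cryptography" package is installed.
from cryptography import x509

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
for name in san.value.get_values_for_type(x509.DNSName):
    print(name)  # both myserver.example.net and myserver should appear here
```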

My question is: is there any way for users to use myserver without getting the SSL warning in Chrome?

The error screenshot:
1. http://imageshack.us/a/img259/9624/certerror.png

The ignored initial report to Google (no help from upstream):
2. http://productforums.google.com/d/msg/chrome/FWAtO5uikuE/0zVo9FU9pakJ

Mike Mestnik
  • [This question](http://serverfault.com/questions/449453/remove-ssl-warning-with-internal-websites) is similar, but this user may have had issues that I'm not having. – Mike Mestnik May 29 '13 at 14:42
  • This is exactly like [this edit](http://serverfault.com/review/suggested-edits/75111). The edit was rejected, leaving no other choice; I need more rep points to make comments. – Mike Mestnik May 29 '13 at 14:46

1 Answer

Actually, Chrome is doing something right here. All SANs in certificates should be forward and reverse resolvable via public DNS. Internal names, as well as private IP addresses (say, RFC 1918), are a bad idea in certificates. The point of a certificate is to prove an entity's identity unambiguously. Since there is more than one host known as 192.168.0.1, and more than one host named "mail" or "unix", certificates for these hosts are only valid for parts of the net.
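To illustrate the forward/reverse requirement, a consistency check for a SAN entry might look like this (a sketch; the hostname is illustrative):

```python
# Sketch: verify a name resolves forward, and that the resulting address
# resolves back to the same name. "myserver.example.net" is illustrative.
import socket

name = "myserver.example.net"
addr = socket.gethostbyname(name)         # forward lookup: name -> address
rname, _, _ = socket.gethostbyaddr(addr)  # reverse lookup: address -> name
print(f"{name} -> {addr} -> {rname}; consistent: {rname == name}")
```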

The CA/Browser Forum deprecated the use of internal names in certificates some time ago. They published a paper specifically addressing this issue. It makes sense to enforce this on the client side as well; Google just seems to be the first to follow the new guidance.

  • I've heard about this issue with Kerberos. For me that meant that every interface (unique address) had to have a unique reverse DNS entry and a matching forward record. – Mike Mestnik May 30 '13 at 12:43
  • For this deployment the host has two interfaces, one public-facing on the 56/8 network and the other for our backup network. When I say nodename, I'm talking about using the client's DNS search setting to append the domain name to a nodename as part of the resolving process. The effect is that a DNS lookup on something like www will return an address even though there is no DNS record for that exact name. A reverse lookup on the returned result would indicate the FQDN, and that FQDN would resolve back to the previous address. – Mike Mestnik May 30 '13 at 12:51
  • Even in situations where private or even link-local addresses are used, DNS can be configured with views so that each instance of a private network can correctly resolve all entities in that network. By necessity, each DNS server can only be configured to route to a single instance of any address range. Views in this case are used so that lookups from the private range are directed to the private addresses, while the same DNS server will correctly direct external or public clients to the public-facing addresses. One can use multiple DNS servers to achieve the same effect. – Mike Mestnik May 30 '13 at 12:55
  • It would always be possible to configure DNS correctly and appropriately for a working network environment. I'm unaware of any setup that would make the issue you describe unsolvable, as you indicate. – Mike Mestnik May 30 '13 at 12:58
  • Obviously you are mistaken if you believe that the rules used to govern public CAs should be adopted by private/internal CAs. Who would sign certs for hosts that are internal and have no external rights? Furthermore, browsers and clients should NOT be attempting to enforce CA policies; for example, what a mess it would be if browsers started verifying the identities belonging to the owners of certificates. Verifying a CSR's correctness, and therefore its eligibility to be signed by a CA, is outside a client's duties. The client only needs to be concerned with the cert's correctness. – Mike Mestnik May 30 '13 at 13:23
  • IMHO NAT is the one thing that single-handedly broke the internet. Hopefully v6 will save the end-to-end principle and we can get rid of private addresses as well as split DNS. – Thorsten Tüllmann May 30 '13 at 15:38
  • Talking about CA rules, I do believe that one should only sign what one owns. Since nobody owns private IP space, no client should trust certificates including these addresses. The same is true for non-FQDNs. Of course it is nice for a user to just type the short name into a browser, but it is always convenience versus security, isn't it? We could argue for hours about why and where SSL is utterly broken and always has been, but we don't have anything better today. That's the reason more and more clients do work that should be done by a CA. – Thorsten Tüllmann May 30 '13 at 15:43
  • Google has been using SSL pinning for their own websites in Chrome for quite some time, and that's how we found out about more than one CA giving out certificates for obvious MITM purposes. If a client only verified a certificate's correctness, we should by all means deliver every client software with an empty trust store. It was hard enough to get people to watch for a padlock when using online banking. We won't get the masses to verify fingerprints over a secure second channel... – Thorsten Tüllmann May 30 '13 at 15:47
  • Note to Thorsten: v6 has both private and link-local addresses. It's known to be NAT-resistant, but there are v6-to-v4 NAT implementations where the existing v4 address space is accessible as a /96 netblock. – Mike Mestnik May 30 '13 at 16:44
  • Clients should utterly trust any CA they are instructed to trust. If a CA creates a certificate, that certificate should also be trusted. If clients apply their own ad-hoc rule set above and beyond validating that the cert was signed by a trusted CA, then why have a CA trust store to begin with? Should the client then take on all the responsibility of a CA, why do an ad-hoc job of it? – Mike Mestnik May 30 '13 at 16:49
  • I'm utterly unconvinced that this is what the developers of Chrome were deliberately going for. This is most likely an accidental feature, based on the error message and simple deduction. If Chrome had intended this to be a feature, the error message would be more specific. – Mike Mestnik May 30 '13 at 16:52
  • @MikeMestnik The client has to additionally verify that the certificate is appropriate for the URL specified. Just knowing that the cert was issued by the CA is insufficient, as an attacker can easily get a cert (and matching key) signed by a trustworthy CA. This question is dealing with those appropriateness checks, not certificate validity checks. – David Schwartz Feb 16 '15 at 11:15
  • @DavidSchwartz I double- and triple-checked forward/reverse resolution; as indicated, this is also necessary for Kerberos. Plus making sure resolution works correctly for all names (when using addresses this is irrelevant); that would simply be the answer to the original question (a handshake check along these lines is sketched below). – Mike Mestnik Mar 26 '15 at 00:59
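Picking up the hostname-matching point from the comments above, one can reproduce a client-side check against the short name outside the browser (a sketch; ca.pem is a hypothetical path to the internal CA bundle, and Python's checks may be looser than Chrome's):

```python
# Sketch: attempt a TLS handshake against the short name "myserver" and let
# Python's ssl module perform a hostname-vs-SAN appropriateness check, as any
# client would. ca.pem is a hypothetical path to the internal CA certificate;
# note that other clients (e.g. Chrome) may apply stricter rules to non-FQDNs.
import socket
import ssl

ctx = ssl.create_default_context(cafile="ca.pem")
with socket.create_connection(("myserver", 443)) as sock:
    # Raises ssl.SSLCertVerificationError if "myserver" matches no SAN entry.
    with ctx.wrap_socket(sock, server_hostname="myserver") as tls:
        print(tls.getpeercert()["subjectAltName"])
```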