12

It seems like a minimal amount of work, and it would make the server-side implementation of reliable websites much simpler. Also, SRV records have been around for years...

Is there something I'm missing here?

Edit: @DJ Pon3 - what I'm talking about is:

  1. one site served from two datacentres without needing BGP, but still working if either datacentre goes offline. (Can also be achieved by short DNS TTLs.)

  2. multiple HTTPS servers on different ports on one IP address. (A client-side sketch of both points follows this list.)
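As an illustration of both points, here's a minimal sketch (in Go, whose standard library can already query SRV records) of the lookup a hypothetical SRV-aware browser could perform. The `_http._tcp.example.com` record and its targets are assumptions for the example; no mainstream browser does this today:

```go
// Minimal sketch: resolve a hypothetical _http._tcp SRV record and fail
// over between targets. Go's resolver returns the records sorted by
// priority and shuffled by weight, so trying them in order gives both
// datacentre failover and per-target port selection.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Looks up _http._tcp.example.com (a hypothetical record).
	_, addrs, err := net.LookupSRV("http", "tcp", "example.com")
	if err != nil {
		fmt.Println("no SRV record; fall back to a plain A/AAAA lookup:", err)
		return
	}
	for _, srv := range addrs {
		hostport := fmt.Sprintf("%s:%d", srv.Target, srv.Port)
		conn, err := net.DialTimeout("tcp", hostport, 3*time.Second)
		if err != nil {
			continue // this datacentre is offline - try the next record
		}
		conn.Close()
		fmt.Println("would fetch the site from", hostport)
		return
	}
	fmt.Println("all SRV targets unreachable")
}
```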

fadedbee
  • I'm not clear as to what problem, precisely, you think this would solve. It's been perfectly possible to create reliable web services without SRV records so far. – Rob Moir Jan 30 '12 at 12:48
  • 1
    I think (and maybe only because I'm a simpleton) that it would solve the issue of running a web site on an alternate port without the user needing to know which port the site is running on or having to type the port number into the URL. – joeqwerty Jan 30 '12 at 12:57
  • [Shameful isn't it](http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/dns-srv-record-use-by-clients.html#Shame)? – JdeBP Jan 31 '12 at 12:19
  • 2
    exact duplicate of http://stackoverflow.com/questions/9063378/why-do-browsers-not-use-srv-records – Alnitak Feb 01 '12 at 12:01
  • @chrisdew why have you asked the exact same question on both sites? – Alnitak Feb 01 '12 at 12:06
  • @Alnitak - apologies, I didn't know which site was appropriate. – fadedbee Apr 20 '12 at 10:08

2 Answers

6

SRV records offer three things:

  1. Multiple hostnames - can already be done without SRV (e.g. with multiple A records)
  2. Alternate ports - bad idea - see below
  3. A fix for the CNAME at zone apex problem

Re: alternate ports - SRV records could be used as a way of running web servers on alternate ports without having to advertise that fact in the URL. This is a bad thing. Corporate firewall policies very commonly prohibit access to "unusual" ports, and encouraging the idea of using alternate ports would be poor for site accessibility.

The only tangible benefit I see is for #3 - it would allow example.com to be pointed at webhost.example.net without requiring a CNAME (which isn't permitted at a zone apex) or an A record (which is bad for zone maintenance).
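To make #3 concrete, here's a hedged sketch of that delegation. The zone snippet and hostnames are hypothetical, and the Go program below only shows the two-step resolution an SRV-aware client would perform, with the apex zone never holding webhost's addresses:

```go
// Hypothetical zone for point #3. A CNAME can't live at the apex
// (example.com) because the apex must also hold SOA and NS records,
// but an SRV record could sit alongside them:
//
//   ; not permitted at the apex:
//   example.com.             IN CNAME  webhost.example.net.
//   ; fine:
//   _http._tcp.example.com.  IN SRV    10 0 80 webhost.example.net.
package main

import (
	"fmt"
	"net"
)

func main() {
	// Step 1: the SRV lookup yields the real hostname and port...
	_, addrs, err := net.LookupSRV("http", "tcp", "example.com")
	if err != nil || len(addrs) == 0 {
		fmt.Println("no SRV record:", err)
		return
	}
	target := addrs[0]
	// Step 2: ...and only that target needs A/AAAA records, so the
	// example.com zone never has to track webhost's addresses itself.
	ips, err := net.LookupHost(target.Target)
	if err != nil {
		fmt.Println("target lookup failed:", err)
		return
	}
	fmt.Printf("example.com -> %s:%d -> %v\n", target.Target, target.Port, ips)
}
```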

Alnitak
  • 2
    -1 for missing the whole point, despite many people having made it over the years when asking for this, and despite the questioner even alluding to it: the explicit load-balancing and fallback information for clients. – JdeBP Feb 07 '12 at 13:38
  • 3
    @JdeBP IMNSHO load balancing and fallback data does not belong in the DNS - that's well into the realms of "Stupid DNS Tricks (TM)". They both belong in the IP routing layer - that's the only place you can provide seamless failover between services. – Alnitak Feb 07 '12 at 13:49
  • 3
    Actually, alternate ports are a good idea, because protocols should not be bound to ports. Imagine a world where the post office always had to be on the second floor of the building - wouldn't that be pointless? That's what we have address books (DNS) for! What's really a bad idea is defining outgoing firewall rules based on a port. It's pointless, because attackers could always use the non-blocked ports. Additionally, imagine a world where going to the second floor of any building was forbidden, just because it could be a post office. Funny, isn't it? ;) – FlashFan Oct 27 '14 at 10:25
  • @FlashFan unfortunately, corporates persist in wanting to block internet _egress_ by assuming that all web sites are on port 80 or 443. – Alnitak Oct 27 '14 at 18:36
  • 4
    Yes, I know. That's why enabling SRV records would be good: it would force the corporates to stop those pointless, bad practices. No matter how many outgoing ports you block, as long as there is one port open you can do everything you want, because you can do everything through every port. The fact that you cannot even know whether what goes through the TLS connection on port 443 really is HTTP only underlines this. – FlashFan Oct 28 '14 at 08:11
  • @FlashFan blocking egress on non-standard ports is often thought *good security practice*; you can somewhat constrain access to many non-standard services, particularly if a machine is compromised. It is also often a requirement for compliance (such as PCI-DSS regulations). While malware may itself communicate with C&C on standard web ports, many enterprises take this a step further: internal machines cannot communicate with anything outbound, even ports 80/443, leaving instead such tasks up to a proxy, which may be responsible for external DNS resolution also, and aiding compromise detection. – Cosmic Ossifrage Mar 07 '15 at 00:37
  • @FlashFan in case you didn't discover this until now: it is certainly **not** possible to use any protocol on any port - a lot of routers, switches, backbones and the like support protocol filtering - you won't be able to establish non-HTTP/HTTPS connections via ports 80 and 443 if such a filter is enabled, for instance. In fact, it's rather trivial to enable most of them - and it has been done all over the world in all sorts of networks, especially corporate and government ones. This can even be done on layer 2 (but it is usually done on layer 6 or 7). – specializt May 17 '22 at 13:54
  • 1
    @specializt No, you can't filter out non-HTTP protocols on a TLS connection. That would require you to be able to decrypt the connection, and that is only possible if you force all network users to trust your MITM certificate. – FlashFan Jun 17 '22 at 14:57
  • -1. There is a pronounced difference between something being a "bad idea" in general, and being considered an undesirable idea by particular parties. What if I want to run a web server accessible at my public IPv4 address, but my ISP has assigned me only a small range of ports thanks to MAP-T? Currently, my only solution is to buy an IPv4 address. *That's* a bad idea. – Jivan Pal Jul 25 '23 at 21:30
  • @JivanPal simple - publish a URL with a :port specifier. It's not DNS's problem to fix that for you. – Alnitak Jul 27 '23 at 20:40
  • @Alnitak I would argue that it's the HTTP spec's responsibility to allow me to map `http://example.com` to a specific IP address *and* port number. DNS is merely the means to that end. If Minecraft and Matrix have the sense to do that, then why not HTTP? – Jivan Pal Jul 29 '23 at 16:59
  • @JivanPal it's the responsibility of the *URL* specification for HTTP to do that, which it does, via a `:port` field. Minecraft doesn't use HTTP URLs. – Alnitak Jul 29 '23 at 19:38
  • @Alnitak Yes, and I consider that a downside of HTTP; support for SRV records ought to be added to the spec. You claim that's a bad idea, which I think is a silly claim. – Jivan Pal Jul 30 '23 at 00:42
  • @JivanPal it'll never happen; it was proposed, but the browser folks at the IETF didn't want the overhead of the extra DNS lookup. We now have the HTTP(S) DNS resource record type instead. NB: it was only alternate ports that I claimed to be a bad idea; I was in favour of SRV record support in HTTP. – Alnitak Jul 30 '23 at 14:14
  • and note this from an Internet Draft that I wrote: _the presence of the Port field in an SRV record is incompatible with the "Same Origin" security policy enforced by web browsers_. – Alnitak Jul 30 '23 at 14:18
  • @Alnitak "We now have the HTTP(S) DNS resource record type instead." Ooh, this is news to me, interesting stuff! – Jivan Pal Jul 30 '23 at 23:50
3

Why do browsers not use SRV records?

Because SRV records did not exist when HTTP was conceived, and because HTTP is not assumed to be a service.

SRV records have been around for years...

Hahaha. Do you remember the time when HTTP started? When the first browsers were written? THAT was a long time ago.

SRV records first appeared in RFC 2782. HTTP 1.0 goes back to RFC 1945. Guess which came first.

TomTom
  • HTTP 1.1 was RFC 2616, so it also missed them. Is it HTTP 1.2 with SRV support that we need? – fadedbee Jan 30 '12 at 12:26
  • No, because guess what - it is not needed ;) – TomTom Jan 30 '12 at 12:27
  • 14
    -1 for the rather silly argument that relative ages constrain interoperability. The world really _does_ have the capability of making two separate inventions work together once they exist, and has done just that many times throughout history. It has even done it twice over for `SRV` resource records and HTTP. – JdeBP Jan 31 '12 at 12:03
  • @JdeBP you well know that if it were that easy it would have been done by now. The problem is transitioning sites to an HTTP+SRV mechanism without providing an inferior experience to the countless millions of users that would be stuck on old browsers. – Alnitak Feb 01 '12 at 14:04
  • You either don't or cannot read, Alnitak. In what you are replying to I explicitly pointed out that it **has** been done, twice over, long since. – JdeBP Feb 07 '12 at 13:29
  • 6
    You mean a couple of times someone wrote an internet draft? That's hardly the same - it's _trivial_ to write one, and then the real world hits you and you find that actually there are shedloads of edge cases and other issues that mean it won't work in the real world, and eventually the draft expires and is mostly forgotten. Hell, I've had that happen to quite a few of mine already. – Alnitak Feb 07 '12 at 13:51
  • Woosh.. Did you hear that?! – stolsvik Mar 31 '15 at 22:26
  • Old answer but... large changes like this have been achieved... examples include adding SSL support (HTTPS), then adding SNI to SSL. It's not about edge cases. I'd be impressed if you could name one "edge case" that breaks the idea of SRV with HTTP. The really important point missed by this answer is that, yes, we could change it, but changing thousands of different client and server implementations to support it has a cost. For this to succeed, there needs to be real **value** in implementing it. Unlike implementing SSL, there's no big cost benefit to it, so nobody bothers. – Philip Couling Dec 29 '20 at 14:46