
DNS has been notorious for caching, not only at the browser level but at the ISP level. I've heard that ISP resolvers used to cache records for hours or even days, ignoring the specified TTL. However, my recent direct observation is that most ISPs seem to do a good job of respecting TTLs even when they are low (like 5 minutes).

Now we are considering using DNS changes to put the site into downtime when we perform maintenance. Is adherence to DNS TTLs reliable enough today to implement our downtime this way?

Aaron R.

1 Answer


The most recent discussion of this on the IETF mailing list(*) showed that:

  • the TTL is an indication of the maximum amount of time to keep the record in cache
  • which means that resolvers are free to use a lower value and expire the record sooner
  • but they should not use a higher value... however, many resolvers combat "too small" TTLs (some people set TTL values of only a few seconds...) by clamping them to some minimum, so as not to put too much stress on their resolving infrastructure (see the sketch after this list).
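
If you want to see how a particular resolver actually behaves, here is a minimal sketch, assuming Python 3 with the dnspython 2.x package; the record name, the authoritative server address and the public resolver IPs are all placeholders, not anything from the question. It compares the TTL your authoritative server publishes with the TTL a couple of public resolvers hand back; since a cached answer counts down toward zero, a value above the published TTL hints that the resolver clamps small TTLs upward.

    import dns.resolver

    NAME = "www.foobar.example"          # hypothetical record you plan to switch
    AUTHORITATIVE = "198.51.100.53"      # placeholder: one of your zone's authoritative servers
    RECURSIVES = ["8.8.8.8", "1.1.1.1"]  # public resolvers to spot-check

    def ttl_from(server):
        """Return the TTL of NAME's A record as reported by the given server."""
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        answer = resolver.resolve(NAME, "A")
        return answer.rrset.ttl

    published = ttl_from(AUTHORITATIVE)
    print(f"authoritative TTL: {published}s")

    for server in RECURSIVES:
        cached = ttl_from(server)
        # A cached answer counts down toward zero; a value *above* the published
        # TTL suggests the resolver is clamping small TTLs upward.
        verdict = "possible clamping" if cached > published else "ok"
        print(f"{server}: {cached}s ({verdict})")

A single probe only shows the state of one cache at one moment, so treat the result as a hint rather than a guarantee.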

In short, it may be a problem for your maintenance page. It depends on exactly how far in advance you need to publish the change and on the reaction time you want. For example, even if you lower the TTL to 300 seconds, a resolver that clamps TTLs to, say, 15 minutes will keep handing out the old address for up to 15 minutes after your change.
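
As a rough way to measure that reaction time, here is a second sketch under the same assumptions (Python 3 with dnspython, placeholder name and addresses): after you switch the record to the maintenance address, it polls a few public resolvers and reports how long it takes until they all return the new address.

    import time
    import dns.resolver

    NAME = "www.foobar.example"      # hypothetical record being switched
    MAINTENANCE_IP = "203.0.113.10"  # placeholder address of the maintenance page
    RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]

    def sees_new_address(server):
        """True if this resolver already returns the maintenance address for NAME."""
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        answer = resolver.resolve(NAME, "A")
        return MAINTENANCE_IP in {rr.address for rr in answer}

    start = time.time()
    pending = set(RESOLVERS)
    while pending:
        pending = {s for s in pending if not sees_new_address(s)}
        if pending:
            time.sleep(30)  # re-check lagging resolvers every 30 seconds

    print(f"all test resolvers switched after {time.time() - start:.0f}s")

This only covers the resolvers you test; ISP resolvers you cannot probe may still lag behind.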

What many entities do is to have a completely separate domain just for that, on which they publish their current status and any downtime information. For example, if your company is "Foobar Inc." with its main website at foobar.example, you could buy foobar-status.example (or even a name in a completely different TLD), which would be hosted on totally separate infrastructure and used only to publish maintenance/downtime related information. Doing things that way, you take the DNS TTL problem out of the equation.

(*) see the thread starting at https://mailarchive.ietf.org/arch/msg/dnsop/iF9eAt3L5s0BljFCscInOdbwdq4/?qid=ba40c0f912c85091192795a755137e9c

Patrick Mevzek