
We use Varnish Cache as a frontend for a lot of our customers, and we serve stale content via grace while any backend is sick.

We now have a failed backend and we want to increase the grace period while it is sick. Is that a possible scenario? I tried digging in the docs and found nothing.

Varnish 4

Mohamed Magdy

1 Answer


Serving outdated content in Varnish Cache 4.x when a backend is sick is a common use case. You simply need to implement your own vcl_hit subroutine. The idea is to cache content using a high grace value (e.g. 24 hours), but limit grace to a small time window (e.g. 10 seconds) when your backend is healthy:

# Needed for std.healthy().
import std;

sub vcl_hit {
    if (obj.ttl >= 0s) {
        # Normal hit.
        return (deliver);
    }

    # We have no fresh fish. Let's look at the stale ones.
    if (std.healthy(req.backend_hint)) {
        # Backend is healthy. Limit age to 10s.
        if (obj.ttl + 10s > 0s) {
            return (deliver);
        } else {
            # No candidate for grace. Fetch a fresh object.
            return (fetch);
        }
    } else {
        # Backend is sick. Use full grace.
        if (obj.ttl + obj.grace > 0s) {
            return (deliver);
        } else {
            # No graced object.
            return (fetch);
        }
    }
}
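
For this logic to work, objects must have been stored with a high grace value in the first place. A minimal sketch of the corresponding vcl_backend_response (the 1-minute TTL and 24-hour grace are example values, not part of the original answer; adjust to your setup):

sub vcl_backend_response {
    # Example values: keep objects fresh for 1 minute, but retain them
    # for up to 24 extra hours of grace so they can be served stale
    # while the backend is sick.
    set beresp.ttl = 1m;
    set beresp.grace = 24h;
}

The vcl_hit subroutine above then decides how much of that grace window is actually used, depending on backend health.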

For further information, please check the Varnish documentation on grace mode.

Carlos Abalde
  • Thank you, but I still didn't get it. The scenario is that I have vcl_hit as you stated and grace set to 5h, and the backend is sick. I want to increase those 5h to, let's say, 10h or so while it's still sick. Is that a possible scenario? – Mohamed Magdy Jul 28 '16 at 06:50
  • As far as I know, if you stored content in Varnish Cache using a grace value of X hours, that content will be removed from storage once its TTL plus those X hours have elapsed, and there is nothing you can do in VCL to avoid that (there is ``beresp.keep``, but I don't think it is useful here). Therefore the only solution would be storing objects using a grace value higher than the longest expected backend outage. – Carlos Abalde Jul 28 '16 at 07:02
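
Following that advice, a rough sketch for the scenario in the question would be to store objects up front with a grace value covering the longest outage you want to survive (the 10-hour figure below just mirrors the question; it cannot be raised later for objects already in the cache):

sub vcl_backend_response {
    # Store objects with enough grace to cover the longest expected
    # backend outage (10 hours here, matching the question); grace
    # cannot be increased afterwards for objects that are already cached.
    set beresp.grace = 10h;
}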