
I have a Varnish 4 setup with nginx SSL termination -> Varnish -> Varnish round-robin to 4 Apache backends.

We basically need to avoid caching any request where a specific cookie isn't set on the incoming request, so in my vcl_recv I have:

if (req.http.Cookie !~ "cookiename") {
    return (pass);
}

This works fine initially, but as it is a busy site, over time (10 minutes or so) our backend failures and busy sleep/wakeup counters increase, and we get 503s from Varnish itself, even though the backends are fine and don't appear to be under any real load. This makes me think the requests are being queued and sent sequentially to the backends, skipping any request coalescing.

I can't really find anything to support this. Is this the case, or is there a better way to do this? I would appreciate the feedback.

Thanks

Pablo R

1 Answer


Passed requests aren't request coalescing candidates. Request coalescing only applies to cacheable resources.

This means requests that go through vcl_miss and don't end up becoming Hit-For-Miss/Hit-For-Pass objects in vcl_backend_response.
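For context, a Hit-For-Miss/Hit-For-Pass marker is typically created in vcl_backend_response along these lines. This is just a minimal sketch: the Set-Cookie check and the 120s marker TTL are example values, not something taken from your setup.

sub vcl_backend_response {
    if (beresp.http.Set-Cookie) {
        # Store a Hit-For-Miss/Hit-For-Pass marker instead of a cacheable
        # object, so later requests for the same URL skip the waiting list
        # and are fetched from the backend individually.
        set beresp.uncacheable = true;
        set beresp.ttl = 120s;
        return (deliver);
    }
}

A return (pass) in vcl_recv, on the other hand, never reaches vcl_miss, so no waiting list and no coalescing are involved for those requests.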

Please use the following command to monitor potential HTTP 503 errors:

varnishlog -g request -q "BerespStatus == 503"

It will allow you to figure out why the error is taking place.

Thijs Feryn