5

This would be a perfect solution for me as I need to serve some generated content to web browsers. My plan is to generate the content on demand and store it for next time. I don't want the browsers to call my service (which generates the content) every time. I want them to go directly to the "cached" resource if it's available and only call the service if it's not. So I'd put Varnish in front of server A, which runs the service, and server B, which stores the previously generated content. If Varnish gets a request for a resource it hasn't cached, it should try server B first; upon getting a 404 response, it should request the same resource from server A.

Can Varnish be configured this way with VCL? If not, is there another solution you know of that works like this?

P.S. I don't want to send 302 redirects to the browser, and I don't have control over server B, so I can't make it send such redirects instead of 404s.

Charles

2 Answers

5

This is perfectly possible in Varnish. Make sure that in vcl_fetch (and possibly in vcl_error) you check the backend response status (e.g. status >= 400) and do a restart if the request failed, and that in vcl_recv you select the other backend when req.restarts > 0. For example:

backend serverA {
   # the service that generates content
   .host = "192.168.0.1";
   .port = "80";
}

backend serverB {
   # the storage holding previously generated content
   .host = "192.168.0.2";
   .port = "80";
}

sub vcl_recv {
   # first attempt goes to the storage server, the retry goes to the generator
   if (req.restarts == 0) {
       set req.backend = serverB;
   } else {
       set req.backend = serverA;
   }
}

sub vcl_fetch {
    # storage answered with an error (e.g. 404): retry against the generator
    if (beresp.status >= 400 && req.restarts == 0) {
        return(restart);
    }
}

sub vcl_error {
   # the backend fetch failed entirely: retry against the generator
   if (req.restarts == 0) {
       return(restart);
   }
}

But this being said, it sounds like you're reinventing the concept of a cache server, and Varnish is a great cache server. Why not have one back-end server (serverA) and let Varnish cache your generated entities? You can set up complex rules, and you get cache expiration, purge management and performance for free! :)
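For instance, a minimal sketch of that approach (assuming a single generator backend and Varnish 2.1/3.x VCL; the address and the one-hour TTL are placeholder values) could look like this:

backend serverA {
   .host = "192.168.0.1";
   .port = "80";
}

sub vcl_fetch {
    # cache successfully generated responses for an hour;
    # adjust the TTL (or rely on Cache-Control headers from serverA) as needed
    if (beresp.status == 200) {
        set beresp.ttl = 1h;
    }
}

Expired or purged objects are then simply regenerated by serverA on the next request, so there is no separate storage tier to manage.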

ivy
  • Thanks very much. It looks like it'll do the trick. I'll give it a try and come back with another comment. – Paweł Sokołowski Dec 03 '10 at 17:15
  • And as for your question - my sick mind invented this devious plan because the storage server (serverB) is actually Amazon's S3, a storage solution that scales transparently with no extra effort, and the amount of generated content will be significant. I can't afford to worry about whether the Varnish server has enough storage space for its cache or whether it needs another disk. Does that make sense? – Paweł Sokołowski Dec 03 '10 at 17:30
  • Your plan might be suitable for your case. However, I wouldn't recommend it. In real life, the requirements for caching start simple ("I only need to cache simple pages, they probably never expire, all users are served the same content and performance is no issue") and you're tempted to solve them with your home-brewed 'simple' solution. After a while, the requirements evolve and your solution grows more advanced. Then you discover that caching is actually a complex thing to get right and manage, and you wish you had picked a more standard setup. Please consider how this applies to you... – ivy Dec 05 '10 at 20:50
  • Thanks, I'll be careful :) I do appreciate the benefits of a web cache and I intend to make the best of its standard features, just with the addition of an extra storage solution behind it. – Paweł Sokołowski Dec 06 '10 at 09:09
0

In this example, Varnish tries up to six servers in turn; if none of them returns the resource, the response from the last one is delivered.

# cat /etc/varnish/default.vcl 
backend serverA {
   .host="10.42.4.104";
   .port = "80";
}

backend serverB {
   .host = "10.42.4.102";
   .port = "80";
}

backend serverC {
   .host = "10.42.4.103";
   .port = "80";
}

backend serverD {
   .host = "10.42.4.101";
   .port = "80";
}

backend serverE {
   .host = "10.42.4.105";
   .port = "80";
}

backend serverF {
   .host = "10.42.4.106";
   .port = "80";
}




sub vcl_recv {
   # walk the backend list: each restart moves on to the next server
   if (req.restarts == 0) {
       set req.backend = serverA;
   } elseif (req.restarts == 1){
       set req.backend = serverB;
   } elseif (req.restarts == 2){
       set req.backend = serverC;
   } elseif (req.restarts == 3){
       set req.backend = serverD;
   } elseif (req.restarts == 4){
       set req.backend = serverE;
   } else {
       set req.backend = serverF;
   }
}


sub vcl_fetch {
    # backend returned an error: restart and let vcl_recv pick the next server
    # (up to five restarts, so all six backends get a chance)
    if (beresp.status >= 400 && req.restarts < 5) {
        return(restart);
    }
}

sub vcl_error {
   # the backend fetch failed entirely: try the next server (up to five restarts)
   if (req.restarts < 5) {
       return(restart);
   }
}
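
If the backends themselves can go down, it may also be worth attaching health probes so Varnish can skip servers it already knows are sick rather than waiting for a failed fetch on every request. A minimal sketch for one backend (Varnish 2.1+/3.x probe syntax; the probe URL and timing values are only example settings):

backend serverA {
   .host = "10.42.4.104";
   .port = "80";
   .probe = {
       # Varnish polls this URL; after repeated failures the backend is marked sick
       .url = "/";
       .interval = 5s;
       .timeout = 1s;
       .window = 5;
       .threshold = 3;
   }
}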