
I'm using mod_proxy as a failover proxy with two balancer members.

Although mod_proxy marks failed nodes as dead, it still routes one request per minute to each dead node and, if the node is still dead, either returns a 503 to the client (when maxattempts=0) or retries the request on another node (when it's greater than 0).

The backends serve a REST web service. I currently have maxattempts=0 because I don't want POSTs and DELETEs to be retried. This means that while one node is dead, a random client receives a 503 every minute. Unfortunately, most of our clients interpret a 503 as "everything is dead" rather than "that didn't work, but please try again".
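For reference, the relevant configuration currently looks roughly like this (hostnames, ports, paths, and the balancer name are placeholders rather than my real values):

    <Proxy balancer://mycluster>
        # retry=60: a dead member is re-probed after 60 seconds, which
        # is where the once-a-minute request to each dead node comes from
        BalancerMember http://backend1.example.com:8080 retry=60
        BalancerMember http://backend2.example.com:8080 retry=60
    </Proxy>

    # maxattempts=0: never fail over to the other member; a failed
    # request comes back to the client as a 503
    ProxyPass /service balancer://mycluster/service maxattempts=0
    ProxyPassReverse /service balancer://mycluster/service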

In order to implement some kind of automatic retry for safe requests at the proxy layer, I'd like to configure mod_proxy to use maxattempts=1 for GET and HEAD requests and maxattempts=0 for all other HTTP methods.

Is this possible? (And how? :)

Graham Lea
  • A dead node is marked as dead for all requests, so this wouldn't really work how you're expecting... what're you trying to accomplish? – Shane Madden Sep 02 '11 at 01:47
  • I expect it's probably an attempt to work around problems with overloaded but not dead backends. – womble Sep 02 '11 at 01:54
  • Updated the description. Dead nodes still get attempted requests every minute. I want to retry those requests if they are GET or HEAD. – Graham Lea Sep 05 '11 at 00:14
  • I'd be fixing the problem of dead nodes getting sent requests. That's insane. – womble Sep 05 '11 at 00:42
  • From what I know, sending requests to dead nodes is how mod_proxy detects whether they've recovered. – Graham Lea Sep 05 '11 at 01:29

2 Answers


I think you may be out of luck -- as far as I can tell, the "obvious" way of doing this (a <Limit> block) only applies to access-control directives, so I suspect it isn't doable.
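To illustrate (the path and balancer name are invented), this is the sort of thing you might be tempted to write, and it's exactly what doesn't work:

    <Location /service>
        # Intent: retry safe methods once
        <Limit GET HEAD>
            ProxyPass balancer://mycluster/service maxattempts=1
        </Limit>
        # Intent: fail all other methods immediately
        <LimitExcept GET HEAD>
            ProxyPass balancer://mycluster/service maxattempts=0
        </LimitExcept>
        # <Limit>/<LimitExcept> only scope access-control directives
        # such as Require and Allow/Deny; httpd won't apply proxy
        # directives per-method like this.
    </Location>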

In general, though, I don't think this will achieve what you're hoping for. In my experience you usually want the opposite of what you've described: retrying non-idempotent requests against multiple backends is a bad idea (the backend may have performed the operation but failed to report success), so you're far better off failing everything fast and having the browser handle the retry if required.

womble
  • Sorry, there was some detail missing in the description. The clients are applications calling a REST service, not browsers. Also, it's only the safe methods that I want to retry, not the unsafe methods. – Graham Lea Sep 05 '11 at 00:17

If the problem is caused by the client applications misinterpreting the server return codes, then you should fix the clients.

You will also find that failing some requests while retrying others will confuse your clients and force them into overly complicated failure handling, making them very difficult to write.
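For example, a client could treat a 503 on a safe method as "try again" rather than "everything is dead". A rough sketch in Python (the URL, retry count, and back-off are made up for illustration):

    import time
    import requests

    SAFE_METHODS = {"GET", "HEAD"}  # safe to retry; POST and DELETE are not

    def call_service(method, url, retries=2, backoff=1.0, **kwargs):
        """Issue a request, retrying 503s only for safe methods."""
        for attempt in range(retries + 1):
            response = requests.request(method, url, **kwargs)
            # A 503 from the proxy may just mean one backend was down;
            # retry safe methods after a short pause, fail everything else.
            if (response.status_code == 503
                    and method in SAFE_METHODS
                    and attempt < retries):
                time.sleep(backoff * (attempt + 1))
                continue
            return response

    resp = call_service("GET", "http://proxy.example.com/service/items")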

David Newcomb