
I wanted to ask about Nginx retrying of requests. I have Nginx sending requests to HAProxy, which then passes them on to the web server where the request is processed. I am reloading my HAProxy config dynamically to provide elasticity. The problem is that requests are dropped while HAProxy reloads, so I wanted a solution where I can just retry them from Nginx. I looked through proxy_connect_timeout and proxy_next_upstream in the http module, and max_fails and fail_timeout in the server module. I initially had only one server in the upstream block, so I put it in twice, and now fewer requests are getting dropped (only when I have the same server twice in the upstream; if I have the same server 3-4 times, the drops increase).

So, firstly, I wanted to know: when a request cannot establish a connection from Nginx to HAProxy (because HAProxy is reloading), it seems the connection attempt is seen as an error and the request is dropped straight away.

So how can I specify either how long after a failure Nginx should wait before retrying the request against the upstream, or how long Nginx should wait before it treats the request as failed?

(I have tried increasing proxy_connect_timeout - that didn't help - as well as max_fails, fail_timeout, and putting the same upstream server in twice; the last gave the best results so far.)
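To make it concrete, here is roughly the shape of the knobs I have been playing with; the values are just examples I have tried, not a working fix:

```nginx
upstream gae_sleep {
    # max_fails / fail_timeout control when a server is marked unavailable
    server 128.111.55.219:10000 max_fails=3 fail_timeout=5s;
}

server {
    location / {
        proxy_pass http://gae_sleep;
        # which failure types cause Nginx to try the next upstream server
        proxy_next_upstream error timeout;
        # how long a connect attempt may take before it counts as failed
        proxy_connect_timeout 5;
    }
}
```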

Nginx Conf File

upstream gae_sleep {
server 128.111.55.219:10000;
}

server {

listen 8080;
server_name 128.111.55.219;
root /var/apps/sleep/app;
# Uncomment these lines to enable logging, and comment out the following two
#access_log  /var/log/nginx/sleep.access.log upstream;
error_log  /var/log/nginx/sleep.error.log;
access_log off;
#error_log /dev/null crit;

rewrite_log off;
error_page 404 = /404.html;
set $cache_dir /var/apps/sleep/cache;



location / {
  proxy_set_header  X-Real-IP  $remote_addr;
  proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header Host $http_host;
  proxy_redirect off;
  proxy_pass http://gae_sleep;
  client_max_body_size 2G;
  proxy_connect_timeout 30;
  client_body_timeout 30;
  proxy_read_timeout 30;
}

location /404.html {
  root /var/apps/sleep;
}

location /reserved-channel-appscale-path {
  proxy_buffering off;
  tcp_nodelay on;
  keepalive_timeout 55;
  proxy_pass http://128.111.55.219:5280/http-bind;
}

}

HopelessN00b
vaibhav
  • There is a working answer at https://serverfault.com/a/871806/187237 using the Lua module – remram Jul 09 '21 at 15:48
  • Does this answer your question? [nginx proxy retry while backend is restarting](https://serverfault.com/questions/259665/nginx-proxy-retry-while-backend-is-restarting) – remram Jul 09 '21 at 15:50

2 Answers


After trying to find a way to retry requests in Nginx, I haven't found a clean solution, but I have come up with a somewhat hacky one. Within the upstream section of the Nginx conf, you should put multiple copies of the same upstream server, because retrying in Nginx happens at the upstream-server level: if one upstream server fails, Nginx tries the request on another upstream server. If you have only one upstream server, as I did, it won't retry the request at all. To overcome that, I put in multiple copies of the same upstream server, so that by the time Nginx has gone through the list of servers and resent the request, the upstream server (HAProxy in this case) has reloaded and the request goes through.

It is also essential to go through the various timeouts that Nginx provides in the http and server modules. fail_timeout says that if an upstream server is unavailable, it is decommissioned for x seconds; but if all of them are unavailable, none is decommissioned. (I mention this because by the time Nginx goes through the entire list, HAProxy may not have come back up yet, but thanks to this behavior that isn't a problem.)

PS: this is a hacky solution, and I had to put some 100-150 upstream entries in my Nginx file to reduce errors to an insignificant number. Better solutions are welcome :)
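A minimal sketch of the hack (only three copies shown here; in practice I needed 100-150, and the fail_timeout value is just an example):

```nginx
upstream gae_sleep {
    # the same backend repeated: each copy gives Nginx one more retry attempt
    server 128.111.55.219:10000 fail_timeout=2s;
    server 128.111.55.219:10000 fail_timeout=2s;
    server 128.111.55.219:10000 fail_timeout=2s;
    # ...repeat until walking the list takes longer than an HAProxy reload
}
```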

vaibhav
  • Actually I think this isn't even an nginx specific problem as the root of the issue here is that there is a window (small, but real) between when the old process closes the ports and the new process opens them where requests can get dropped on the floor. I wonder what the pure HAProxy answer to this is. – WaldenL Apr 04 '12 at 14:04

Not exactly what you're asking, but maybe more to the point: how are you restarting HAProxy? With the -sf option you should be able to restart HAProxy without dropping any connection requests. I don't believe Nginx can retry requests, but if you really wanted to, you could put an HAProxy front end in front of another HAProxy instance; then HAProxy would retry requests against the second instance. But that seems really silly. Check out the -sf option first.

Restart HAProxy script (from the comments). The greps are just for my warm-fuzzy feelings:

#!/bin/sh

ps -ef | grep haproxy  # show the currently running process
# -sf tells the old process (by PID) to finish existing connections, then exit
haproxy -f haproxy.cfg -sf $(cat /var/run/haproxy.pid)
ps -ef | grep haproxy  # confirm the new process has started
WaldenL
  • http://bit.ly/x8pkeT - This was my previous post. I am using the -sf option for HAProxy, but requests are still getting lost. I first tried to find solutions for that; when that didn't work, I turned to the Nginx layer to take action for lost requests. I mean, Nginx has the option for retries, where it sends the request to the next upstream server. If I could just get something that controls how long to wait before the request is sent to the next upstream server, I may succeed. – vaibhav Mar 12 '12 at 05:33
  • I guess there's a very small timing hole where the old instance of HAProxy has stopped listening and the new instance hasn't started yet. If your requests hit at that point, I guess they'd fail. The timeout from Nginx seems to be a combination of proxy_connect_timeout and proxy_read_timeout, but the only one that should matter in your case is proxy_connect_timeout, as the read timeout only matters once connected, and HAProxy had better not be dropping connected connections. BTW, how do you know you're dropping connections? That's a very small timing window to worry about. – WaldenL Mar 12 '12 at 19:47
  • So what's happening is I send requests from ab to Nginx and monitor the HAProxy queues; the requests themselves don't do anything and just sleep, after which a response is provided. At the time of reload, all the requests in the queue (the existing connections at that moment) get dropped, and I see failed requests in the ab results as a consequence. So it's not only the connections in the timing window; it's the connections in the queue I am worried about (is anything specific required for this to work in HAProxy?). – vaibhav Mar 13 '12 at 00:02
  • OK, you had me worried that HAProxy wasn't doing what it's supposed to, so I just did a trial. Page sleeps for 10 seconds and then returns. I restarted HAProxy in the middle of the request and the request finished w/out a hiccup. HAProxy did _not_ drop the connections. So, I think there's a configuration problem either in your HAProxy setup, or your restart script. My restart script is on the next answer (code blocks don't work in comments) – WaldenL Mar 13 '12 at 22:18
  • So Walden, maybe I am a little late, but I was trying some things out and had the end of my quarter here :) I tried the setup as you said, and HAProxy does reload, but there is a caveat. 1) When I try with the 10-second sleep, it does reload, and when I do ps aux | grep haproxy, I see two processes until the first process finishes the running items and those assigned to it in the queue. 2) When the requests are small (not the 10-second sleep), I think the reload still happens, but some requests still get lost because of the small window of time between the old and new HAProxy. cont. – vaibhav Apr 01 '12 at 02:41
  • So, is there a way (maybe some hacky way) you think could help overcome this small window of time between the old process finishing and the new HAProxy process starting? I thought of a persistent job running, but the problem with that is that the new process won't even start unless the old one hands over and also finishes the queue assigned to it. – vaibhav Apr 01 '12 at 02:46
  • OK, so I'm really late responding to you now. :-) I'm not sure; now you need someone who understands the inner workings of sockets on Linux. HAProxy does start the new process before the old one ends, but I don't know Linux sockets well enough to know how the handoff of the listening socket works. I would think that process A would have to close it before process B could open it, and there's your hole, but maybe A can hand B the listening socket? Don't know. – WaldenL Jun 05 '12 at 19:36