13

I have my web app running in PHP on an AWS EC2 instance, and I make an AJAX call that takes about 5-10 minutes. In the web console in Google Chrome I see a 504 (Gateway Timeout) error. How can I increase this timeout value? Is it related to Apache? Thanks

  • Where did the 504 message originate - in Apache, in PHP, or in some kind of load balancer / CDN? What does your phpinfo() say next to "Server API"? – TML Nov 19 '13 at 22:49
  • If you're running your call through PHP, you need to increase your [maximum execution time in php.ini](http://www.php.net/manual/en/info.configuration.php#ini.max-execution-time). – Mat Carlson Nov 19 '13 at 22:55
  • @TML I strongly believe that it is due to ELB but not sure how to fix that –  Nov 19 '13 at 23:14
  • 1
    @matcarlson its in ELB because once I use instance URL that ajax call works fine –  Nov 19 '13 at 23:14
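Following the max_execution_time pointer in the comments above, a minimal php.ini sketch (the value is illustrative, not a recommendation; scripts can also override it at runtime with set_time_limit()):

```
; php.ini -- illustrative value for a 5-10 minute request
max_execution_time = 600
```

Note that raising the PHP limit alone will not help here if the 504 is produced by a load balancer in front of the instance.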

3 Answers

15

ELB by default times out at 60 seconds; there is no way I know of to extend this limit, although this page suggests that it's something Amazon Support can do for you (and also suggests a method of working around the problem):

Point 6) Amazon ELB times out at 60 seconds (when kept idle)

Amazon ELB currently times out persistent socket connections at 60 seconds if they are kept idle. This is a problem for use cases that generate large files (PDFs, reports, etc.) on the backend EC2 instance, send them back as the response, and keep the connection idle during the entire generation process. To avoid this you'll have to send something on the socket every 40 seconds or so to keep the connection active in Amazon ELB. Note: I heard we can extend this value after explaining the case to the AWS support team.

Edit: As commenters below have pointed out, as of July 24th, 2014 this is configurable in your AWS console.
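Since it became configurable, the idle timeout can also be changed from the command line; a hedged sketch using the AWS CLI (the load balancer name and ARN are placeholders):

```shell
# Classic ELB: raise the idle timeout to 10 minutes
aws elb modify-load-balancer-attributes \
  --load-balancer-name my-load-balancer \
  --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":600}}"

# ALB (elbv2): the equivalent attribute
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn <your-lb-arn> \
  --attributes Key=idle_timeout.timeout_seconds,Value=600
```

The same setting is exposed in the AWS console under the load balancer's attributes.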

TML
  • 12,813
  • 3
  • 38
  • 45
  • 2
    I've had success with getting the ELB timeout limit extended by asking before. It's worth confirming (so you can relay to the AWS engineers) that your backend has a greater timeout than the timeout you're seeking to increase the ELB to. – Jeff Sisson Nov 20 '13 at 01:12
  • @JeffSisson they extended it to 20 Minutes but what if the request takes more than 20 minutes? –  Nov 20 '13 at 10:37
  • 1
    @user1765876 If a request is taking that long, you are probably doing it wrong. Start using message queues. – datasage Nov 20 '13 at 15:20
  • 1
    Yeah, you'll need to look into some kind of asynchronous operation, queueing being the general category here. Heavy operations shouldn't be done on the request. – Jeff Sisson Nov 20 '13 at 15:36
  • 14
    Idle timeout is now configurable! http://aws.amazon.com/about-aws/whats-new/2014/07/24/elastic-load-balancing-now-supports-idle-timeout-configuration/ – Elad Nava Aug 16 '14 at 18:36
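The queueing advice in the comments above can be sketched as follows (illustrative Python; the names and the in-memory store are invented, in practice you would use something like Redis, SQS, or a proper job queue): the heavy work is handed to a background worker, the original request returns a job id immediately, and the client polls a cheap status endpoint that never hits the 60-second limit.

```python
import threading
import time
import uuid
from queue import Queue

jobs = {}            # job_id -> status/result (assumed in-memory store)
work_queue = Queue()

def worker():
    """Background worker: pulls jobs off the queue and runs them."""
    while True:
        job_id, payload = work_queue.get()
        jobs[job_id]["status"] = "running"
        time.sleep(0.1)  # stand-in for the 5-10 minute report generation
        jobs[job_id] = {"status": "done", "result": payload.upper()}
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def submit(payload):
    """Handles the original AJAX request: enqueue and return immediately."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "queued"}
    work_queue.put((job_id, payload))
    return job_id

def poll(job_id):
    """Handles a cheap follow-up AJAX request for the job's status."""
    return jobs.get(job_id, {"status": "unknown"})

if __name__ == "__main__":
    jid = submit("report data")
    while poll(jid)["status"] != "done":
        time.sleep(0.05)
    print(poll(jid)["result"])
```

With this shape, no single HTTP request outlives the load balancer's idle timeout, regardless of how long the work itself takes.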
0

Have you tried changing the timeout on the ELB? Increase the connection idle timeout under the Load Balancer Attributes. There you can change the default limit from 60 to 120 seconds.

0

There is a related question on SO here. Ensure you are doing the following (from this related question):

  1. Confirm that the containers/pods are running,
  2. that the application has started, and
  3. that if you exec into the pod you can run a local curl command and get a response from the app.
  4. Check the logs on the ingress pods and confirm that traffic is arriving.

Some useful commands for this here and here.

The next step is to increase the ELB idle timeout as mentioned in the previous answer. If you are using Terraform, you can set this in the annotations of your ingress:

"alb.ingress.kubernetes.io/load-balancer-attributes" = "idle_timeout.timeout_seconds=600"

Usually there is an NGINX ingress controller on top of that ELB that helps direct the traffic to your services. It also has timeouts, usually 60 seconds. You need to change its timeouts as specified here. If you are using Terraform you can specify them in your ingress annotations like so (here the send and read timeouts are raised from 60 to 300 seconds):

  "nginx.ingress.kubernetes.io/proxy-connect-timeout" = "10"
  "nginx.ingress.kubernetes.io/proxy-send-timeout" = "300"
  "nginx.ingress.kubernetes.io/proxy-read-timeout" = "300"

You can then shell into your ingress controller as explained in links 2 and 3 and check your nginx.conf file; the new limits should be applied.
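That verification step might look roughly like this (the namespace and pod name are placeholders; adjust to your cluster):

```shell
# Find the ingress controller pod (namespace is a placeholder)
kubectl get pods -n ingress-nginx

# Check that the new timeouts landed in the rendered config
kubectl exec -n ingress-nginx <controller-pod> -- \
  grep -E "proxy_(read|send)_timeout" /etc/nginx/nginx.conf
```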

CVname
  • 347
  • 3
  • 12