
We're getting a 504 timeout in WordPress with a feature we have attached to "publish post" which takes an attached zip file, unzips it, moves the unzipped files to S3, makes a new zip and moves that to S3 too. On bigger files it's timing out after 60 seconds.

Can I make it clear here and now: this ISN'T a user function. This doesn't happen from the front end of the site when a user does anything. The user uploads the content (images, zip, etc.) to a post, which waits for us in the admin panel. Upon moderation we can choose to delete the post (which removes all the data they uploaded at the same time) or publish the post, which then takes the zip they uploaded, unzips it, checks for viruses, deletes anything that isn't an MP3, uploads the individual MP3s to S3, creates a new zip file and uploads that to S3 too. This is all running from EC2. As you can imagine, while this doesn't put too much load on the server CPU, it does often take longer than 60 seconds to move all this data to S3.

So I've seen the suggestions on *How do I prevent a Gateway Timeout with Nginx*.

I've put `fastcgi_read_timeout` into my nginx.conf and set it (for now) to 2700 in an attempt to avoid all timeout errors; I've done this with everything that involves timeouts. I've also added `client_body_timeout` and `send_timeout` as mentioned on that page. But the process still times out 60 seconds in.

Am I possibly putting them in the wrong place in nginx.conf (it restarts with no problem), using the wrong values, or is there perhaps another setting that will allow this PHP process to complete?

I have all the php-fpm timeouts set as long as I can, too.
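
For reference, these are the PHP-side limits that can each terminate the request independently of nginx — a hedged sketch with illustrative values and file names, assuming a stock php-fpm setup:

```ini
; php.ini -- per-script execution cap; the CLI default differs,
; but under FPM this commonly defaults to 30 seconds
max_execution_time = 2700

; php-fpm pool config (e.g. www.conf) -- when set, kills the worker
; after this long regardless of max_execution_time; 0 disables it
request_terminate_timeout = 2700s
```

If either of these is lower than the nginx timeouts, the request dies at the PHP layer no matter what nginx allows.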

Danny Shepherd
    You need to redesign your application, so that it responds to the user right away and does your file processing in the background. – Michael Hampton Mar 20 '13 at 11:06
  • Michael, we don't want to do this as we need the option to reject submissions that are garbage, and we don't want them appearing on S3. It also gives us the ability to stagger the process one at a time rather than run the risk of 50 people all trying to process automatically, so there are many advantages to it being triggered on post published rather than post submitted. Regardless, it's still the same background task, so the same timeout occurs, which is obviously a server-side setting — but where is what I'm asking. – Danny Shepherd Mar 20 '13 at 14:53
    Nothing about background processing would prevent you from rejecting submissions, removing them from S3, or staggering your processes. In fact, they would be easier if you did. – Michael Hampton Mar 20 '13 at 14:54
    We can't debug this without your configuration. – mgorven Mar 20 '13 at 16:33
  • mgorven — would a copy of my nginx.conf and other nginx setup files help? What about the php-fpm conf files? – Danny Shepherd Mar 20 '13 at 17:22
  • If your processing takes more than 60 seconds, and your webapp does not issue any kind of intermediate replies during such processing, then you're definitely doing something wrong, and even fixing your nginx configuration would only address the tip of the iceberg of problems you're likely to deal with in the future. – cnst Mar 20 '13 at 17:26
  • cnst — the process works within that time, but bear in mind it is unzipping a 200 MB file of MP3s, checking they are MP3s, uploading each MP3 individually to S3, making a new zip file (after removing anything that isn't an MP3) and also uploading that to S3. This is going to take more than 60 seconds in a lot of cases: it's moving up to 500 MB of data from one of our EC2 instances to S3, along with the zipping, unzipping and checking processes, and publishing the post on WordPress. – Danny Shepherd Mar 20 '13 at 17:54
  • Can I just also add: this is not a user front-end process. The user doesn't sit and wait for this; it's an admin thing. The user only uploads the files and has to wait for the upload to the server to complete. It's our job to then either reject the submission (which deletes the zip file and images attached to the post instantly) or publish the post, which then goes through the process of checking it, cleaning it, repackaging it, putting it on S3 and making the post live on the site. – Danny Shepherd Mar 20 '13 at 18:05
  • Do you have an Elastic Load Balancer in front of your EC2 instance? – jamieb Mar 20 '13 at 19:43
  • hi jamieb - we will have when the site comes to launch, but just doing dev on a single EC2 instance at the mo. – Danny Shepherd Mar 20 '13 at 20:46
  • Ok, sorted it: `fastcgi_read_timeout` needed to be set specifically in a WordPress section in one of the many .conf files that are included in my nginx image setup. Doing that worked a treat. Now I've just got to go back and try to set all the other options back to reasonable numbers, though I don't see what damage having them high will do. I realise it's not the most elegant method and it's not a background process, but all I was asking is which timeout setting I needed to change to stop it happening after exactly 60 seconds. – Danny Shepherd Mar 20 '13 at 22:15
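
The fix described in that last comment can be sketched roughly as follows — assuming the nginx setup includes a per-site .conf with a PHP/WordPress location block (the file name and socket path here are illustrative). The key point is that a value set in such a block masks anything inherited from the `http {}` level, and nginx's built-in default for `fastcgi_read_timeout` is 60 seconds, which matches the symptom exactly:

```nginx
# e.g. an included wordpress.conf (name illustrative)
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm.sock;  # illustrative socket path
    # must be raised here: the default of 60s applies in this block
    # unless the directive is set in this context (or inherited
    # without being overridden by an included file)
    fastcgi_read_timeout 2700s;
}
```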

1 Answer


HTTP requests cannot live forever - either the server or the client will eventually give up (the most generous server-side timeouts for HTTP requests are usually 5-10 minutes, and users will frequently give up before then and start banging on the reload button - a virtually guaranteed way to kill your server).

Like Michael Hampton said, you need to redesign your application to deal with the fact that you have a background processing stage which may take a long time to complete.

A little AJAX goes a long way (while the back-end is processing the client can request status updates from the server periodically - that lets you give the user feedback on progress, and avoids the whole timeout issue).
There are plenty of other ways to handle this too.
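
A rough sketch of the polling half of that pattern — the endpoint, JSON shape, and helper names below are hypothetical assumptions, not part of WordPress or any real plugin API. The publish click would enqueue the job and return immediately; the admin page then polls a cheap status endpoint, so no single request comes anywhere near the 60-second limit:

```javascript
// Hypothetical admin-page polling loop. fetchStatus is injected so the
// transport (e.g. a short GET to /wp-admin/zip-status.php?post=123) stays
// separate from the loop logic.
async function pollUntilDone(fetchStatus, { intervalMs = 2000, maxTries = 300 } = {}) {
  for (let tries = 0; tries < maxTries; tries++) {
    // Each call is a short, cheap request that returns well under any timeout
    const status = await fetchStatus();
    if (status.state === "done" || status.state === "failed") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("gave up waiting for the background job");
}
```

On each response the page can update a progress indicator, and the long-running unzip/scan/upload work happens entirely in a background worker on the server.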

voretaq7
  • voretaq7 — as I mentioned in the comments above, this ISN'T a user feature; it's an admin feature. This happens when WE decide, from the admin panel, as we approve or delete submissions. All the user does is visit the form and upload their content, which sits on the server waiting for our interaction. At that point we either reject it (which deletes the files they uploaded) or we submit it, which then unzips their file, checks it for viruses, deletes anything but MP3s, creates a new zip and uploads the new zip and individual MP3s to S3. So uploading 500 MB+ often takes more than 60 seconds... – Danny Shepherd Mar 20 '13 at 19:35
  • @DannyShepherd As everyone else has mentioned in their comments, ***that DOESN'T matter*** — it's an HTTP request through a web browser. Whether the "user" is your end users, your admins, or a trained kitten is wholly immaterial: you need to make discrete calls shorter than your timeout window to keep the connection from dying. There are plenty of sites & admin interfaces that do exactly this ([FreePBX](http://freepbx.org) updates are one example of an async front-end/back-end architecture). Check out Stack Overflow for example code if you need it. – voretaq7 Mar 20 '13 at 19:42
  • Ok, fair enough — but with that in mind, how can I set this timeout to longer than 60 seconds anyway? There must be a way to make it 2–3 minutes. Right or wrong way aside, the original question is why it is timing out at exactly 60 seconds when the server timeouts should all be set to longer than that — and if it times out even when you set your expiry times to 2700 seconds, what's the point in being able to set them longer? – Danny Shepherd Mar 20 '13 at 20:48
  • @DannyShepherd (1) it may not be a ***server*** timeout (*clients* can decide it's taking too long and give up as well!). (2) it may not be an `nginx` timeout (are you using PHP? PHP has its own maximum execution time setting...) – voretaq7 Mar 20 '13 at 21:23