
I am having the same symptoms as https://forums.aws.amazon.com/message.jspa?messageID=580990#580990 (the visibility timeout is not respected), but on an EB Preconfigured Docker Python worker environment. My queue visibility timeout is 1800s, configured in both the EB worker settings and SQS.

My messages take more than 60s to process, so I receive a 502 after 60s, and because the daemon gets a 502 it retries the message instead of waiting out the 1800s visibility timeout. I tried the .ebextensions proxy.conf solution mentioned in the link (by ecd_bm) to no avail.

My /var/log/nginx/access.log gives:

127.0.0.1 - - [18/May/2015:08:56:58 +0000] "POST /scrape-emails HTTP/1.1" 502 172 "-" "aws-sqsd/2.0"

My nginx /var/log/nginx/error.log gives:

2015/05/18 08:56:58 [error] 12465#0: *32 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: , request: "POST /scrape-emails HTTP/1.1", upstream: "http://172.17.0.4:8080/scrape-emails", host: "localhost"

My /var/log/aws-sqsd/default.log gives:

2015-05-18T08:56:58Z http-err: 8240b585-61c3-4fba-b99a-265ace312308 (1) 502 - 60.050

For reference, my /etc/nginx/nginx.conf looks like this:

# Elastic Beanstalk Nginx Configuration File

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log;

pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    access_log    /var/log/nginx/access.log;

    include       /etc/nginx/conf.d/*.conf;
    include       /etc/nginx/sites-enabled/*;
}

I used to receive 504s after 60s, but adding the following to /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf (which is included by /etc/nginx/nginx.conf) got rid of them; they were simply replaced by the 502s above:

map $http_upgrade $connection_upgrade {
    default     "upgrade";
    ""          "";
}

server {
    listen 80;

    location / {
        proxy_pass          http://docker;
        proxy_http_version  1.1;

        proxy_set_header    Connection      $connection_upgrade;
        proxy_set_header    Upgrade     $http_upgrade;
        proxy_set_header    Host            $host;
        proxy_set_header    X-Real-IP       $remote_addr;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffers 8 16k;
        proxy_buffer_size 32k;
        proxy_connect_timeout 1800s;
        proxy_send_timeout 1800s;
        proxy_read_timeout 1800s;

    }
}

I have set every proxy parameter that defaults to 60s to 1800s (see http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers).

I have noticed that the uwsgi log says "your mercy for graceful operations on workers is 60 seconds". Could this be the issue? If so, how do I fix it? If not, how do I stop the 502s?
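As far as I know, that log line reflects uWSGI's worker-reload-mercy setting (how long workers get to finish up during a reload/shutdown), so on its own it should only matter while workers are being reloaded; the setting that hard-kills long requests is harakiri. A minimal sketch of the relevant options follows; the option names are standard uWSGI, but the ini file path inside the Preconfigured Docker Python image is an assumption on my part:

; sketch only: standard uWSGI options; where this ini lives inside the
; Preconfigured Docker Python image is an assumption
[uwsgi]
; hard-kill a request only after 1800s (harakiri is disabled when unset)
harakiri = 1800
; raise the 60-second "mercy for graceful operations on workers" window
; that applies when workers are reloaded or shut down
worker-reload-mercy = 1800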

Also, I have added the following to /etc/nginx/uwsgi_params to no avail:

uwsgi_read_timeout 1800s;
uwsgi_send_timeout 1800s;
uwsgi_connect_timeout 1800s;
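Note that uwsgi_read_timeout, uwsgi_send_timeout and uwsgi_connect_timeout are nginx directives from ngx_http_uwsgi_module: they only take effect in a location that uses uwsgi_pass, and /etc/nginx/uwsgi_params is just a list of uwsgi_param lines included into such a location. Since the Docker proxy above uses proxy_pass, adding them there has no effect. For completeness, a minimal sketch of where they would belong if nginx talked to uWSGI directly (the socket path is an illustrative assumption):

# only relevant when nginx talks to uWSGI via uwsgi_pass, not proxy_pass;
# the socket path is an illustrative assumption
location / {
    include               /etc/nginx/uwsgi_params;
    uwsgi_pass            unix:///var/run/uwsgi/app.sock;
    uwsgi_connect_timeout 1800s;
    uwsgi_send_timeout    1800s;
    uwsgi_read_timeout    1800s;
}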

After editing an nginx config file (over ssh), I would always "Restart App Server(s)" in the EB web console and then test again.
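For what it's worth, changes made over ssh are normally lost on the next deployment because Elastic Beanstalk regenerates its nginx configuration, so a persistent variant of the same change is an .ebextensions config that drops the timeouts into /etc/nginx/conf.d/ (picked up by the include in nginx.conf above). A minimal sketch; the file names are illustrative assumptions, not platform requirements:

# .ebextensions/01-nginx-timeouts.config  (sketch; file names are assumptions)
files:
  "/etc/nginx/conf.d/long-timeouts.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      proxy_connect_timeout 1800s;
      proxy_send_timeout    1800s;
      proxy_read_timeout    1800s;

commands:
  01_reload_nginx:
    command: service nginx restart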

Any ideas on how to get rid of the 502s and have the visibility timeout respected while a message is being processed?

Gobi Dasu
  • Did you set harakiri in your uwsgi file? Did you try setting limit-post? client_max_body_size may also help. I had a similar issue before and it was really hard to figure out... – Andrii Rusanov May 22 '15 at 11:57
  • Did you manage to find a solution to this? I have the exact same problem. – awidgery May 06 '16 at 06:04
  • Would also love to see an answer to this. Seems like it should be possible to set nginx based on the environment's visibility timeout with an appropriately-crafted ebextension. – Mat Schaffer Jan 11 '18 at 07:08

1 Answer


Here's what I've worked out so far. I have no idea if this is a "safe" way to read the queue's visibility timeout, but it seems to do the trick on my Ruby worker environment for now:

packages:
  yum:
    jq: []

commands:
  match_nginx_timeout_to_sqs_timeout:
    command: |
      # Pull the worker environment's SQS visibility timeout out of the
      # CloudFormation metadata for this Beanstalk stack
      VISIBILITY_TIMEOUT=$(
        /opt/aws/bin/cfn-get-metadata --region `{"Ref": "AWS::Region"}` --stack `{"Ref": "AWS::StackName"}` \
          --resource AWSEBBeanstalkMetadata --key AWS::ElasticBeanstalk::Ext |
          jq -r '.Parameters.AWSEBVisibilityTimeout'
      )
      # Mirror the value into nginx's proxy_read_timeout and pick up the change
      if [[ -n "${VISIBILITY_TIMEOUT}" ]]; then
        echo "proxy_read_timeout ${VISIBILITY_TIMEOUT}s;" > /etc/nginx/conf.d/worker.conf
        service nginx restart
      fi

I actually had a secondary use for this data, so I ended up splitting it out into a properties-cache file as well. See https://github.com/Safecast/ingest/pull/43/files for details.

I get the impression that updating the visibility timeout from the Beanstalk UI won't update this value until the next deployment, but I'm okay with that, since it doesn't change very often for an environment anyway.
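To sanity-check the result after a deployment, the generated file can be inspected on the instance (paths as in the snippet above):

# run on the instance after a deployment
cat /etc/nginx/conf.d/worker.conf   # should contain e.g.: proxy_read_timeout 1800s;
sudo nginx -t                       # confirm the config still parses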

Mat Schaffer