
I deployed my Django project to AWS ECS using Docker. For Celery, I set up RabbitMQ on a separate EC2 server (two EC2 instances: one for the broker and one for the result backend).

The problem is that the Celery worker works locally but not on AWS. When I run docker run --rm -it -p 8080:80 proj locally, the worker works.

But when I deploy the app on ECS, the worker does not work, even though supervisor is configured to run it. Instead, I have to start a worker manually with celery -A mysite worker -l INFO from my local Django project.
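
For reference, these are the kinds of checks that can help confirm whether supervisord actually started the worker inside the running container on the ECS instance (a sketch; the container name is a placeholder, and supervisorctl may need -c /etc/supervisor/supervisord.conf):

docker exec -it <container-name> bash
supervisorctl status                      # uwsgi, nginx and celery should all show RUNNING
tail -n 50 /var/log/celery-worker.err     # the stderr log path configured in supervisor-app.conf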

Below is my code.

Dockerfile

FROM        ubuntu:16.04

# A layer is created for each instruction (e.g. RUN, ENV, COPY, etc.)
RUN         apt-get -y update
RUN         apt-get -y install python3 python3-pip
RUN         apt-get -y install nginx
RUN         apt-get -y install python-dev libpq-dev
RUN         apt-get -y install supervisor

WORKDIR     /srv
RUN         mkdir app

COPY        . /srv/app
WORKDIR     /srv/app

RUN         pip3 install -r requirements.txt
RUN         pip3 install uwsgi
ENV         DEBUG="False" \
            STATIC="s3" \
            REGION="Tokyo"

COPY        .conf/uwsgi-app.ini         /etc/uwsgi/sites/app.ini
COPY        .conf/nginx.conf            /etc/nginx/nginx.conf
COPY        .conf/nginx-app.conf        /etc/nginx/sites-available/app.conf
COPY        .conf/supervisor-app.conf   /etc/supervisor/conf.d/
COPY        .conf/docker-entrypoint.sh  /
RUN         ln -s /etc/nginx/sites-available/app.conf   /etc/nginx/sites-enabled/app.conf

EXPOSE      80
CMD         supervisord -n
ENTRYPOINT  ["/docker-entrypoint.sh"]
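
One thing worth noting about this Dockerfile: because an exec-form ENTRYPOINT is set, the shell-form CMD is handed to /docker-entrypoint.sh as its arguments, so supervisord only starts if the entrypoint script eventually executes them. The script itself is not shown in the question; a minimal sketch of the usual pattern would end like this:

#!/bin/bash
# (hypothetical) any one-off container setup would go here
exec "$@"    # hand control to the CMD, i.e. `supervisord -n`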

supervisor-app.conf

[program:uwsgi]
command = uwsgi --ini /etc/uwsgi/sites/app.ini

[program:nginx]
command = nginx

[program:celery]
directory = /srv/app/django_app/
command = celery -A mysite worker -l INFO --concurrency=6
numprocs=1

stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.err
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
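
A related note: supervisord passes its own environment down to the programs it starts, so variables defined in the ECS task definition should reach the worker; they can also be set explicitly per program with supervisor's environment option. A sketch with hypothetical values:

[program:celery]
directory = /srv/app/django_app/
command = celery -A mysite worker -l INFO --concurrency=6
; hypothetical addition: pass variables explicitly to the worker process
environment = DJANGO_SETTINGS_MODULE="mysite.settings",C_FORCE_ROOT="true"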

settings.py

# CELERY STUFF
BROKER_URL = 'amqp://user:password@example.com//'
CELERY_RESULT_BACKEND = 'amqp://user:password@example.com//'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'

celery.py

import os
from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
app = Celery('mysite')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

tasks.py

from celery.task import task
from celery.utils.log import get_task_logger

from .helpers import send_password_email

logger = get_task_logger(__name__)


@task(name="send_password_email_task")
def send_password_email_task(email, password):
    """Send an email when user when a user requests to find a password"""
    logger.info("Sent feedback email")
    return send_password_email(email, password)
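
For context, the task is queued from Django code roughly like this (a usage sketch; the import path is hypothetical, and a worker connected to the same broker must be running for the task to be picked up):

from users.tasks import send_password_email_task  # hypothetical app path

# enqueue the task on the broker; a running worker executes it asynchronously
send_password_email_task.delay("user@example.com", "temporary-password")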

I also added the Nginx configuration below.

nginx.conf

user root;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
daemon off;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    server_names_hash_bucket_size 512;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}


#mail {
#   # See sample authentication script at:
#   # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#   # auth_http localhost/auth.php;
#   # pop3_capabilities "TOP" "USER";
#   # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#   server {
#       listen     localhost:110;
#       protocol   pop3;
#       proxy      on;
#   }
#
#   server {
#       listen     localhost:143;
#       protocol   imap;
#       proxy      on;
#   }
#}

nginx-app.conf

server {
        listen 80;
        server_name localhost ~^(.+)$;
        charset utf-8;
        client_max_body_size 128M;


        location / {
                uwsgi_pass      unix:///tmp/app.sock;
                include         uwsgi_params;
        }
}

ACL inbound rules

(screenshot of the ACL inbound rules)

Task definition

(screenshot of the ECS task definition)

  • Hi, I do not understand what you mean by "it does not work". What happens? Does the container not start? Is the application not accessible? Have you modified the EC2 instance security group to allow traffic on port 8080? Out of the box it is closed. – Maurizio Benedetti Apr 12 '17 at 08:36
  • Hi~ @MaurizioBenedetti When I run Docker locally on port 8080, the Celery worker works at `http://localhost:8080`, but that is only the local case. The ECS app server works fine except for Celery, and I have already set the EC2 security group to allow all traffic. The container is running under supervisord, e.g. Nginx works fine. – byunghyun park Apr 12 '17 at 08:58
  • So locally you can telnet to localhost 8080, but from another machine you can't. Right? There are two options: the security group or the OS firewall. If running on Linux, check the iptables/firewalld configuration. Which firewall are you using? – Maurizio Benedetti Apr 12 '17 at 09:04
  • @MaurizioBenedetti The security group I mentioned was the AWS EC2 instance security group. I think I should look at iptables. It's a concept I am not familiar with yet. – byunghyun park Apr 12 '17 at 09:18
  • Which distribution and version are you using? We can help with the commands; for a quick test you could disable the firewall. – Maurizio Benedetti Apr 12 '17 at 09:47
  • @MaurizioBenedetti If you are talking about the Linux version, the app is deployed with an Ubuntu 16.04 Docker image. – byunghyun park Apr 12 '17 at 10:55
  • @MaurizioBenedetti Is this [wiki link](https://wiki.ubuntu.com/UncomplicatedFirewall) what you mean? – byunghyun park Apr 12 '17 at 11:09
  • @MaurizioBenedetti I also added the Nginx-related code. – byunghyun park Apr 12 '17 at 11:16
  • And the ECS Linux version is **Amazon Linux AMI release 2016.09**. – byunghyun park Apr 12 '17 at 11:23
  • Strange, it should not be there AFAIK. Did you define any Network ACL? – Maurizio Benedetti Apr 12 '17 at 12:00
  • @MaurizioBenedetti Yes, I added a screenshot of the ACL information to the question. (I do not know if it is what you want.) – byunghyun park Apr 12 '17 at 12:41
  • @MaurizioBenedetti I added a screenshot of the ECS task definition to the question. – byunghyun park Apr 12 '17 at 13:10
  • @byunghyunpark (I have a different question) From the supervisor conf I see you are starting one worker with a concurrency of 6. Have you tried running one worker per container? If yes, would you be able to share details? Thanks. – Hussain Bohra Apr 14 '18 at 02:03

2 Answers


The solution turned out to be simple: I had set the broker host to localhost. There was no problem with the app code.
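
In other words, inside the ECS container localhost refers to the container itself, so the broker and result backend URLs have to point at the RabbitMQ EC2 instance's address. A sketch with a placeholder hostname (5672 is the default AMQP port):

BROKER_URL = 'amqp://user:password@rabbitmq-ec2.example.com:5672//'
CELERY_RESULT_BACKEND = 'amqp://user:password@rabbitmq-ec2.example.com:5672//'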

  • I'm not quite sure what you mean by this. Can you add an updated config to your answer? I've got similar symptoms on my ECS deployment. – vitale232 Oct 21 '19 at 13:12

If you want to run a Celery worker in ECS, note that containers run as the root user by default, and to run a worker as root you have to set an environment variable, as documented here:

http://docs.celeryproject.org/en/latest/userguide/daemonizing.html

When you define the variables in the task definition, set C_FORCE_ROOT = true.
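
For example, in the ECS task definition the variable can be added to the container definition's environment list (a sketch of the relevant JSON fragment):

"environment": [
    { "name": "C_FORCE_ROOT", "value": "true" }
]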