
I've been searching far and wide, and everything I've come across so far suggests my configuration is correct. I can see my tasks being registered in my Celery worker container. I don't see them registered in my beat container (though I don't think I should; I should just see the scheduled entries, which I do), but the tasks are never run.

celery.py

from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'itapp.settings')

app = Celery("itapp")
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()

if __name__ == '__main__':
    app.start()

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

__init__.py

from __future__ import absolute_import, unicode_literals

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app

__all__ = ('celery_app',)

settings.py

LANGUAGE_CODE = 'en-gb'
TIME_ZONE = 'Europe/London'
USE_TZ = True
USE_I18N = True
USE_L10N = True

# CELERY SETTINGS
CELERY_TIMEZONE = 'Europe/London'
ENABLE_UTC = True
CELERY_BROKER_URL = 'amqp://rabbitmq:5672'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'
CELERY_TASK_TIME_LIMIT = 540
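(Side note: because celery.py calls config_from_object with namespace='CELERY', only settings whose names start with CELERY_ are consumed, so the bare ENABLE_UTC above is most likely ignored and Celery falls back to its default. The snippet below is a toy sketch of that prefix filtering, not Celery's actual implementation:)

```python
# Toy illustration of Celery's namespace='CELERY' option (NOT Celery's real
# code): only names carrying the prefix are consumed, and the prefix is
# stripped to form the lowercase Celery setting name.
SETTINGS = {
    "CELERY_BROKER_URL": "amqp://rabbitmq:5672",
    "CELERY_TIMEZONE": "Europe/London",
    "ENABLE_UTC": True,  # no CELERY_ prefix -> silently ignored
}

def namespaced(conf, prefix="CELERY_"):
    """Keep only prefixed keys, stripped and lowercased."""
    return {k[len(prefix):].lower(): v
            for k, v in conf.items() if k.startswith(prefix)}

print(namespaced(SETTINGS))
# {'broker_url': 'amqp://rabbitmq:5672', 'timezone': 'Europe/London'}
```

To have the UTC setting take effect it would need to be spelled `CELERY_ENABLE_UTC` under this namespace.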

docker-compose.yml

  celery:
    image: "app:latest"
    env_file: 
      - /Users/a/app/config/en_vars.txt
    volumes:
      - /Users/a/app/app:/app/app
    command: celery -A app worker -l INFO --concurrency 30
    depends_on:
      - rabbitmq
      - redis
  celery_beat:
    image: "app:latest"
    env_file: 
      - /Users/a/app/config/en_vars.txt
    volumes:
      - /Users/a/app/app:/app/app
    command: celery -A app beat -l DEBUG --pidfile= 
    depends_on:
      - rabbitmq
      - redis

periodic schedule

scheduled job

celery logs

 -------------- celery@a05cfcda833f v4.4.7 (cliffs)
--- ***** -----
-- ******* ---- Linux-5.4.39-linuxkit-x86_64-with-debian-9.7 2020-11-26 14:57:55
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app:         itapp:0x7fb8d9e80518
- ** ---------- .> transport:   amqp://guest:**@rabbitmq:5672//
- ** ---------- .> results:     redis://redis:6379/
- *** --- * --- .> concurrency: 30 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery


[tasks]
  . monitoring.tasks.monitoring_devices

[2020-11-26 14:58:09,160: INFO/MainProcess] Connected to amqp://guest:**@rabbitmq:5672//
[2020-11-26 14:58:09,201: INFO/MainProcess] mingle: searching for neighbors
[2020-11-26 14:58:10,266: INFO/MainProcess] mingle: all alone
[2020-11-26 14:58:10,333: INFO/MainProcess] celery@a05cfcda833f ready.
[2020-11-26 14:58:10,417: INFO/MainProcess] Events of group {task} enabled by remote.

beat logs

celery beat v4.4.7 (cliffs) is starting.
__    -    ... __   -        _
LocalTime -> 2020-11-26 14:57:55
Configuration ->
    . broker -> amqp://guest:**@rabbitmq:5672//
    . loader -> celery.loaders.app.AppLoader
    . scheduler -> django_celery_beat.schedulers.DatabaseScheduler

    . logfile -> [stderr]@%DEBUG
    . maxinterval -> 5.00 seconds (5s)
[2020-11-26 14:57:55,321: DEBUG/MainProcess] Setting default socket timeout to 30
[2020-11-26 14:57:55,322: INFO/MainProcess] beat: Starting...
[2020-11-26 14:57:55,325: DEBUG/MainProcess] DatabaseScheduler: initial read
[2020-11-26 14:57:55,326: INFO/MainProcess] Writing entries...
[2020-11-26 14:57:55,327: DEBUG/MainProcess] DatabaseScheduler: Fetching database schedule
[2020-11-26 14:57:55,742: DEBUG/MainProcess] Current schedule:
<ModelEntry: celery.backend_cleanup celery.backend_cleanup(*[], **{}) <crontab: 0 4
         * *
          * (m/h/d/dM/MY), Europe/London>
        >
<ModelEntry: Meraki Device Monitor monitoring.tasks.monitoring_devices(*[], **{}) <freq: 2.00 minutes>>
[2020-11-26 14:57:56,537: DEBUG/MainProcess] beat: Ticking with max interval->5.00 seconds
[2020-11-26 14:57:57,094: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.
[2020-11-26 14:58:02,101: DEBUG/MainProcess] beat: Synchronizing schedule...
[2020-11-26 14:58:02,102: INFO/MainProcess] Writing entries...
[2020-11-26 14:58:02,365: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.
[2020-11-26 14:58:07,663: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.
[2020-11-26 14:58:12,897: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.

Running a single container with the combined command "celery -A app worker -l INFO --beat" and debug logging enabled produces the logs below:

[2020-11-26 17:36:42,710: DEBUG/MainProcess] using channel_id: 1
[2020-11-26 17:36:42,725: DEBUG/MainProcess] Channel open
[2020-11-26 17:36:43,791: INFO/MainProcess] mingle: sync with 1 nodes
[2020-11-26 17:36:43,794: DEBUG/MainProcess] mingle: processing reply from celery@a05cfcda833f
[2020-11-26 17:36:43,795: INFO/MainProcess] mingle: sync complete
[2020-11-26 17:36:43,796: DEBUG/MainProcess] ^-- substep ok
[2020-11-26 17:36:43,797: DEBUG/MainProcess] | Consumer: Starting Tasks
[2020-11-26 17:36:43,831: DEBUG/MainProcess] ^-- substep ok
[2020-11-26 17:36:43,832: DEBUG/MainProcess] | Consumer: Starting Control
[2020-11-26 17:36:43,833: DEBUG/MainProcess] using channel_id: 2
[2020-11-26 17:36:43,837: DEBUG/MainProcess] Channel open
[2020-11-26 17:36:43,858: DEBUG/MainProcess] ^-- substep ok
[2020-11-26 17:36:43,859: DEBUG/MainProcess] | Consumer: Starting Gossip
[2020-11-26 17:36:43,860: DEBUG/MainProcess] using channel_id: 3
[2020-11-26 17:36:43,863: DEBUG/MainProcess] Channel open
[2020-11-26 17:36:43,877: DEBUG/MainProcess] ^-- substep ok
[2020-11-26 17:36:43,878: DEBUG/MainProcess] | Consumer: Starting Heart
[2020-11-26 17:36:43,879: DEBUG/MainProcess] using channel_id: 1
[2020-11-26 17:36:43,883: DEBUG/MainProcess] Channel open
[2020-11-26 17:36:43,888: DEBUG/MainProcess] ^-- substep ok
[2020-11-26 17:36:43,889: DEBUG/MainProcess] | Consumer: Starting event loop
[2020-11-26 17:36:43,891: DEBUG/MainProcess] | Worker: Hub.register Pool...
[2020-11-26 17:36:43,898: WARNING/MainProcess] /usr/local/lib/python3.7/site-packages/celery/fixups/django.py:206: UserWarning: Using settings.DEBUG leads to a memory
            leak, never use this setting in production environments!
  leak, never use this setting in production environments!''')
[2020-11-26 17:36:43,899: INFO/MainProcess] celery@123d2a28ed9f ready.
[2020-11-26 17:36:43,900: DEBUG/MainProcess] basic.qos: prefetch_count->12
[2020-11-26 17:36:44,139: DEBUG/MainProcess] celery@a05cfcda833f joined the party
[2020-11-26 17:36:45,832: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2020-11-26 17:36:45,834: INFO/MainProcess] Events of group {task} enabled by remote.
[2020-11-26 17:36:47,337: INFO/Beat] beat: Starting...
[2020-11-26 17:36:47,345: DEBUG/Beat] DatabaseScheduler: initial read
[2020-11-26 17:36:47,346: INFO/Beat] Writing entries...
[2020-11-26 17:36:47,347: DEBUG/Beat] DatabaseScheduler: Fetching database schedule
[2020-11-26 17:36:47,782: DEBUG/Beat] Current schedule:
<ModelEntry: celery.backend_cleanup celery.backend_cleanup(*[], **{}) <crontab: 0 4
         * *
          * (m/h/d/dM/MY), Europe/London>
        >
<ModelEntry: Meraki Device Monitor monitoring.tasks.monitoring_devices(*[], **{}) <freq: 2.00 minutes>>
[2020-11-26 17:36:48,629: DEBUG/Beat] beat: Ticking with max interval->5.00 seconds
[2020-11-26 17:36:49,125: DEBUG/Beat] beat: Waking up in 5.00 seconds.
[2020-11-26 17:36:50,835: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2020-11-26 17:36:54,093: DEBUG/Beat] beat: Synchronizing schedule...
[2020-11-26 17:36:54,095: INFO/Beat] Writing entries...
[2020-11-26 17:36:54,336: DEBUG/Beat] beat: Waking up in 5.00 seconds.
[2020-11-26 17:36:55,802: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2020-11-26 17:36:59,592: DEBUG/Beat] beat: Waking up in 5.00 seconds.
[2020-11-26 17:37:00,834: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2020-11-26 17:37:03,859: DEBUG/MainProcess] heartbeat_tick : for connection 6d96e75dc3d240809cca3a0aa738e512
[2020-11-26 17:37:03,859: DEBUG/MainProcess] heartbeat_tick : Prev sent/recv: None/None, now - 28/103, monotonic - 33966.588017603, last_heartbeat_sent - 33966.587982103, heartbeat int. - 60 for connection 6d96e75dc3d240809cca3a0aa738e512
[2020-11-26 17:37:04,876: DEBUG/Beat] beat: Waking up in 5.00 seconds.
[2020-11-26 17:37:05,834: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2020-11-26 17:37:10,131: DEBUG/Beat] beat: Waking up in 5.00 seconds.
[2020-11-26 17:37:10,836: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2020-11-26 17:37:15,364: DEBUG/Beat] beat: Waking up in 5.00 seconds.
[2020-11-26 17:37:15,833: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2020-11-26 17:37:20,773: DEBUG/Beat] beat: Waking up in 5.00 seconds.
[2020-11-26 17:37:20,833: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2020-11-26 17:37:23,827: DEBUG/MainProcess] heartbeat_tick : for connection 6d96e75dc3d240809cca3a0aa738e512
[2020-11-26 17:37:23,827: DEBUG/MainProcess] heartbeat_tick : Prev sent/recv: 28/103, now - 28/176, monotonic - 33986.590378703, last_heartbeat_sent - 33966.587982103, heartbeat int. - 60 for connection 6d96e75dc3d240809cca3a0aa738e512

Data in the live DB:

>>> PeriodicTask.objects.all()
<ExtendedQuerySet [<PeriodicTask: celery.backend_cleanup: 0 4 * * * (m/h/d/dM/MY) Europe/London>, <PeriodicTask: Device Monitor: every 2 minutes>]>
>>> PeriodicTasks.objects.all()
<ExtendedQuerySet [<PeriodicTasks: PeriodicTasks object (1)>]>
>>> vars(PeriodicTasks.objects.all()[0])
{'_state': <django.db.models.base.ModelState object at 0x7fc0e1f300f0>, 'ident': 1, 'last_update': datetime.datetime(2020, 12, 7, 9, 1, 59, tzinfo=<UTC>)}
>>>
AlexW
  • Can you add logs from over 2 seconds? As that's your interval. Also, I have had similar issues when using celery with docker. Try removing containers, images and then recreate image and run container. – Tom Wojcik Nov 26 '20 at 15:26
  • the celery logs end there; it's not doing anything else, just waiting. The beat logs are a repeat of "waking up in 5.00 seconds". I've tried that on two different hosts, no luck. I've also just tried using the single command argument on one container to see if that worked, "celery -A app worker -l INFO --beat"; it all launches successfully but again no schedule runs – AlexW Nov 26 '20 at 17:36
  • I would try to launch a one-off task with a period of 3 seconds and see if anything happens – maremare Nov 26 '20 at 20:03
  • the one off task didn't run either – AlexW Nov 27 '20 at 10:04
  • does it work if you specify the scheduler as a command argument? `celery -A app beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler` – matt.LLVW Dec 01 '20 at 16:29
  • no, no difference. I tried setting the start time to be 1 minute ahead of now in case that reset it; no difference when "now" hit. The only message from beat in debug is "waking up in 5.0 seconds" and nothing else – AlexW Dec 01 '20 at 18:30
  • @AlexW, can you try to run them manually? (under django admin, select "Run selected tasks") – manishg Dec 02 '20 at 01:09
  • I can't see that option anywhere – AlexW Dec 02 '20 at 09:50
  • https://i.stack.imgur.com/AjvKU.png -- In this image, there is a dropdown next to "Action". You will find the option there. @AlexW – manishg Dec 02 '20 at 20:40
  • @manishg yes that worked, the celery container received the job and executed it, however the last run time hasn't been set (not sure if it is when run manually?) – AlexW Dec 03 '20 at 09:56
  • So this means your beat is not working with your database. Are you using multiple tenants? I would go to the database and check if the schedule is created correctly – manishg Dec 03 '20 at 13:53
  • you're right; when I set my dev env to use a local db.sqlite3 database it works, but when I change it to our live MySQL DB it doesn't. I've looked at the structure of the tables and they seem identical. Any ideas how I can troubleshoot this? – AlexW Dec 04 '20 at 15:48
  • @AlexW, so celery should write the periodic tasks to a table called `PeriodicTask` and this is the one celery beat should pick up. I am worried whether you have multiple schemas in your database and the default one doesn't have the listed periodic task. Try the shell commands to check it out (https://docs.celeryproject.org/en/stable/userguide/periodic-tasks.html) – manishg Dec 04 '20 at 19:58
  • Which version of django_celery_beat are you using? – pygeek Dec 05 '20 at 05:46
  • I'm not sure if you're aware, but if you make changes to your Timezone after running the PeriodicTask migrations, you'll need to update the PeriodicTask (https://docs.celeryproject.org/en/stable/userguide/periodic-tasks.html#time-zones). This may be why your app is working under sqlite and not production mysql. – pygeek Dec 05 '20 at 06:28
  • celery==4.4.7, django-celery-beat==2.0.0. I've run the reset-timezone command from the shell; it hasn't made any difference. What other shell commands were you referring to in the document, in case I'm missing anything? – AlexW Dec 05 '20 at 12:51
  • Did you check your database to see if the tables have the schedule and tasks? – manishg Dec 06 '20 at 02:18
  • I've added the output to the question; PeriodicTask has them, but PeriodicTasks (plural) doesn't look like it has them – AlexW Dec 07 '20 at 09:11
  • PeriodicTasks (plural) in my local sqlite3 db also only has one entry, so not sure if it's related? – AlexW Dec 07 '20 at 17:32
  • @manishg any ideas? – AlexW Dec 07 '20 at 22:01
  • @AlexW, the `PeriodicTask` table has the correct entries, so ideally it should work. It's most likely a bug in celery or celery beat not picking up your schedule. Maybe downgrade celery to something like 4.3.0 and beat to maybe 1.5.0 and give it a try – manishg Dec 08 '20 at 02:58
  • I had to use 4.3.1 due to a vine import error, but it's working!!! – AlexW Dec 08 '20 at 17:12
  • @AlexW so it is indeed a bug with celery. Maybe a good idea to report it to them – manishg Dec 08 '20 at 20:16

1 Answer

Your docker-compose.yml doesn't have any MySQL links. Is it possible that your application can't reach the database server?

If you suspect that this is the issue, list the Docker containers with docker ps, then open a shell inside the relevant container with docker exec -ti <containerid> bash and use mysql (or even nc, curl, or wget) to check whether the database host can be resolved and reached. If not, link to it from the docker-compose.yml, which will insert it into the container's hosts file automatically; alternatively add the entry manually, or just update the host in your configuration files.
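As a quick sanity check that doesn't depend on which CLI tools are installed in the image, a few lines of stdlib Python work too. This is a generic sketch; the hostnames are the service names from this compose file, and the MySQL host is whatever settings.py points at (shown as a placeholder):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, connection refused, timeout
        return False

# Run from inside the beat container (e.g. via docker exec ... python):
#   can_connect("rabbitmq", 5672)
#   can_connect("redis", 6379)
#   can_connect("<your-mysql-host>", 3306)   # placeholder: host from settings.py
```

If the MySQL call returns False while the broker calls return True, beat can tick but can never read its database schedule, which matches the symptoms above.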

dagelf
  • the MySQL DB connection is in settings.py; all the containers in compose can access the DB. I can load manage.py shell, import models, and browse data successfully – AlexW Dec 07 '20 at 16:38
  • Does your MySQL ACL allow access from all containers? – dagelf Dec 08 '20 at 15:12