
I am using raven to log from my Celery jobs to Sentry. I am finding that whenever I log to Sentry through the Django logging system, each logging call takes minutes to complete (though the message does get through). If I remove Sentry from my logging configuration, logging is instant.
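For reference, the relevant part of my logging configuration looks roughly like this (a sketch; the handler class is raven's Django SentryHandler, the logger name is just a placeholder):

# settings.py -- sketch of the logging config in use
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'sentry': {
            'level': 'ERROR',
            'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler',
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'myapp': {
            'handlers': ['console', 'sentry'],
            'level': 'INFO',
        },
    },
}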

I tried falling back to using raven directly:

import raven
client = raven.Client("DSN")
client.captureMessage("message")

This works with no delay inside the worker.

But if I use the Django-specific client instead, as below, the delay appears:

from raven.contrib.django.raven_compat.models import client
client.captureMessage("message")

It is usually a little over 2 minutes, so it looks like a timeout, but the operation still succeeds.

The delays are adding up and making my job queue unreliable.


1 Answer


If you're using the default Celery worker model, things should generally just work. If you're using something else, that may be less true.

By default the Python client uses a threaded worker: upon instantiation it creates a queue and a background thread to send messages asynchronously. Depending on how your worker processes are created this can cause problems (e.g. under a pre-fork model, where the background thread doesn't survive the fork), or if you're using something like gevent without patching threads.
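For instance, if your Celery workers run under gevent, the usual fix is to monkey-patch early, before raven is imported (a quick sketch, not specific to your setup):

from gevent import monkey

# Patch the standard library (including threading) before anything else is
# imported, so raven's background worker thread cooperates with gevent.
monkey.patch_all()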

You can try changing the transport to be synchronous to confirm this is related:

https://docs.getsentry.com/hosted/clients/python/transports/
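A rough sketch of what that looks like with the plain Python client (the transport keyword is part of raven's API; whether raven_compat forwards a transport entry from RAVEN_CONFIG unchanged is an assumption, so check the linked docs):

import raven
from raven.transport.http import HTTPTransport  # synchronous transport

# Send events on the calling thread instead of the background worker thread.
client = raven.Client("DSN", transport=HTTPTransport)
client.captureMessage("message")

# For the Django client the same option can likely be supplied via settings
# (assumption: raven_compat passes RAVEN_CONFIG entries through to Client):
RAVEN_CONFIG = {
    'dsn': 'DSN',
    'transport': HTTPTransport,
}

If the delay disappears with the synchronous transport, the problem is the threaded worker (for example its thread not surviving a pre-fork) rather than the network call itself.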
