
I have a local readthedocs install and get a ValueError exception when trying to import a project. I'm on release 5.1.0, running python 3.6 on Debian buster with celery 4.1.1 (from the requirements files).

From the debug.log:

[19/May/2020 23:31:11] celery.app.trace:124[24]: INFO Task readthedocs.projects.tasks.send_notifications[39551573-cfe1-46c1-b7e2-28bde20fd962] succeeded in 0.005342413205653429s: None
[19/May/2020 23:31:11] celery.app.trace:124[24]: INFO Task readthedocs.oauth.tasks.attach_webhook[119bed10-cacc-450c-bd51-822e96faffd7] succeeded in 0.016763793770223856s: False
[19/May/2020 23:31:11] celery.app.trace:249[24]: ERROR Task readthedocs.projects.tasks.update_docs_task[b6c22791-f1c6-4ddb-b64a-68d141580c30] raised unexpected: ValueError('signal only works in main thread',)
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 375, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/readthedocs.org/readthedocs/projects/tasks.py", line 448, in update_docs_task
    signal.signal(signal.SIGTERM, sigterm_received)
  File "/usr/local/lib/python3.6/signal.py", line 47, in signal
    handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread

I'm running readthedocs with manage.py runserver, so I tried the --noreload option, which had no effect, and the --nothreading option, which causes pages to hang forever.

luds

1 Answer


In order to get a local installation working you need to run a celery worker, which I wasn't doing before (and which isn't mentioned in the readthedocs docs). I'm using Docker Compose and added a separate service called celery that uses the same image as the main readthedocs service (a custom Docker image that installs Django and readthedocs).

celery -A readthedocs.worker worker -E -l info -Q celery,web
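
For reference, here's roughly what that extra Compose service looks like in my setup; the image name is a placeholder and should match whatever custom image your main readthedocs service uses:

celery:
  image: my-readthedocs   # placeholder: same custom image as the main readthedocs service
  command: celery -A readthedocs.worker worker -E -l info -Q celery,web
  depends_on:
    - redis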

Additionally, I have these settings in my Django config:

BROKER_URL = os.getenv('REDIS_URL')
CELERY_RESULT_BACKEND = os.getenv('REDIS_URL')
CELERY_ALWAYS_EAGER = False

I have a simple redis service in my compose config:

redis:
  image: redis

I then have REDIS_URL=redis://redis:6379/0 as an environment variable on my readthedocs and celery services.
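
Concretely, that's just an environment entry on both services in the compose file; the service and image names here are placeholders for my setup:

readthedocs:
  image: my-readthedocs   # placeholder for the custom image
  environment:
    - REDIS_URL=redis://redis:6379/0
celery:
  image: my-readthedocs
  environment:
    - REDIS_URL=redis://redis:6379/0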

Somewhat unrelated, but I also stopped using python manage.py runserver and replaced it with uwsgi for production.

uwsgi \
    --http :80 \
    --wsgi-file readthedocs/wsgi.py \
    --static-map /static=./static \
    --master --processes 4 --threads 2
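
If you're running under Docker Compose as well, the web service's command can simply be swapped over to uwsgi; a rough sketch, again with a placeholder image name:

readthedocs:
  image: my-readthedocs   # placeholder for the custom image
  command: >
    uwsgi --http :80
          --wsgi-file readthedocs/wsgi.py
          --static-map /static=./static
          --master --processes 4 --threads 2
  ports:
    - "80:80"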
luds
  • Thanks! I ran into this same problem. Your notes have been a great help! – Malcolm Oct 12 '20 at 20:35
  • I do have one probably stupid question. What/Where is the "compose config"? – Malcolm Oct 12 '20 at 20:50
  • See this file: https://github.com/readthedocs/common/blob/e74453b4334ed1541c63a015d4bf6f7901de882f/dockerfiles/docker-compose.yml A little confusing because it's in a submodule. – luds Oct 13 '20 at 02:02
  • I have also had to make two small changes to readthedocs/doc_builder/python_environments.py to be able to force my http_proxy settings into the pip install commands. I added code conditional on the existence of a settings.HTTP_PROXY value to insert the --proxy argument. Was there an existing mechanism that I was just missing? – Malcolm Oct 13 '20 at 19:30
  • Thanks! I managed to overcome this error by starting a redis server running on a different port number. Here is my configuration in the django settings file: BROKER_URL = 'redis://localhost:6300/0' CELERY_RESULT_BACKEND = 'redis://localhost:6300/0' CELERY_ALWAYS_EAGER = False – cie Feb 06 '21 at 15:25