I have a webapp written with django-channels + celery that uses websockets for client-server communication. After testing it with daphne, the celery worker and redis all running on my host machine, I decided to encapsulate everything with docker-compose to have a deployable system.
This is where the problems started. I managed to get it working after learning, tweaking and debugging my docker-compose.yaml, but I still can't get websockets to work again.
If I open a websocket and send a command, whether from the javascript part of the app or from the javascript console in chrome, it never triggers the ws_connect nor the ws_receive consumers.
This is my setup:
settings.py
# channels settings
REDIS_HOST = os.environ['REDIS_HOST']
REDIS_URL = "redis://{}:6379".format(REDIS_HOST)

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            # "hosts": [os.environ.get('REDIS_HOST', 'redis://localhost:6379')],
            "hosts": [REDIS_URL],
        },
        "ROUTING": "TMWA.routing.channel_routing",
    },
}
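Since the channel layer host is assembled from an environment variable, one quick sanity check (a standalone sketch, not part of the project) is to reproduce that logic inside each container and print what the layer will actually try to connect to:

```python
import os

# Rebuild the channel-layer URL exactly as settings.py does, with a
# fallback so the script also runs outside the containers.
redis_host = os.environ.get('REDIS_HOST', 'redis')
redis_url = "redis://{}:6379".format(redis_host)
print(redis_url)
```

Running this with `docker-compose exec` in both the daphne and the worker container should print the same URL; if it doesn't, the two processes are not sharing a channel layer.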
routing.py
channel_routing = {
    'websocket.connect': consumers.ws_connect,
    'websocket.receive': consumers.ws_receive,
    'websocket.disconnect': consumers.ws_disconnect,
}
consumers.py
@channel_session
def ws_connect(message):
    print "in ws_connect"
    print message['path']
    prefix, label, sessionId = message['path'].strip('/').split('/')
    print prefix, label, sessionId
    message.channel_session['sessionId'] = sessionId
    message.reply_channel.send({"accept": True})
    connMgr.AddNewConnection(sessionId, message.reply_channel)

@channel_session
def ws_receive(message):
    print "in ws_receive"
    jReq = message['text']
    print jReq
    task = ltmon.getJSON.delay( jReq )
    connMgr.UpdateConnection(message.channel_session['sessionId'], task.id)

@channel_session
def ws_disconnect(message):
    print "in ws_disconnect"
    connMgr.CloseConnection(message.channel_session['sessionId'])
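One detail worth noting about ws_connect: `message['path'].strip('/').split('/')` raises ValueError on any path that does not have exactly three segments, and an exception in a consumer looks like silence from the browser's side. A defensive variant of the parsing (a sketch with a hypothetical helper name and example values) would be:

```python
def parse_ws_path(path):
    """Split '/prefix/label/sessionId/' into its three parts.

    Returns None instead of raising when the path has a different
    shape, so the consumer can reject the connection explicitly.
    """
    parts = path.strip('/').split('/')
    if len(parts) != 3:
        return None
    return parts

# Example values, matching the shape ws_connect expects:
print(parse_ws_path('/monitor/run42/abc123/'))   # ['monitor', 'run42', 'abc123']
print(parse_ws_path('/unexpected/'))             # None
```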
docker-compose.yaml
version: '3'

services:
  daphne:
    build: ./app
    image: "tmwa:latest"
    # working_dir: /opt/TMWA
    command: bash -c "./start_server.sh"
    ports:
      - "8000:8000"
    environment:
      - REDIS_HOST=redis
      - RABBIT_HOST=rabbit
      - DB_NAME=postgres
      - DB_USER=postgres
      - DB_SERVICE=postgres
      - DB_PORT=5432
      - DB_PASS=''
    networks:
      - front
      - back
    depends_on:
      - redis
      - postgres
      - rabbitmq
    links:
      - redis:redis
      - postgres:postgres
      - rabbitmq:rabbit
    volumes:
      - ./app:/opt/myproject
      - static:/opt/myproject/static
      - /Volumes/AMS_Disk/TrackerMonitoring/Data/:/Data/CalFiles
  worker:
    image: "tmwa:latest"
    # working_dir: /opt/myproject
    command: bash -c "./start_worker.sh"
    environment:
      - REDIS_HOST=redis
      - RABBIT_HOST=rabbit
      - DB_NAME=postgres
      - DB_USER=postgres
      - DB_SERVICE=postgres
      - DB_PORT=5432
      - DB_PASS=''
    networks:
      - front
      - back
    depends_on:
      - redis
      - postgres
      - rabbitmq
    links:
      - redis:redis
      - postgres:postgres
      - rabbitmq:rabbit
    volumes:
      - ./app:/opt/myproject
      - /Volumes/AMS_Disk/TrackerMonitoring/Data/:/Data/CalFiles
  postgres:
    restart: always
    image: postgres:latest
    networks:
      - back
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
  redis:
    restart: always
    image: redis
    networks:
      - back
    ports:
      - "6379:6379"
    volumes:
      - redis:/data
  rabbitmq:
    image: tutum/rabbitmq
    environment:
      - RABBITMQ_PASS=password
    networks:
      - back
    ports:
      - "5672:5672"
      - "15672:15672"

networks:
  front:
  back:

volumes:
  pgdata:
    driver: local
  redis:
    driver: local
  app:
    driver: local
  static:
I run the server with
daphne -b 0.0.0.0 -p 8000 TMWA.asgi:channel_layer
and the worker with
python manage.py runworker
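Since daphne and the worker only talk to each other through Redis, the first thing I can verify is that both containers actually reach the redis service. A minimal stdlib-only probe (an illustrative sketch, assuming the service name and port from my docker-compose.yaml) that could be run via `docker-compose exec` in each container:

```python
import socket

def can_reach(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 'redis' and 6379 match the service name and port in docker-compose.yaml
print(can_reach('redis', 6379))
```

If this prints False in either container, the problem is docker networking rather than channels; if it prints True in both, the next suspect is the channel layer configuration itself.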
I removed nginx from the equation, so I run the worker and daphne in separate containers. I would like all websocket connections to be managed by the daphne container, which then dispatches the computing tasks to the worker. The problem is that when I open a websocket and send data, nothing happens:
docker-compose up
redis_1 | 1:C 11 Oct 15:25:22.012 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 11 Oct 15:25:22.012 # Redis version=4.0.2, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 11 Oct 15:25:22.012 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 11 Oct 15:25:22.013 * Running mode=standalone, port=6379.
redis_1 | 1:M 11 Oct 15:25:22.013 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 11 Oct 15:25:22.013 # Server initialized
redis_1 | 1:M 11 Oct 15:25:22.014 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1 | 1:M 11 Oct 15:25:22.014 * DB loaded from disk: 0.000 seconds
redis_1 | 1:M 11 Oct 15:25:22.014 * Ready to accept connections
rabbitmq_1 | => Securing RabbitMQ with a preset password
postgres_1 | LOG: database system was interrupted; last known up at 2017-10-11 15:09:33 UTC
rabbitmq_1 | => Done!
postgres_1 | LOG: database system was not properly shut down; automatic recovery in progress
rabbitmq_1 | ========================================================================
rabbitmq_1 | You can now connect to this RabbitMQ server using, for example:
postgres_1 | LOG: invalid record length at 0/249A378: wanted 24, got 0
rabbitmq_1 |
postgres_1 | LOG: redo is not required
rabbitmq_1 | curl --user admin:<RABBITMQ_PASS> http://<host>:<port>/api/vhosts
rabbitmq_1 |
postgres_1 | LOG: MultiXact member wraparound protections are now enabled
rabbitmq_1 | ========================================================================
postgres_1 | LOG: database system is ready to accept connections
rabbitmq_1 |
postgres_1 | LOG: autovacuum launcher started
rabbitmq_1 | RabbitMQ 3.6.1. Copyright (C) 2007-2016 Pivotal Software, Inc.
rabbitmq_1 | ## ## Licensed under the MPL. See http://www.rabbitmq.com/
rabbitmq_1 | ## ##
rabbitmq_1 | ########## Logs: /var/log/rabbitmq/rabbit@a22d1ccdf39e.log
rabbitmq_1 | ###### ## /var/log/rabbitmq/rabbit@a22d1ccdf39e-sasl.log
rabbitmq_1 | ##########
daphne_1 | System check identified some issues:
daphne_1 |
daphne_1 | WARNINGS:
daphne_1 | ?: (1_7.W001) MIDDLEWARE_CLASSES is not set.
daphne_1 | HINT: Django 1.7 changed the global defaults for the MIDDLEWARE_CLASSES. django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.messages.middleware.MessageMiddleware were removed from the defaults. If your project needs these middleware then you should configure this setting.
daphne_1 | Operations to perform:
daphne_1 | Synchronize unmigrated apps: staticfiles, channels, messages
daphne_1 | Apply all migrations: admin, TkMonitor, contenttypes, auth, sessions
daphne_1 | Synchronizing apps without migrations:
daphne_1 | Creating tables...
daphne_1 | Running deferred SQL...
daphne_1 | Installing custom SQL...
daphne_1 | Running migrations:
daphne_1 | No migrations to apply.
worker_1 | System check identified some issues:
worker_1 |
worker_1 | WARNINGS:
worker_1 | ?: (1_7.W001) MIDDLEWARE_CLASSES is not set.
worker_1 | HINT: Django 1.7 changed the global defaults for the MIDDLEWARE_CLASSES. django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.messages.middleware.MessageMiddleware were removed from the defaults. If your project needs these middleware then you should configure this setting.
worker_1 | 2017-10-11 15:25:30,482 - INFO - runworker - Using single-threaded worker.
worker_1 | 2017-10-11 15:25:30,483 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
worker_1 | 2017-10-11 15:25:30,483 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
daphne_1 | System check identified some issues:
daphne_1 |
daphne_1 | WARNINGS:
daphne_1 | ?: (1_7.W001) MIDDLEWARE_CLASSES is not set.
daphne_1 | HINT: Django 1.7 changed the global defaults for the MIDDLEWARE_CLASSES. django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.messages.middleware.MessageMiddleware were removed from the defaults. If your project needs these middleware then you should configure this setting.
rabbitmq_1 | Starting broker... completed with 6 plugins.
daphne_1 | DEBUG: Init FileManager with path /Data/CalFiles
daphne_1 | DEBUG: Found 72026 files
daphne_1 | DEBUG: 72026 entries in the DB
daphne_1 | DEBUG: DB updated.
daphne_1 | 72026 entries in the DB
daphne_1 | Last file: /Data/CalFiles/CalTree_1500299703.root
daphne_1 |
daphne_1 | 0 static files copied to '/opt/myproject/static', 89 unmodified.
daphne_1 | 2017-10-11 15:25:42,068 INFO Starting server at tcp:port=8000:interface=0.0.0.0, channel layer TMWA.asgi:channel_layer.
daphne_1 | 2017-10-11 15:25:42,070 INFO HTTP/2 support enabled
daphne_1 | 2017-10-11 15:25:42,070 INFO Using busy-loop synchronous mode on channel layer
daphne_1 | 2017-10-11 15:25:42,071 INFO Listening on endpoint tcp:port=8000:interface=0.0.0.0
and after this, radio silence. I had output printed when running everything on the host machine, but now I get nothing. Any idea where the problem could be?