
I have a socketio setup that uses an eventlet server. My program receives logs from multiple machines and writes them to a database. There is an event called "new_log" that fires whenever a new log arrives over the websocket. Since a database insert takes longer than the interval between new logs, without any queueing system the logs accumulate on the client side, and once the client queue is filled to its max I no longer receive any new logs. That is the reason I decided to use RabbitMQ.
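
For reference, each machine runs a simple client that pushes every new log over the socket, roughly like this (a simplified sketch, not my actual code; read_next_log is a placeholder for however the machine produces log entries):

import socketio

sio = socketio.Client()
sio.connect("http://localhost:1234")

# read_next_log() is a placeholder for the real log source on each machine.
while True:
    log = read_next_log()
    sio.emit("new_log", log)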

But since DB insertions still take longer, a RabbitMQ setup with a single consumer doesn't really solve the problem: the queue just moves to the server side and still grows without bound. So I wanted to launch a new consumer thread for each log. I found the following multi-threaded example in Pika's repo:

https://github.com/pika/pika/blob/0.13.1/examples/basic_consumer_threaded.py

and modified it a bit to use it like this:

main.py

import socketio
import os
import threading
import json
import pika
import functools
import config as cfg
from util.rabbitmq import consumer_threaded

sio = socketio.Server(async_mode="eventlet", namespaces='*', cors_allowed_origins=['*'])
app = socketio.WSGIApp(sio)

credentials = pika.PlainCredentials('guest', 'guest')
parameters = pika.ConnectionParameters('localhost', credentials=credentials, heartbeat=100)
connection = pika.BlockingConnection(parameters)

channel = connection.channel()
channel.exchange_declare(exchange="test_exchange", exchange_type="direct", passive=False, durable=True, auto_delete=False)
channel.queue_declare(queue="standard", durable=True)
channel.queue_bind(queue="standard", exchange="test_exchange", routing_key="standard_key")
channel.basic_qos(prefetch_count=1)

@sio.on("new_log")
def client_activity(pid, data):
    # Publish each incoming log to RabbitMQ instead of inserting into the
    # database directly inside the event handler.
    channel.basic_publish(
        exchange='test_exchange',
        routing_key='standard_key',
        body=json.dumps(data),
        properties=pika.BasicProperties(
            delivery_mode=pika.spec.PERSISTENT_DELIVERY_MODE
        ))
    return "OK"

@sio.event
def connect(sid, environ, auth):
    print(f"[NEW CONNECTION] {sid}", flush=True)

@sio.event
def disconnect(sid):
    sio.disconnect(sid)
    print(f"[DISCONNECTED] {sid}", flush=True)

def start_consumer():
    on_message_callback = functools.partial(consumer_threaded.on_message, args=(connection, channel))
    channel.basic_consume('standard', on_message_callback)
    print("Started consuming", flush=True)
    # Blocks; this thread stays inside the consume loop.
    channel.start_consuming()

if __name__ == "__main__":
    consumer_thread = threading.Thread(target=start_consumer)
    consumer_thread.start()

    import eventlet
    eventlet.monkey_patch()
    eventlet.wsgi.server(eventlet.listen(("", 1234)), app)

consumer_threaded.py

import functools
import logging
import threading
import json
from util.logger import save_log

LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) '
              '-35s %(lineno) -5d: %(message)s')
LOGGER = logging.getLogger(__name__)

logging.basicConfig(level=logging.INFO, format=LOG_FORMAT)

def ack_message(channel, delivery_tag):
    # Acks have to be issued from the connection's own thread, so this is
    # scheduled via add_callback_threadsafe rather than called directly.
    if channel.is_open:
        channel.basic_ack(delivery_tag)

def do_work(connection, channel, delivery_tag, body):
    thread_id = threading.get_ident()
    fmt1 = 'Thread id: {} Total threads: {} Delivery tag: {} Message body: {}'
    LOGGER.info(fmt1.format(thread_id, threading.active_count(), delivery_tag, body))
    # The slow, blocking database insert happens here, off the consumer thread.
    save_log.save_log(json.loads(body.decode()))
    cb = functools.partial(ack_message, channel, delivery_tag)
    connection.add_callback_threadsafe(cb)

def on_message(channel, method_frame, header_frame, body, args):
    (connection, channel) = args
    delivery_tag = method_frame.delivery_tag
    # Spawn a worker thread per message and wait for it to finish.
    t = threading.Thread(target=do_work, args=(connection, channel, delivery_tag, body))
    t.start()
    t.join()

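save_log (in util/logger) is not shown above; it just performs a blocking database insert, roughly along these lines (a sketch only; sqlite3 is a stand-in here, the real driver and table don't matter for the question):

# util/logger/save_log.py (sketch; the real module just does a blocking insert)
import json
import sqlite3  # stand-in for the actual database driver

def save_log(log):
    # Synchronous insert; each call takes longer than the interval between
    # incoming logs, which is why a backlog builds up without a queue.
    conn = sqlite3.connect("logs.db")
    with conn:
        conn.execute("CREATE TABLE IF NOT EXISTS logs (body TEXT)")
        conn.execute("INSERT INTO logs (body) VALUES (?)", (json.dumps(log),))
    conn.close()
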
This seems to work for a while, but then I get the following error:

AssertionError: ('_AsyncTransportBase._produce() tx buffer size underflow', -44, 1)

How can I achieve what I described without getting this error?
