
I have the following situation:

  • Web client: Using JavaScript socketio to listen for incoming messages (= JavaScript).
  • Web server: Using flask-socketio with eventlet to send data (= Python).

Everything works if the client sends a message to the server: the server receives the message. Example:

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, engineio_logger=True, async_mode="eventlet")

@socketio.on("mymsg")
def handle_event(message):
    print("received message: " + str(message))

Unfortunately, the other way around only works to some extent. I have a thread producing live data about 5 to 10 times a second which the web frontend should display, so it needs to be sent to the client.

First: It does not work at all if the thread producing the data tries to invoke socketio.emit() directly. The reason for that is unclear to me, but it seems plausible, as flask-socketio with eventlet follows a different async model, as the documentation says.
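For reference, the direct approach looked roughly like this (simplified; the counter stands in for my real data source):

import threading
import time

def produceAndEmit():
    # plain OS thread calling socketio.emit() directly -- this did not work
    counter = 0
    while True:
        counter += 1
        socketio.emit("status", {"counter": counter})
        time.sleep(0.2)  # roughly 5 messages per second

threading.Thread(target=produceAndEmit, daemon=True).start()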

Second: Decoupling classic threads from the async model of flask/eventlet works to some extent. I attempted to use an eventlet queue for that. All status data my thread produces is put into the queue like this:

statusQueue.put(statusMsg)

This works fine. Debug messages show that this happens all the time, adding item after item to the queue.
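For completeness, the queue and the producer thread are set up roughly like this (again simplified; the real thread reads live data instead of counting):

import threading
import time
from eventlet.queue import Queue

statusQueue = Queue()

def produceStatus():
    # same producer as above, but decoupled: it only fills the queue
    # and never touches socketio itself
    counter = 0
    while True:
        counter += 1
        statusQueue.put({"counter": counter})
        time.sleep(0.2)

threading.Thread(target=produceStatus, daemon=True).start()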

As the flask-socketio documentation advises, I use socketio.start_background_task() in order to get a running "thread" that is compatible with the async model socketio uses. So I am using this code:

def emitStatus():
    print("Beginning to emit ...")
    while True:
        msg = statusQueue.get()
        print("Sending status packet: " + str(msg))
        socketio.emit("status", msg, broadcast=True)
        statusQueue.task_done()
        print("Sending status packet done.")
print("Terminated.")

socketio.start_background_task(emitStatus)

The strange thing I'm asking you for help with is this: The first call to statusQueue.get() blocks, as expected, since the queue is initially empty. The first message is taken from the queue and sent via socketio. Debug messages at the client show that the web client receives this message, and debug messages at the server show that the message is sent successfully. But: as soon as statusQueue.get() is invoked again, the call blocks indefinitely, regardless of how many messages get put into the queue.

I'm not sure if this helps, but here is some additional information: The socketio communication itself is perfectly intact. If the client sends data, everything works. Additionally, I can see the ping-pongs both client and server exchange to keep the connection alive.

My question is: How can I properly implement a server that is capable of sending messages to the client asynchronously?

Have a look at https://github.com/jkpubsrc/experiment-python-flask-socketio for a minimalistic code example featuring the Python-Flask server process and a JQuery based JavaScript client.

(FYI: As these are status messages, not every message necessarily needs to arrive. But I would very much like to receive at least some messages, not just the very first message and then nothing.)

Thank you for your responses.

Regis May
  • Please construct a minimal reproduction case on gist to run whole thing with one command. – temoto Aug 26 '18 at 21:23
  • Let me get a little bit of sleep and I'll provide a minimal version in roughly 8 hours. – Regis May Aug 26 '18 at 23:04
  • You're an optimist :-) A client-server application spanning two different programming languages cannot be implemented in a single file. Therefore I created a repository on GitHub. If you want to take a look into the details, I'd be very happy if you could assist in getting it to work. – Regis May Aug 27 '18 at 19:08
  • Gist is a also git repository and allows multiple files. Anyway, how to find it? – temoto Aug 29 '18 at 09:50
  • Yes, but it's simpler to have separate directories: The wwwroot with various files that form the web client, and additionally python file(s) that implement the server side. Please have a look into my question - I added a reference to the Github repository. I don't know why this has previously been removed from my comment. I intend to update the specified repo and provide the working example under a MIT license for everybody to use. – Regis May Sep 01 '18 at 18:50

1 Answer


I left two solutions, as pull requests, that make the code work.

Basically, the answer is: you pick one technology and stick with it for the whole process:

  • Going async_mode=threading? Great, use stdlib Queue. Don't import eventlet unless you have to.
  • Going async_mode=eventlet? Also great: use the eventlet Queue, and don't forget that stdlib time.sleep or socket I/O will block everything else; fix that with eventlet.monkey_patch(). A minimal sketch of this route follows below the list.
  • If you must use both eventlet and threading, the best approach is to let them live in separate OS processes and communicate via a local socket. It's extra work, but it is very robust, and you know how it works and why it will not break.
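To make the eventlet route concrete, here is a minimal sketch of what I mean. The event name, the payload and the port are placeholders, not taken from your repository:

import eventlet
eventlet.monkey_patch()  # must run before anything else touches sockets, threads or time

import time
from eventlet.queue import Queue
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, async_mode="eventlet")
statusQueue = Queue()

def produce_status():
    # After monkey_patch(), time.sleep yields to the eventlet hub, so this
    # producer cooperates with the server instead of starving it.
    n = 0
    while True:
        n += 1
        statusQueue.put({"n": n})
        time.sleep(0.2)

def emit_status():
    # get() on an eventlet Queue suspends only this green thread; the hub
    # wakes it up as soon as the producer puts the next item.
    while True:
        msg = statusQueue.get()
        socketio.emit("status", msg)  # server-level emit broadcasts to all clients

if __name__ == "__main__":
    socketio.start_background_task(produce_status)
    socketio.start_background_task(emit_status)
    socketio.run(app, host="127.0.0.1", port=5000)

The async_mode=threading route looks the same minus eventlet: drop monkey_patch(), use queue.Queue from the stdlib, and don't import eventlet at all. The important point is that the producer and the emitter live in the same world, so get() is actually woken up when put() happens.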

With good knowledge of both eventlet and native threads you can carefully mix them into working code. As of 2018-09, mixing doesn't work in a friendly, obvious way, as you already found. Sorry. Patches are welcome.

temoto
  • I tried `async_mode=threading` with the program fragment being part of my larger program. This did not work at all. With `async_mode=eventlet` I did not succeed either, as monkey-patching failed: after monkey-patching, my application no longer worked at all. :-( – Regis May Sep 03 '18 at 16:17
  • The OS processes approach is something I considered already but dismissed, as I already read data from and write data to another background process. I'd end up with `A <=> B <=> Events/Threads <=> FlaskWebServer`. That would be very ugly. Trying to get eventlet to run seems the most promising approach, but I can't get this asynchronous I/O to work. Taking your advice into consideration, I'll try again. – Regis May Sep 03 '18 at 16:21
  • With more details on how it failed, maybe I could help. I know nothing about socketio, but eventlet has a fair websocket server; there may be a route to make socketio use that without an explicit background worker. Multiple processes are not that bad really, if deploy/ops is sane and everybody understands the role of each process. Postfix and PostgreSQL use this model, if you're into the big-name game. – temoto Sep 04 '18 at 03:00