
Thanks for reviewing. There are several threads on this topic, but none led me to a solution. I don't think this is a duplicate, as the question was not actually answered on many of the other threads on the subject.

I've decided to write another, more up-to-date question with a working example.

Issue: setting the HWM does not seem to have the expected effect. Am I missing some socket configuration?

Expected: the pusher blocks from sending more messages once the puller is 1500 messages behind.

Actual: the pusher only seems to block after ~70K messages.

Running the code below, I see the PUSH socket send many, many messages before halting:
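A rough back-of-envelope estimate may explain the gap: messages can queue not only in the PUSH socket's HWM, but also in the PULL socket's HWM and in the kernel's TCP send/receive buffers on both ends. All the numbers below are assumptions for illustration (Linux TCP buffers auto-tune and can grow to several MiB), not measured values:

```python
# Where can messages queue between PUSH and PULL before the sender blocks?
# Every constant here is an assumed, illustrative value.

SNDHWM = 1500                 # messages queued inside the PUSH socket
RCVHWM = 1000                 # libzmq's default RCVHWM on the PULL side
avg_msg_bytes = 40            # assumed size of one "Message [i::ts]" string
kernel_sndbuf = 1024 * 1024   # assumed auto-tuned SO_SNDBUF (varies by OS)
kernel_rcvbuf = 1024 * 1024   # assumed auto-tuned SO_RCVBUF (varies by OS)

# Messages that fit in the two kernel buffers combined
in_kernel = (kernel_sndbuf + kernel_rcvbuf) // avg_msg_bytes
total = SNDHWM + RCVHWM + in_kernel
print(f"~{total} messages can be in flight before the sender blocks")
```

With small messages and megabyte-scale kernel buffers, tens of thousands of messages fit in flight, which is consistent with the sender blocking only around 70K rather than at 1500.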

...
Message [72747::1678881796.1216621]
Message [72748::1678881796.1216683]
Message [72749::1678881796.1216772]
Message [72750::1678881796.121687]
Message [72751::1678881796.1216931]
Socket is blocked! Waiting for the worker to consume messages...
Worker is slow! Waiting...
Worker is slow! Waiting...

Below is a working example.

push

import time
import zmq

context = zmq.Context()

# Set up a PUSH socket with a send high water mark of 1500 messages
socket = context.socket(zmq.PUSH)
socket.setsockopt(zmq.SNDHWM, 1500)

socket.bind("tcp://127.0.0.1:5566")


# Send messages to the PUSH socket
for i in range(200000):
    try:
        # Build the message once so the printed text matches what is sent
        message = f"Message [{i}::{time.time()}]"
        print(message)
        socket.send_string(message, zmq.DONTWAIT)
    except zmq.error.Again:
        # Handle the situation where the socket is blocked due to the high water mark being reached
        print("Socket is blocked! Waiting for the worker to consume messages...")
        while True:
            # Check whether the socket is ready for sending
            # We use poll with a timeout of 5000 ms to avoid busy-waiting
            if socket.poll(timeout=5000, flags=zmq.POLLOUT):
                # If the socket is ready for sending, break out of the loop and try again
                break
            else:
                # If the socket is not ready for sending, wait for the worker to consume messages
                print("Worker is slow! Waiting...")
                time.sleep(1)

# Clean up
socket.close()
context.term()

pull

import time
import zmq

context = zmq.Context()

receiver = context.socket(zmq.PULL)
receiver.setsockopt(zmq.RCVHWM, 1)
receiver.connect("tcp://127.0.0.1:5566")

def worker():
    i = 0
    while True:
        try:
            message = receiver.recv_string(zmq.NOBLOCK)
            print(f"Received message: [{i}:{time.time()}]{message}")
            time.sleep(500e-3)
            i+=1
        except zmq.Again:
            print("no messages")
            time.sleep(100e-3)


worker()

1 Answer


I'm going to attempt to answer my own question.

As noted here, ZeroMQ buffer size v/s High Water Mark:

Neither HWM nor BUF alone provides back pressure, but setting both of them on both sides seems to be the right way to go (though I can't say I'm sure how to configure them correctly for my needs).

HWM is ZMQ's way to limit how many messages queue inside ZeroMQ before reaching the underlying TCP socket, i.e. the kernel socket.

BUF is ZMQ's way to limit how much data (in bytes) the kernel buffers.

Since each ZMQ socket apparently involves two underlying buffers (ZeroMQ's own queue and the kernel's), we have to configure both sides somehow to enable back pressure.

The code below shows this.

PUSH

import zmq
import time

context = zmq.Context()
sender = context.socket(zmq.PUSH)
sender.setsockopt(zmq.SNDHWM, 2)  # set high water mark to 2
sender.setsockopt(zmq.SNDBUF, 20)
sender.connect("tcp://localhost:5559")

for i in range(1000):
    message = f"Message {i}"
    sender.send_string(message)
    print(f"Sent: {message}")
#    time.sleep(100e-3)

PULL

import zmq
import time

context = zmq.Context()
receiver = context.socket(zmq.PULL)
receiver.setsockopt(zmq.RCVHWM, 2)  # set high water mark to 2
receiver.setsockopt(zmq.RCVBUF, 20)
receiver.bind("tcp://*:5559")

while True:
    message = receiver.recv()
    print(f"Received: {message}")
    time.sleep(100e-3)

Result: when running both in parallel, we see that the PUSH side blocks due to the heavy load on the PULL side.

Note: I still need to understand how to size these correctly, since HWM is measured in messages while BUF is measured in bytes.
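One way to bridge the unit mismatch is to derive the byte buffer from the message HWM. The helper below is a hypothetical sketch, not an API from pyzmq: `expected_msg_bytes` and the per-message `overhead` are assumptions you would have to estimate for your own workload.

```python
# Hypothetical sizing helper: translate a message-based HWM into a
# byte-based SNDBUF/RCVBUF value. The overhead constant is an assumed
# allowance for framing, not an exact figure from the ZeroMQ docs.

def buf_bytes_for_hwm(hwm_msgs, expected_msg_bytes, overhead=64):
    """Bytes needed to hold roughly hwm_msgs messages of the given size."""
    return hwm_msgs * (expected_msg_bytes + overhead)

# e.g. an HWM of 2 messages of ~10 bytes each
print(buf_bytes_for_hwm(2, 10))  # → 148
```

The result would then be passed to `setsockopt(zmq.SNDBUF, ...)` on the pusher and `setsockopt(zmq.RCVBUF, ...)` on the puller, so that the kernel buffer caps out at about the same point as the HWM.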
