I am using Redis Streams and have a stream that will append an END sentinel message when it's done. Until then, I want to mimic tail -f: when I begin a read, I want to see all previous log entries up to the current time, and then keep getting new updates from the stream indefinitely.

I realize I can probably replicate this by polling from the ID of the last fetched entry, but I was wondering if there is a natively supported way to do this. I could not find one in the docs, which makes me wonder whether this is a bad idea for some reason.

I have tried calling XREAD with a very high BLOCK timeout, but that did not work: it returns as soon as there are new results.
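
For reference, this is roughly what I tried (my_stream stands in for the real stream name):

import redis

r = redis.Redis()
# BLOCK is set very high (one hour), but xread still returns as soon as
# the first new entry arrives instead of streaming entries indefinitely
resp = r.xread({'my_stream': '$'}, block=3600000)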

1 Answer

See https://redis-doc-test.readthedocs.io/en/latest/commands/xread/

You are looking for the $ special ID, which refers to the last entry currently in the stream (as opposed to +, the highest possible ID, used with commands like XRANGE). After the first read, you use the ID of the last retrieved entry as the start ID for the next read.

Here's an example that uses asyncio:

import asyncio
import redis.asyncio as redis

async def get_stream_event():
    last_id = '$'                   # '$' = only entries added after this point;
                                    # use '0' instead to replay the stream's history first
    redisConn = await redis.Redis() # customize to your needs
    sleep_ms = 100

    while True:
        # Loop exit criteria is application specific, e.g. break once the
        # END sentinel message shows up in the payload

        # Read the stream
        try:
            stream_key = 'my_stream'
            resp = await redisConn.xread({stream_key: last_id},
                                         count=1,
                                         block=sleep_ms)
            if resp:
                key, messages = resp[0]
                last_id, payload = messages[0]
                yield payload  # payload processing is left as an exercise for the reader
            else:
                pass           # no new entries within the block timeout; debug handling can go here

        except redis.ConnectionError as e:
            print("ERROR: REDIS CONNECTION: {}".format(e))
            await asyncio.sleep(1)  # back off before retrying so a dead server isn't hammered

    await redisConn.close()
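
Here's what consuming the generator could look like. Note that field names and values come back as bytes by default, and the exact shape of the END sentinel is application specific; this sketch assumes a field named type whose value is END:

import asyncio

async def main():
    async for payload in get_stream_event():
        # stop once the application's END sentinel shows up
        if payload.get(b'type') == b'END':
            break
        print(payload)

asyncio.run(main())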