I have a problem where I need to prioritize some events to be processed earlier than others. The events come from a single source, and I need to route them, based on their event type's priority, to either a high-priority or a low-priority sink. I'm using Kafka and Akka (Alpakka) Kafka streams. The main problem is that I get a lot of traffic at certain points in time. What would be the preferred approach here?
-
Is the processing in the high-priority path the same as the low-priority path, it's just you want a high-priority message to "cut ahead" of the low-priority messages in the stream? – Levi Ramsey Sep 22 '21 at 23:00
-
Also: are you planning to commit offsets, and if so, are you expecting at-most-once or at-least-once delivery? – Levi Ramsey Sep 22 '21 at 23:01
-
@LeviRamsey sorry about the late reply. After I get the event/message from the source, I need to evaluate it and, based on its content, route it, i.e. produce a message to a topic chosen from the content of the evaluated message. Yes, I need the high-priority messages to be processed first; only when there are no more high-priority messages (or we don't receive any) should we start processing the lower-priority ones. And yes, I plan to commit offsets, since I need to know that a message has been processed successfully. – rollercoaster Sep 27 '21 at 13:59
-
To confirm, you want at-most-once processing of every message, or just the high-priority ones? – Levi Ramsey Sep 27 '21 at 19:12
-
Please take a look at this one, it may be what you are looking for: https://stackoverflow.com/a/66013251/4602706 – Marco Vargas Feb 04 '22 at 20:28
1 Answer
The first thing to tackle is the offset commit. Because processing will not happen in order, committing offsets after processing can guarantee neither at-least-once nor at-most-once delivery, because the following sequence is possible (and its probability cannot be reduced to zero):
- The offset of a high-priority message is committed after it's processed, while multiple earlier low-priority messages remain unprocessed
- Stream fails (or instance running the stream is stopped, or whatever)
- Stream restarts from last committed offset
- The low-priority messages are never read from Kafka again, so never get processed
This suggests that either the offset commit has to happen before the reordering, or we need a notion of processed-but-not-yet-committable that holds until the low-priority messages have been processed. For the latter option, note that tracking the greatest uncommitted offset (the simplest strategy that could possibly work) only works if nothing can create gaps in the offset sequence, which in turn implies infinite retention and no compaction. I'd therefore suggest committing the offsets before processing, but only once the processing logic has guaranteed that it will eventually process the message.
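As a concrete illustration, here's a minimal Alpakka Kafka ingest sketch of that idea: the offset is committed only after an actor has acknowledged (by completing an ask) that it has durably committed to eventually processing the message. `PriorityQueueActor`, `Enqueue`, and `Ack` refer to the hypothetical persistent actor sketched below; the bootstrap servers, group id, and topic name are placeholders:

```scala
import akka.actor.typed.{ActorRef, ActorSystem}
import akka.actor.typed.scaladsl.AskPattern._
import akka.kafka.scaladsl.{Committer, Consumer}
import akka.kafka.{CommitterSettings, ConsumerSettings, Subscriptions}
import akka.util.Timeout
import org.apache.kafka.common.serialization.StringDeserializer

import scala.concurrent.duration._

def runIngest(queueActor: ActorRef[PriorityQueueActor.Command])(
    implicit system: ActorSystem[_]): Unit = {
  import system.executionContext
  implicit val timeout: Timeout = 5.seconds

  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092") // placeholder
      .withGroupId("prioritized-processor")   // placeholder

  Consumer
    .committableSource(consumerSettings, Subscriptions.topics("events")) // topic is a placeholder
    .mapAsync(parallelism = 1) { msg =>
      // The ask completes only once the actor has persisted its commitment
      // to eventually process the message, so the offset is then safe to commit.
      queueActor
        .ask[PriorityQueueActor.Ack.type](replyTo =>
          PriorityQueueActor.Enqueue(msg.record.key, msg.record.value, replyTo))
        .map(_ => msg.committableOffset)
    }
    .runWith(Committer.sink(CommitterSettings(system)))
}
```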
A combination of actors and Akka Persistence allows this approach to be taken. The rough outline is a persistent actor (this is a good fit for event sourcing) which maintains lists of high-priority and low-priority messages to process:

- The stream sends an "ask" with the message from Kafka to the actor.
- On receipt, assuming the message hasn't already been processed, the actor classifies it as high- or low-priority and persists the message (and perhaps its classification) as an event. By persisting the event it commits to eventually processing the message, so it acknowledges receipt and schedules a command to itself to fully process one "to-process" message. The acknowledgement completes the ask, allowing the offset to be committed to Kafka.
- On receipt of that command, the actor chooses which Kafka message to process (by priority, age, etc.), persists an event recording that it has processed that message (thus moving it from "to-process" to "processed"), and potentially also persists an event updating state relevant to how it interprets Kafka messages. After this persistence, the actor sends itself another command to process a "to-process" message.
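Here's a rough sketch of such an actor using Akka Persistence Typed. Everything in it (the protocol names, the classification rule, the `process` side effect) is illustrative, not a drop-in implementation:

```scala
import akka.actor.typed.scaladsl.Behaviors
import akka.actor.typed.{ActorRef, Behavior}
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

object PriorityQueueActor {
  sealed trait Command
  final case class Enqueue(id: String, payload: String, replyTo: ActorRef[Ack.type]) extends Command
  case object ProcessNext extends Command
  case object Ack

  sealed trait Event
  final case class Enqueued(id: String, payload: String, high: Boolean) extends Event
  final case class Processed(id: String) extends Event

  final case class Msg(id: String, payload: String)
  final case class State(high: Vector[Msg], low: Vector[Msg], processed: Set[String]) {
    def next: Option[Msg] = high.headOption.orElse(low.headOption) // high priority first
  }

  // Classification is application-specific; this is a placeholder.
  private def isHighPriority(payload: String): Boolean = payload.startsWith("HIGH")

  // The actual processing (producing to a downstream topic, etc.) goes here.
  // It runs before the Processed event is persisted, so it's at-least-once:
  // a crash between the two persists can cause it to run again, which fits
  // the commit-offsets-before-processing semantics above. It should be idempotent.
  private def process(msg: Msg): Unit = ()

  def apply(entityId: String): Behavior[Command] = Behaviors.setup { context =>
    EventSourcedBehavior[Command, Event, State](
      persistenceId = PersistenceId.ofUniqueId(entityId),
      emptyState = State(Vector.empty, Vector.empty, Set.empty),
      commandHandler = (state, command) =>
        command match {
          case Enqueue(id, _, replyTo) if state.processed.contains(id) =>
            // Duplicate delivery: just acknowledge so the offset commits.
            Effect.reply(replyTo)(Ack)
          case Enqueue(id, payload, replyTo) =>
            Effect
              .persist(Enqueued(id, payload, isHighPriority(payload)))
              .thenRun((_: State) => context.self ! ProcessNext)
              .thenReply(replyTo)(_ => Ack)
          case ProcessNext =>
            state.next match {
              case Some(msg) =>
                process(msg)
                Effect
                  .persist(Processed(msg.id))
                  .thenRun((s: State) => if (s.next.nonEmpty) context.self ! ProcessNext)
              case None =>
                Effect.none
            }
        },
      eventHandler = (state, event) =>
        event match {
          case Enqueued(id, payload, true)  => state.copy(high = state.high :+ Msg(id, payload))
          case Enqueued(id, payload, false) => state.copy(low = state.low :+ Msg(id, payload))
          case Processed(id) =>
            state.copy(
              high = state.high.filterNot(_.id == id),
              low = state.low.filterNot(_.id == id),
              processed = state.processed + id)
        }
    )
  }
}
```

Note that `processed` grows without bound in this sketch; in practice you'd bound the dedupe window (e.g. by tracking offsets per partition) or snapshot and trim it.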
Fault tolerance is then achieved by having a background process periodically ping this actor with the "process a to-process message" command.
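A minimal sketch of that ping, assuming the `PriorityQueueActor` above and an arbitrary 10-second interval:

```scala
import akka.actor.Cancellable
import akka.actor.typed.{ActorRef, ActorSystem}

import scala.concurrent.duration._

def schedulePings(queueActor: ActorRef[PriorityQueueActor.Command])(
    implicit system: ActorSystem[_]): Cancellable =
  system.scheduler.scheduleWithFixedDelay(10.seconds, 10.seconds) { () =>
    // Nudge the actor so any persisted-but-unprocessed messages (e.g. left
    // over after a crash and restart) eventually get picked up.
    queueActor ! PriorityQueueActor.ProcessNext
  }(system.executionContext)
```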
As with the stream, this is a single-logical-thread-per-partition process. It's possible that you're multiplexing many partitions' worth of state per physical Kafka partition, in which case you can have several of these actors and send multiple asks from the ingest stream. If you do this, the periodic ping is probably best driven by a stream fed by an Akka Persistence Query that yields the identifiers of all the persistent actors, as sketched below.
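A sketch of that query-driven ping; the read-journal plugin id depends on your journal (the id below is just an example for Akka Persistence JDBC), and resolving each actor from its persistence id (via Cluster Sharding or a local registry) is left as an assumption:

```scala
import akka.actor.typed.ActorSystem
import akka.persistence.query.PersistenceQuery
import akka.persistence.query.scaladsl.PersistenceIdsQuery

def pingAll()(implicit system: ActorSystem[_]): Unit = {
  val readJournal = PersistenceQuery(system)
    .readJournalFor[PersistenceIdsQuery]("jdbc-read-journal") // plugin id is an example

  readJournal
    .persistenceIds() // live stream of every known persistence id
    .runForeach { persistenceId =>
      // resolve the actor for this id and send it PriorityQueueActor.ProcessNext
    }
}
```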
Note that the reordering in this problem makes it fundamentally a race, and thus non-deterministic. In this design sketch, the race arises because messages M1 from actor B and M2 from actor C, both sent to actor A, may be received in either order (if actor B sent a message M3 to actor A after it sent M1, M3 would arrive after M1 but could arrive before or after M2). In a different design, the race could instead depend on processing speed relative to the latency with which Kafka makes a message available for consumption.
