I am evaluating different streaming/messaging services for use as an Event Bus. One of the dimensions I am considering is the ordering guarantee provided by each service. Two of the options I am exploring are AWS Kinesis and Kafka, and at a high level it looks like they both provide similar ordering guarantees: records are guaranteed to be consumable in the same order they were published only within a given shard/partition.
It seems that the AWS Kinesis APIs expose the ids of a shard's parent shard(s), so that Consumer Groups using the KCL can ensure records with the same partition key are consumed in the order they were published (assuming a single-threaded publisher), even while shards are being split and merged.
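For reference, this is roughly how I've been inspecting the shard lineage (a minimal sketch assuming the AWS SDK for Java v2; "my-event-bus" is a placeholder stream name, and pagination is ignored for brevity):

```java
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.ListShardsRequest;
import software.amazon.awssdk.services.kinesis.model.Shard;

public class ShardLineage {
    public static void main(String[] args) {
        try (KinesisClient kinesis = KinesisClient.create()) {
            ListShardsRequest request = ListShardsRequest.builder()
                    .streamName("my-event-bus") // placeholder stream name
                    .build();
            for (Shard shard : kinesis.listShards(request).shards()) {
                // The parent shard id(s) are what let a consumer (or the KCL)
                // finish reading a parent shard before moving on to its children
                // after a split or merge.
                System.out.printf("%s parent=%s adjacentParent=%s%n",
                        shard.shardId(), shard.parentShardId(), shard.adjacentParentShardId());
            }
        }
    }
}
```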
My question is: does Kafka provide any similar functionality, such that records published with a specific key can be consumed in order even if partitions are added while messages are being published? From my reading, my understanding is that partition selection (when you specify keys on your records) behaves along the lines of HASH(key) % PARTITION_COUNT. So, if additional partitions are added, the partition to which all messages with a specific key are published may change (and I've proven locally that it does; see the sketch below). Meanwhile, the Group Coordinator/Leader will reassign partition ownership among the Consumers in the Consumer Groups receiving records from that topic. But after reassignment there may be records (potentially unconsumed ones) with the same key sitting in two different partitions. So, at the Consumer Group level, is there any way to ensure that unconsumed records with the same key, now spread across different partitions, are consumed in the order they were published?
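This is roughly how I convinced myself that the target partition moves when the partition count changes (a sketch using the murmur2/toPositive helpers from kafka-clients; my understanding is that this mirrors what the default partitioner does for keyed records, but that is an assumption on my part, and "order-1234" is just a made-up key):

```java
import org.apache.kafka.common.utils.Utils;

import java.nio.charset.StandardCharsets;

public class KeyPartitionDemo {
    // My understanding of how the default partitioner maps a key to a partition:
    // murmur2 hash of the serialized key, modulo the current partition count.
    static int partitionFor(String key, int partitionCount) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        return Utils.toPositive(Utils.murmur2(keyBytes)) % partitionCount;
    }

    public static void main(String[] args) {
        String key = "order-1234"; // hypothetical record key
        // Same key, different partition counts -> potentially different partitions.
        System.out.println("6 partitions  -> partition " + partitionFor(key, 6));
        System.out.println("12 partitions -> partition " + partitionFor(key, 12));
    }
}
```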
I have very little experience with both these services, so my understanding may be flawed. Any advice is appreciated!