
How can I implement the request/reply pattern with Apache Kafka? The implementation should also keep working when service instances are scaled (e.g. pods in Kubernetes).

In RabbitMQ, I can create a temporary, non-durable queue per instance that receives responses from other services. This queue is removed automatically when the connection is lost (i.e. when the service instance goes down).
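A minimal sketch of what I mean, assuming the amqplib client (the URL and handler are just placeholders):

```typescript
// Per-instance reply queue in RabbitMQ, sketched with amqplib.
import * as amqp from 'amqplib';

async function setupReplyQueue() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  // Server-named, exclusive queue: deleted automatically when this connection goes away.
  const { queue } = await ch.assertQueue('', { exclusive: true });
  await ch.consume(queue, (msg) => {
    if (msg) {
      console.log('response', msg.properties.correlationId, msg.content.toString());
      ch.ack(msg);
    }
  });
  return { ch, replyQueue: queue }; // pass replyQueue as the replyTo of outgoing requests
}
```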

How can I do this with Kafka, and how does such a solution scale?

I am using Node.js.

Arthur
  • This strikes me as an X:Y question. I strongly suspect any implementation in Kafka is going to be inferior to something more suited to point-to-point synchronous messaging (e.g. gRPC). – Levi Ramsey Sep 03 '21 at 09:58
  • A message-queue-based implementation has some advantages: if the consumer is down, the request is still received once the consumer is back up, and it responds to the producer with some delay. A gRPC call would fail immediately while the consumer is disconnected, and gRPC also has to be configured. – Arthur Sep 03 '21 at 11:24
  • Highlighting a Rabbit implementation where you're taking a queue down when there's no instance listening (thus deleting whatever requests are in the queue, unless I'm mistaken) indicates that you don't actually care about that advantage. – Levi Ramsey Sep 03 '21 at 12:22
  • Ah, I see that the Rabbit queue is on the response side... – Levi Ramsey Sep 03 '21 at 12:25

1 Answer


Given that your Rabbit example only concerns the channel for receiving the response (not sending the request), and since Kafka doesn't handle dynamic topic creation/deletion particularly well, it's most practical to have a single response topic for that service, with however many partitions you need to meet your throughput goal. Each requestor instance chooses a partition to consume at random (multiple instances may consume the same partition) and sends that partition number and a unique correlation ID along with the request. The response is then produced to the selected partition, keyed with the correlation ID. Requestors track the set of correlation IDs they're waiting for and ignore responses whose keys aren't in that set.
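A minimal sketch of this scheme in Node (TypeScript), assuming the node-rdkafka client; the topic names, partition count, and payload fields (`replyPartition`, `correlationId`) are placeholders, not anything prescribed by Kafka:

```typescript
// Partition + correlation-ID request/reply scheme, sketched with node-rdkafka.
import * as Kafka from 'node-rdkafka';
import { randomUUID } from 'crypto';

const BROKERS = 'localhost:9092';
const RESPONSE_TOPIC = 'service-responses';
const RESPONSE_PARTITIONS = 12; // total partitions of the response topic
// Each instance picks one partition at random; several instances may pick the same one.
const myPartition = Math.floor(Math.random() * RESPONSE_PARTITIONS);

// correlationId -> resolver for the in-flight request
const pending = new Map<string, (response: unknown) => void>();

// Manually assign the chosen partition; no consumer-group rebalancing is involved.
const consumer = new Kafka.KafkaConsumer(
  { 'group.id': `requestor-${randomUUID()}`, 'metadata.broker.list': BROKERS },
  { 'auto.offset.reset': 'latest' }
);
consumer.connect();
consumer.on('ready', () => {
  consumer.assign([{ topic: RESPONSE_TOPIC, partition: myPartition }]);
  consumer.consume();
});
consumer.on('data', (msg) => {
  const correlationId = msg.key?.toString();
  if (!correlationId) return;
  const resolve = pending.get(correlationId);
  if (!resolve) return; // response meant for another instance sharing this partition
  pending.delete(correlationId);
  resolve(JSON.parse(msg.value!.toString()));
});

const producer = new Kafka.Producer({ 'metadata.broker.list': BROKERS });
producer.connect(); // in real code, wait for the producer's 'ready' event before sending

// Send a request carrying the reply partition and correlation ID, and await the response.
function request(requestTopic: string, payload: object): Promise<unknown> {
  const correlationId = randomUUID();
  return new Promise((resolve) => {
    pending.set(correlationId, resolve);
    const body = { ...payload, replyPartition: myPartition, correlationId };
    producer.produce(requestTopic, null, Buffer.from(JSON.stringify(body)), correlationId);
  });
}

// The responding service replies to the partition named in the request, keyed by the correlation ID:
//   producer.produce(RESPONSE_TOPIC, req.replyPartition, Buffer.from(responseJson), req.correlationId);
```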

The risk of collisions in correlation IDs can be mitigated by having the requestors coordinate among themselves (possibly using something like etcd/zookeeper/consul).

This isn't a messaging pattern for which Kafka is that well-suited (it's definitely not best of breed for this), but it's workable.

Levi Ramsey
  • But what if I want to receive the response in the same instance that produced the request? I want to do something like this: `// producer.ts async function doSmth() { const order = await _bus.createRequestClient(OrderDetailsRequest).send(new OrderDetailsRequest(orderId)); if (order.is(OrderDetails)) { console.log(order.message.amount); } else if (order.is(OrderNotFound)) { console.log('order not found'); } }` I don't know how many partitions my app needs because Kubernetes scales instances dynamically. – Arthur Sep 04 '21 at 08:21
  • The number of partitions in this case (since we're deliberately not using the consumer-group functionality, which would actively work against the guarantees you're looking for) doesn't have anything to do with the number of instances. There can be more partitions than instances or fewer partitions than instances. The requestor includes the partition it expects the response on and the correlation ID. Since the requestor is subscribed to only that partition, it's guaranteed to receive the response. – Levi Ramsey Sep 04 '21 at 13:26
  • Other instances of the requestors can also direct responses to that partition, so each instance will want a way to filter out responses not intended for it: the correlation ID enables that. – Levi Ramsey Sep 04 '21 at 13:29
  • It should be noted that I would not consider Kafka to be at all well-suited for request-response: I suggest either looking at how you can get around a request-response interaction between these two services or looking at using something that's not Kafka (e.g. Rabbit) for this particular message pattern. – Levi Ramsey Sep 04 '21 at 13:31