
Given the default configuration and this binding:

@Bean
public Function<Flux<Message<Input>>, Flux<Message<Output>>> process() {
  return input -> input
    .map(message -> {
      // simplified: map the Input payload to an Output and wrap it in a new Message
      return MessageBuilder.withPayload(mapToOutput(message.getPayload())).build(); // mapToOutput is a placeholder
    });
}

Is there any guarantee that the input message offset is committed only after the output has been written to Kafka? I don't need full transactions, and I can live with at-least-once delivery and possible duplicates, but I cannot lose the output message. I was unable to find this exact scenario in the docs. I believe the previous channel-based binding worked as I need it to, since it was blocking by nature, but I am not sure about the functional model.

B.Gen.Jack.O.Neill
  • Hey, I'm running into the same dilemma, but with ActiveMQ. Have you been able to figure it out yet? Appreciate the time... – Robert K. Aug 19 '21 at 10:39
  • Hi, I have concluded that reactive processing without some kind of upstream signal propagation is not really suited for this, and I don't believe Spring offers that at the moment. With Spring Kafka you can set manual ACK mode, but that only covers the input, and I don't know whether it could be paired with writing the result message back to Kafka. I switched to Kafka Streams, which natively supports exactly_once processing (which does exactly this), and now use Spring only as the KStream in/out binder; a rough sketch follows below. – B.Gen.Jack.O.Neill Aug 19 '21 at 11:48
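
For reference, the Kafka Streams route mentioned in the comment above might look roughly like the following with the Spring Cloud Stream Kafka Streams binder. This is only a minimal sketch, not the poster's actual code: Input, Output, mapToOutput and the class name are placeholders carried over from (or added to) the question, and the exactly-once behaviour comes from the standard Kafka Streams processing.guarantee property, which the binder can pass through its configuration settings.

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StreamsProcessorConfig { // hypothetical configuration class

  @Bean
  public Function<KStream<String, Input>, KStream<String, Output>> process() {
    // mapValues keeps the incoming record key; the binder wires up the
    // Kafka Streams consumer and producer for this function's bindings.
    // With processing.guarantee set to exactly_once, the consumed offsets are
    // committed in the same Kafka transaction as the produced output records.
    return input -> input.mapValues(in -> mapToOutput(in)); // mapToOutput is a placeholder
  }
}

With exactly-once enabled, Kafka Streams commits the consumed offsets as part of the producer transaction, which is what ties the offset commit to the output write in the way the question asks about.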

0 Answers