
I am looking for a way to use a producer with transactions, using fs2-kafka; however, the current TransactionalProducer seems to be geared toward an end-to-end workflow, i.e. consume-process-produce.

However, we would like to use it in a context where we are just producing messages to Kafka.

Is there a known way to achieve that with fs2-kafka? I have tried to see how, but it seems impossible; maybe I am missing something?

EDIT1

After double checking, it is clear that this use case is not supported. I'm curious as to why, though. Is there a specific reason that I may need to be aware of while implementing my own solution, or is it just that it has not been done and never will be, for no particular reason? Could someone shed some light on this?

MaatDeamon

2 Answers


Ultimately, the only thing the transactional producer adds beyond enable.idempotence=true, acks=all is that the consumer offsets get committed as part of producing the messages. Since the offsets being committed implies successful production and vice versa, this allows a consume-process-produce stream to process messages effectively once (Confluent arguably stretches the exactly-once terminology a little), assuming everything in the process step is also idempotent.
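
For reference, the supported shape looks roughly like this (a sketch following the fs2-kafka documentation; names such as records, CommittableProducerRecords.one and TransactionalProducerRecords vary between library versions, so treat the exact signatures as assumptions):

```scala
import scala.concurrent.duration._

import cats.effect.{IO, IOApp}
import fs2.kafka._

object TransactionalPassThrough extends IOApp.Simple {

  val consumerSettings =
    ConsumerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092")
      .withGroupId("group")
      .withAutoOffsetReset(AutoOffsetReset.Earliest)

  val producerSettings =
    TransactionalProducerSettings(
      "transactional-id",
      ProducerSettings[IO, String, String].withBootstrapServers("localhost:9092")
    )

  def run: IO[Unit] =
    TransactionalKafkaProducer
      .stream(producerSettings)
      .flatMap { producer =>
        KafkaConsumer
          .stream(consumerSettings)
          .subscribeTo("input-topic")
          .records
          .map { committable =>
            // The "process" step: build the output record and keep the consumer
            // offset attached, so it is committed inside the producer transaction.
            val record =
              ProducerRecord("output-topic", committable.record.key, committable.record.value)
            CommittableProducerRecords.one(record, committable.offset)
          }
          .groupWithin(500, 15.seconds)
          .map(TransactionalProducerRecords(_))
          .evalMap(producer.produce)
      }
      .compile
      .drain
}
```

The produce call is built around records paired with consumer offsets (CommittableProducerRecords), which is why a produce-only variant, or one committing positions from an external system, does not fit the API.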

Levi Ramsey
  • Levi, thanks for the answer. However, I am not sure what you would like me to understand from this. I sense that you are trying to make an important point, but I can't quite figure it out from your wording. – MaatDeamon Jul 08 '22 at 11:33
  • Are you saying that when coming from an external system it is not worth having transactions, even if you take the responsibility to ensure that you are fault tolerant and never produce duplicates on restart? Even beyond that, what if you need to ensure that messages produced to two topics are either all written or none of them are (while coming from an external system); isn't it still worth using transactions? – MaatDeamon Jul 08 '22 at 11:36
  • In our case, we commit to a special topic in Kafka the position we have reached in the external system at the same time as we produce our events. Hence we reproduce the same exactly-once behaviour as the Kafka-to-Kafka consume-process-produce case. – MaatDeamon Jul 08 '22 at 11:41
  • Transactional Kafka only prevents duplicate messages by committing the consumer offset as part of the production (because that commit prevents consuming the source message again). So it can only provide transactional guarantees in the context of a consume-process-produce scenario. Outside of that scenario, if it were possible to use the transactional producer, you would not get anything approaching an exactly-once guarantee. – Levi Ramsey Oct 30 '22 at 02:41
  • Thanks; that's actually why I upvoted your answer so late. Your last comment is what I have come to learn since then. – MaatDeamon Nov 01 '22 at 09:25
  • However @Levi Ramsey, just so readers understand: fs2-kafka does not support the scenario where those offsets come from an external system. Say you are reading from a database and producing to Kafka, and you want to save where you are in the database as you produce; that is a legitimate scenario for transactions. fs2-kafka does not make it easy, because it wires everything for the use case where the saved offsets come from a Kafka topic, as opposed to an external system. – MaatDeamon Nov 01 '22 at 10:38
  • Actually, it is not just that it does not make it easy: the library by design does not support that scenario at all, which I think is a miss. (A sketch of one way to do this against the raw client follows below.) – MaatDeamon Nov 01 '22 at 10:39
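
Regarding the external-system scenario in the comments above: since fs2-kafka's transactional producer is wired to Kafka consumer offsets, one fallback is to drop down to the underlying Java client, whose transactions still give atomicity across everything sent within them, e.g. your event records plus a record carrying the external-source position. A minimal sketch, with the topic names and the position bookkeeping purely illustrative:

```scala
import java.util.Properties

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object ExternalSourceTransactionalProduce {

  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
    props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "db-to-kafka-1")
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")

    val producer =
      new KafkaProducer[String, String](props, new StringSerializer, new StringSerializer)

    producer.initTransactions()

    // Hypothetical batch read from the external system, together with the position
    // reached after reading it (e.g. a row id or change-log offset).
    val batch: List[(String, String)] = List("k1" -> "v1", "k2" -> "v2")
    val positionAfterBatch = "42"

    try {
      producer.beginTransaction()
      batch.foreach { case (key, value) =>
        producer.send(new ProducerRecord[String, String]("events-topic", key, value))
      }
      // Record how far we got in the source inside the same transaction, so the
      // events and the position marker become visible together or not at all.
      producer.send(
        new ProducerRecord[String, String]("source-position-topic", "db-source", positionAfterBatch)
      )
      producer.commitTransaction()
    } catch {
      case e: Exception =>
        producer.abortTransaction()
        throw e
    } finally {
      producer.close()
    }
  }
}
```

Downstream consumers need isolation.level=read_committed for the atomicity to be visible, and the real client treats some exceptions (e.g. ProducerFencedException) as fatal, in which case the producer must be closed rather than aborted.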

It's possible using a common queue.