
I'm building a POC that consists of a Kafka cluster deployed on Kubernetes and three Spring Boot apps that read from Kafka and write to JMS, and vice versa.

I want to know how data integrity and consistency are ensured. For example: if a producer node fails, how does the consumer continue reading messages without losing consistency? Is there a detailed scenario to test this feature?

aymen0406

1 Answer


See the Spring for Apache Kafka documentation about transactions.

If you are new to Spring, also see the Spring Framework's Transaction Support documentation, which Spring for Apache Kafka leverages.
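If you want the Kafka side of the bridge to publish transactionally, the key is giving the producer factory a transaction-id prefix. Below is a minimal configuration sketch for one of your Boot apps; the bootstrap server, prefix and bean names are placeholders, and in a Spring Boot app you can get the same effect by just setting the `spring.kafka.producer.transaction-id-prefix` property. The `jmsTransactionManager` bean is used by the listener sketches further down.

```java
import javax.jms.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.JmsTransactionManager;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class TxConfig {

    // Transactional producer factory: setting a transaction-id prefix is what
    // makes the KafkaTemplate below transactional (values are placeholders).
    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(props);
        pf.setTransactionIdPrefix("tx-");
        return pf;
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate(ProducerFactory<String, String> pf) {
        return new KafkaTemplate<>(pf);
    }

    // Local JMS transaction manager used by the listener sketches below.
    @Bean
    public JmsTransactionManager jmsTransactionManager(ConnectionFactory connectionFactory) {
        return new JmsTransactionManager(connectionFactory);
    }
}
```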

Kafka cannot participate in JTA (XA) transactions, so your only option is Best Efforts 1PC commits via Spring transaction synchronization, and to deal with the (small) possibility of redelivery of messages that have already been processed (after a failure).
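For the Kafka-to-JMS direction, here is a minimal consumer-side sketch of what that best-effort 1PC can look like, assuming the `jmsTransactionManager` bean from the config above; the topic, group and queue names are placeholders.

```java
import org.springframework.jms.core.JmsTemplate;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class KafkaToJmsBridge {

    private final JmsTemplate jmsTemplate;

    public KafkaToJmsBridge(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // Topic, group and queue names are placeholders.
    // The JMS send runs in a local JMS transaction; the listener container only
    // commits the Kafka offset after this method returns successfully. A crash
    // between the JMS commit and the offset commit therefore shows up as a
    // redelivered (duplicate) record, not a lost one.
    @KafkaListener(topics = "orders", groupId = "kafka-to-jms")
    @Transactional("jmsTransactionManager")
    public void forward(String payload) {
        jmsTemplate.convertAndSend("orders.queue", payload);
    }
}
```

The commit order is the point of this arrangement: the JMS work commits first, the Kafka offset last, so a failure in between can only cause the redelivery case described above.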

Gary Russell
  • Why the need for a Spring transaction? Isn't Kafka capable of redelivering the remaining messages (after a failure) from a given offset? – aymen0406 Apr 22 '20 at 00:19
  • Of course it can, but you said `read from kafka and write to jms and vice versa`. If you want both to be committed or rolled back (best-effort 1PC), then you need to synchronize the transactions. Otherwise, one might commit and the other roll back. That can still happen, but the chances are much smaller. – Gary Russell Apr 22 '20 at 00:24
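To make that comment concrete for the JMS-to-Kafka direction, here is a hedged sketch of synchronizing the two transactions. It assumes the transactional `KafkaTemplate` and `jmsTransactionManager` from the earlier config sketch; the destination and topic names are placeholders.

```java
import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.connection.JmsTransactionManager;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Configuration
class JmsToKafkaConfig {

    // Container factory that starts a JMS transaction for every delivery, so a
    // listener exception rolls the message back onto the queue for redelivery.
    @Bean
    public DefaultJmsListenerContainerFactory jmsTxListenerFactory(ConnectionFactory cf,
            JmsTransactionManager jmsTransactionManager) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(cf);
        factory.setTransactionManager(jmsTransactionManager);
        return factory;
    }
}

@Component
class JmsToKafkaBridge {

    private final KafkaTemplate<String, String> kafkaTemplate;

    JmsToKafkaBridge(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Destination and topic names are placeholders. Because the KafkaTemplate is
    // transactional (see the config sketch above), its send is synchronized with
    // the JMS transaction started by the container: an exception here rolls back
    // both, and the broker redelivers the message. Only a failure that lands
    // exactly between the two commits can leave the systems briefly out of step,
    // which is the best-effort-1PC caveat described in the answer.
    @JmsListener(destination = "orders.queue", containerFactory = "jmsTxListenerFactory")
    public void bridge(String payload) {
        kafkaTemplate.send("orders", payload);
    }
}
```

As a simple failure scenario for the POC, throw an exception from the listener after the `send()` call: the JMS broker should redeliver the message, and a Kafka consumer running with `isolation.level=read_committed` should never see the aborted record.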