I'm developing a reactive application using Quarkus and Panache reactive.
When I consume a message from Kafka I have to update a database, and I also want to handle duplicate messages.
In Avoiding message losses, duplication and lost / multiple processing in Kafka, the author suggests performing two inserts in the DBMS: one to keep track of the message offset and one for the business logic.
If the consumer fails and is restarted, the same message can be processed again; the first insert will then fail on the duplicate offset, and the second insert has to be skipped.
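For this to work, the offset-tracking entity needs a unique constraint on (topic, partition, offset), so that re-inserting an already-processed offset fails. A minimal sketch of what I have in mind (the @UniqueConstraint mapping and the column names are my own assumptions; javax.persistence becomes jakarta.persistence on Quarkus 3):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Table;
import javax.persistence.UniqueConstraint;
import io.quarkus.hibernate.reactive.panache.PanacheEntity;

// Sketch: offset-tracking entity. The unique constraint makes a second
// insert of the same (topic, partition, offset) fail, signalling a duplicate.
@Entity
@Table(uniqueConstraints = @UniqueConstraint(columnNames = {"topic", "partition_n", "offset_n"}))
public class KafkaState extends PanacheEntity {
    public String topic;
    @Column(name = "partition_n") // "partition" can clash with SQL keywords
    public int partition;
    @Column(name = "offset_n")
    public long offsetN;
}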
Based on the information in the Quarkus Kafka guide, I wrote the following code:
@Inject
Mutiny.Session session;

@ActivateRequestContext
public Uni<Void> persist(ConsumerRecord<Long, String> record) {
    return session.withTransaction(t -> {
        // first insert: keep track of the consumed offset
        KafkaState state = new KafkaState();
        state.topic = record.topic();
        state.partition = record.partition();
        state.offsetN = record.offset();
        state.persist();
        // second insert: the business entity
        Event event = new Event();
        event.key = record.key();
        event.message = record.value();
        return event.persistAndFlush().replaceWithVoid();
    }).onTermination()
      .call(() -> session.close())
      .onFailure().call(t -> {
          System.out.println(t);
          return session.close();
      });
}
Unfortunately, the previous code disregards the first entity, KafkaState: it is never inserted into the database.
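If I understand Mutiny correctly, a Uni does nothing until it is subscribed to, so I suspect the Uni returned by state.persist() is simply dropped instead of being composed into the pipeline. Something like the following chaining is what I'm after (a sketch, not tested):

return session.withTransaction(t -> {
    KafkaState state = new KafkaState();
    state.topic = record.topic();
    state.partition = record.partition();
    state.offsetN = record.offset();
    Event event = new Event();
    event.key = record.key();
    event.message = record.value();
    return state.persist()                        // first insert is now part of the chain
            .chain(() -> event.persistAndFlush()) // second insert runs after the first
            .replaceWithVoid();
});

Is this the correct way to combine the two persists in a single transaction?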