
We are reading messages from a Kafka topic. I was under the (false) impression that by setting EnableAutoOffsetStore/enable.auto.offset.store = false you, as a consumer, could choose when to move the offset forward.
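
For context, the consumer configuration looks roughly like this (a sketch, not the exact code; the bootstrap servers, group id, and topic name are placeholders):

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",   // placeholder
    GroupId = "my-group",                  // placeholder
    EnableAutoCommit = false,              // we do not auto-commit
    EnableAutoOffsetStore = false          // only offsets passed to StoreOffset should be committed
};

using var kafkaConsumer = new ConsumerBuilder<string, string>(config).Build();
kafkaConsumer.Subscribe("my-topic");       // placeholder topic name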

We were using it like so:

while (!cancellationToken.IsCancellationRequested)
{
    try
    {
        var consumeResult = kafkaConsumer.Consume(cancellationToken);
        // process consumeResult.Message
        kafkaConsumer.StoreOffset(consumeResult);
    }
    catch
    {
        // delay and re-try
    }
}

In this way we could re-read the message if we got an exception. We have 1 producer, 1 consumer, 1 topic, 1 partition (and 1 consumer group).

How should this behavior be achieved?

Cowborg

1 Answer


If processing throws an exception, StoreOffset is never reached, so the offset for that record is never stored or committed.

You can store the partition + offset of the record that failed and Seek back to that offset to consume it again.
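
For example, a rough sketch based on the loop from the question (Seek and TopicPartitionOffset are Confluent.Kafka APIs; the retry/delay handling is assumed):

while (!cancellationToken.IsCancellationRequested)
{
    TopicPartitionOffset lastOffset = null;
    try
    {
        var consumeResult = kafkaConsumer.Consume(cancellationToken);
        lastOffset = consumeResult.TopicPartitionOffset;

        // process consumeResult.Message

        kafkaConsumer.StoreOffset(consumeResult);
    }
    catch (OperationCanceledException)
    {
        break;  // shutdown requested
    }
    catch
    {
        if (lastOffset != null)
        {
            // Rewind the in-memory fetch position so the next Consume()
            // returns the failed record again.
            kafkaConsumer.Seek(lastOffset);
        }
        // delay and re-try
    }
}

Seek only moves the consumer's local fetch position; it does not commit anything on its own.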

Alternatively, you can skip over the failed offset in your main processing loop, write that record to a dead-letter topic, and have a separate consumer "process it again/differently".
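
A sketch of that variant (the dead-letter topic name "my-topic.dlq", the string key/value types, and the separate deadLetterProducer are assumptions, not something from the question):

var consumeResult = kafkaConsumer.Consume(cancellationToken);
try
{
    // process consumeResult.Message
}
catch
{
    // Hand the failed record off to a dead-letter topic so this partition
    // is not blocked; a separate consumer can retry it later or differently.
    deadLetterProducer.Produce("my-topic.dlq", new Message<string, string>
    {
        Key = consumeResult.Message.Key,
        Value = consumeResult.Message.Value
    });
}
// Store the offset either way so the main loop keeps moving forward.
kafkaConsumer.StoreOffset(consumeResult);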

OneCricketeer
  • Thanks for your response. First of all, your suggestion about saving the offset and seeking back to it for a retry works, thanks! But I can't help feeling that it's a workaround for something that should be simpler or more built in. I'm new to Kafka and want to learn to do it correctly. One thing that confuses me: I don't commit and still the offset is increased on the next Consume(). I was hoping that when I don't commit and read again, I would get the same message on the next read. Is this not the expected behaviour of not committing (having both AutoCommit=false, AutoOffsetStore=false)? – Cowborg May 21 '22 at 14:10
  • This is how any Kafka client works. You would need to kill the application on any exception, then let some external process restart it (supervisor, Docker/Kubernetes, etc.). Only then would it pick up at the last failed offset. The only problem there is that it will not be able to track how many times it restarted, so it might restart forever (until message retention kicks in), and the whole consumer group will keep pausing to rebalance. Saving the offset in memory isn't the best approach either, because it assumes your consumer won't rebalance between polls – OneCricketeer May 22 '22 at 12:33
  • Ok, good to know. I'm guessing rebalancing is not an issue since we only have 1 partition (and 1 consumer). – Cowborg May 22 '22 at 19:11