
Regarding the new negative-acknowledgement feature now supported by Spring Kafka, according to /spring-kafka/docs/2.4.4.RELEASE/:

"... Starting with version 2.3, the Acknowledgment interface has two additional methods nack(long sleep) and nack(int index, long sleep). The first one is used with a record listener, the second with a batch listener. Calling the wrong method for your listener type will throw an IllegalStateException.

...

With a record listener, when nack() is called, any pending offsets are committed, the remaining records from the last poll are discarded, and seeks are performed on their partitions so that the failed record and unprocessed records are redelivered on the next poll(). The consumer thread can be paused before redelivery, by setting the sleep argument. This is similar functionality to throwing an exception when the container is configured with a SeekToCurrentErrorHandler. "
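
For reference, a record listener using this API might look something like the following (a sketch of my understanding; the topic name, the save() call, and manual ack mode are assumptions):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;

public class NackingListener {

    // requires the container's ackMode to be MANUAL (or MANUAL_IMMEDIATE)
    @KafkaListener(topics = "orders") // placeholder topic
    public void listen(ConsumerRecord<String, String> record, Acknowledgment ack) {
        try {
            save(record.value()); // hypothetical persistence call
            ack.acknowledge();    // commit this offset
        }
        catch (Exception e) {
            // commit pending offsets, discard the rest of this poll, and
            // seek back so this record is redelivered after ~5 seconds
            ack.nack(5000);
        }
    }

    private void save(String value) { /* hypothetical */ }
}
```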

Well, if some error happens on the consumer side, say a failure to save to the database, and the consumer doesn't call acknowledgment.acknowledge(), then as far as I understand the message is still in the poll and it will be read/consumed again. I guess someone could say that with nack(..., some time) the consumer can sleep, giving it the chance to read/consume the message again a bit later and not hit the error. If keeping on listening to the topic isn't an issue, my straight question is:

Is there any further point in using nack instead of simply not acknowledging?

As far as I can see, the message will stay in the poll for longer than the nack sleep anyway. So, by the way, if the consumer keeps trying to get the message and save it, it will succeed, assuming the issue is fixed in less than the sleep time.

A surrounding point or advantage would be if the producer somehow got notified that nack was used. If so, I could see some worth in it in specific scenarios, say, while using Log Compaction (interested only in the last message status) or Kafka as a long-term storage service (future releases will provide this, I guess - KIP 405).

Regarding more general exceptions, I tend to follow approaches like configuring a SeekToCurrentErrorHandler and throwing the exception.
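
For example, something along these lines (a sketch; the bean wiring and back-off values are placeholders):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

public class ContainerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // seek back to the failed record and retry it up to 2 more times,
        // 1 second apart, before giving up
        factory.setErrorHandler(new SeekToCurrentErrorHandler(new FixedBackOff(1000L, 2L)));
        return factory;
    }
}
```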

Jim C

1 Answer


nack is simply an alternative to using a SeekToCurrentErrorHandler - it was added before we made the SeekToCurrentErrorHandler the default error handler (previously, the default simply logged the error).

The STCEH is more sophisticated in that you can configure the number of retries and a recoverer to be called after retries are exhausted (e.g. a DeadLetterPublishingRecoverer).
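
For example, a typical setup might look like this (a sketch; the KafkaTemplate bean and the back-off values are assumptions):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

public class ErrorHandlerConfig {

    @Bean
    public SeekToCurrentErrorHandler errorHandler(KafkaTemplate<Object, Object> template) {
        // after retries are exhausted, publish the failed record elsewhere
        DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
        // 3 delivery attempts in total (1 original + 2 retries), 1 second apart
        return new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(1000L, 2L));
    }
}
```

With the default destination resolver, the DeadLetterPublishingRecoverer publishes the failed record to a topic named after the original topic with a `.DLT` suffix.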

nack is different to "not acknowledge"; with the latter, if you don't throw an exception, you'll get the next record from the poll (or the next poll if this was the last record); you will not get a redelivery of a not-acked record unless you either use nack or a STCEH and throw an exception.
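To make the difference concrete, here is a sketch (manual ack mode, the topic name, and the process() call are assumptions):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;

public class ManualAckListener {

    // requires the container's ackMode to be MANUAL (or MANUAL_IMMEDIATE)
    @KafkaListener(topics = "orders") // placeholder topic
    public void listen(String value, Acknowledgment ack) {
        if (!process(value)) { // hypothetical processing step
            // the offset is NOT committed, but there is NO redelivery either:
            // the container simply delivers the next record from the poll
            return;
        }
        ack.acknowledge();
    }

    private boolean process(String value) { /* hypothetical */ return true; }
}
```

The uncommitted offset only matters after a restart or rebalance; within the running consumer, the in-memory position has already moved past the record.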

Gary Russell
  • Is there any reasonable way to let the producer be aware that either "nack" or "STCEH+exception" happens on the consumer side? You may wonder why. Well, our company has two datacenters, each one with its own Kafka cluster, and they aren't synchronized. We have instances of the same producer running in both datacenters getting government messages. Let's say the producer from datacenter1 posts 100 messages, the consumer gets the first 50, and suddenly our Kafka cluster crashes. Since producer gets ack for all 100 msgs, it is not going to post again. Is there some way to use some kind of "ack" from consumer to producer? – Jim C Jun 16 '20 at 18:34
  • 1
    `>Since producer gets ack for all 100 msgs`; I don't know what you mean by that, producers don't "get acks". `>Is there some way` Not built in; producers and consumers are independent; you could add a dead letter publishing recoverer to the STECH and have the producing code consume from the dead letter topic to get the failed records. – Gary Russell Jun 16 '20 at 19:53
  • By "producer gets ack" I mean "acks=all" was well successed (http://kafka.apache.org/documentation.html#producerconfigs " This means the leader will wait for the full set of in-sync replicas to acknowledge the record. "). As far as understand both producer and consumer get acks. From producer perspective when is safely posted according to acks configuration (1 or all). From consumer when it moves to read the next record. – Jim C Jun 16 '20 at 20:10
  • 1
    The two are unrelated; what we call acks on the consumer side are actually offset commits. The broker maintains two pointers for a consumer/topic/partition - the committed offset (where we'll start consuming the next time the consumer starts) and the current `position` - in memory, always advancing (unless a seek is performed) from where records will be retrieved on the next poll. If we poll 100 records and 51 fails (and we do nothing) you will get 52 next. If you nack or throw an exception (with a STCEH), the container will reposition at 51 so it will be redelivered on the next poll. – Gary Russell Jun 16 '20 at 21:03
  • @GaryRussell In which version was SeekToCurrentErrorHandler made the default error handler? I tried going over the release notes, but could not find it. – Suraj Menon Sep 21 '20 at 06:36
  • 2.5 - see [What's New](https://docs.spring.io/spring-kafka/docs/2.5.6.RELEASE/reference/html/#x25-container). – Gary Russell Sep 21 '20 at 13:27
  • @GaryRussell As you said: "The broker maintains two pointers.... the committed offset .... and the current position - in memory" - could this committed offset serve in onPartitionsAssigned of a ConsumerRebalanceListener to do a consumer.seek()? – Alexander Sep 17 '21 at 21:02
  • If there is a committed offset, it will be the same as the position in onPartitionsAssigned(). Position is reset on a rebalance. Yes, you can safely seek there (see the sketch after this comment thread). – Gary Russell Sep 17 '21 at 21:20
  • @GaryRussell Can I call nack(0)? Is 0 a valid value? – Ashika Umanga Umagiliya May 09 '22 at 06:12
  • Yes; it will retry immediately. – Gary Russell May 09 '22 at 13:35
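
Regarding the seek discussion above, a minimal sketch with the plain kafka-clients API (the class name and constructor wiring are hypothetical):

```java
import java.util.Collection;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class SeekToCommittedOnAssign implements ConsumerRebalanceListener {

    private final Consumer<?, ?> consumer;

    public SeekToCommittedOnAssign(Consumer<?, ?> consumer) {
        this.consumer = consumer;
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        for (TopicPartition tp : partitions) {
            OffsetAndMetadata committed = consumer.committed(tp);
            if (committed != null) {
                // safe, but normally a no-op: right after assignment the
                // position already equals the committed offset
                consumer.seek(tp, committed.offset());
            }
        }
    }
}
```

As noted in the comments, the position already equals the committed offset right after assignment, so seeking there is safe but normally redundant.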