
I'm getting the below exception while starting a Kafka consumer.

org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions{test-0=29898318}

Kafka version: 9.0.0, Java 7

basit raza

2 Answers


So you are trying to access offset 29898318 in topic test, partition 0, which is not available right now.

There could be two causes for this:

  1. Your topic partition 0 may not have that many messages
  2. The message at offset 29898318 might already have been deleted by the retention period

To avoid this you can do one of the following:

  1. Set the auto.offset.reset config to either earliest or latest (see the Java sketch after the command below). You can find more info regarding this here
  2. Get the smallest offset available for a topic partition by running the following Kafka command line tool (--time -2 asks for the earliest available offset; -1 would return the latest)

command:

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <broker-ip:9092> --topic <topic-name> --time -2
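
For reference, here is a minimal sketch of how the reset policy is set on a plain Java consumer; the broker address, group id, and deserializer choice are placeholders, and the topic name is taken from the question:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ResetPolicyExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-ip:9092"); // placeholder broker address
        props.put("group.id", "my-consumer-group");       // placeholder group id
        // Valid values are "earliest", "latest" and "none"; with "none" the
        // consumer throws OffsetOutOfRangeException when the committed offset
        // is no longer available, which is the behaviour seen in the question.
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("test")); // topic from the question
        try {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        } finally {
            consumer.close();
        }
    }
}

Note that earliest means "jump back to the oldest available offset" while latest means "skip ahead to the newest", so the choice decides whether you re-read or drop the messages in between.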

Hope this helps!

avr
  • Thanks buddy, I tried this but it doesn't work. `auto.offset.reset` should be `latest`, `earliest` or `none` – basit raza May 23 '16 at 07:17
  • Try with `latest` – avr May 24 '16 at 10:55
  • Was getting this error while `auto.offset.reset=latest`. Had to configure a new `group.id` to clean Kafka's offset state, then the consumer started working. – CᴴᴀZ Nov 25 '19 at 09:30
  • @BdEngineer `retention_period` has no relation with `group.id`. Setting a new `group.id` refreshes the meta (at Broker) for the consumer group. Since this is an edge case, there is no _permanent_ (configurable?) solution. – CᴴᴀZ Apr 16 '20 at 03:09

I hit this SO question when running a Kafka Streams state store with a specific changelog topic config:

  • cleanup.policy=compact,delete
  • retention of 4 days

If Kafka Streams still has a snapshot file pointing to an offset that doesn't exist anymore, the restore consumer is configured to fail; it doesn't fall back to the earliest offset. This scenario can happen when very little data comes in or when the application is down for a while. In both cases, when there's no commit within the changelog retention period, the snapshot file won't be updated. (This is on a per-partition basis.)

The easiest way to resolve this issue is to stop your Kafka Streams application, remove its local state directory, and restart the application.
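
If you'd rather not delete the directory by hand, Kafka Streams also exposes a cleanUp() method that removes the instance's local state directory; it may only be called while the application is not running. A minimal sketch, where the application id, broker address, and topic names are placeholders:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class CleanupExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-ip:9092"); // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // placeholder topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        // cleanUp() deletes this instance's local state directory (state.dir);
        // it must be called before start() or after close(), never while running.
        streams.cleanUp();
        streams.start();

        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

Keep in mind that cleanUp() wipes all local state for this instance, so the state stores are rebuilt from the changelog topics on the next start, which can take a while for large stores.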

Tim Van Laer
  • Check the `state.dir` config setting of the Kafka Streams application: https://kafka.apache.org/10/documentation/streams/developer-guide/config-streams.html#state-dir – Tim Van Laer Oct 30 '19 at 17:29