I have a simple requirement: read Kafka messages and store them in a database. I am using Spring Kafka in batch listener mode. I have gone through the Spring Kafka docs, but it is still not clear to me whether, in batch listener mode, the DB transaction is committed for the whole batch, and whether the complete transaction is rolled back in case of a failure.

In case of a failure, will it seek the same set of records again?

I have the below configuration:

props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaConfigProperties.getBootstrapservers());
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaConfigProperties.getConsumer().getGroupid());
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, kafkaConfigProperties.getConsumer().getOffset());
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 250);
props.put(ApplicationConstant.KAFKA_SCHEMA_URL_PROPERTY, kafkaConfigProperties.getSchemaregistry());
@Bean
public ConcurrentKafkaListenerContainerFactory<String, GenericRecord> kafkaListenerContainerFactory(KafkaConfigProperties kafkaConfigProperties) {
    ConcurrentKafkaListenerContainerFactory<String, GenericRecord> factory =
            new ConcurrentKafkaListenerContainerFactory<>();

    factory.setConsumerFactory(consumerFactory(kafkaConfigProperties));
    factory.setConcurrency(2);
    factory.setBatchListener(true);
    ContainerProperties containerProperties = factory.getContainerProperties();
    containerProperties.setAckOnError(false);
    containerProperties.setAckMode(AckMode.BATCH);
    return factory;
}
1 Answer

You need to add a `SeekToCurrentBatchErrorHandler` or `RecoveringBatchErrorHandler` to replay the batch. The `RecoveringBatchErrorHandler` is the default error handler with version 2.5 and later.
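For pre-2.5 versions, where neither handler is the default, a minimal sketch of registering the error handler on the question's existing factory might look like this (the `factory` variable is the one from the question's `kafkaListenerContainerFactory` bean):

```java
// Register a batch error handler that seeks back to the unprocessed
// offsets, so the whole failed batch is redelivered on the next poll.
factory.setBatchErrorHandler(new SeekToCurrentBatchErrorHandler());
```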

See the documentation.

  • Thanks @GaryRussell. I want to make sure that all the records are committed to the DB as one batch transaction, or rolled back if any exception occurs. Is this the default behaviour when using a Kafka batch listener, or do I need to add Kafka transactions? – Ritesh Jun 23 '20 at 15:57
  • You don't need Kafka transactions for that use case; just a DB transaction manager and `@Transactional` on the listener (or something it calls). The listener will run in a DB transaction, which commits if the listener exits normally and rolls back if an exception is thrown, and the `SeekToCurrentBatchErrorHandler` will replay the batch. – Gary Russell Jun 23 '20 at 16:11
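A hypothetical sketch of the pattern described in that comment: a `@Transactional` batch listener whose DB work commits or rolls back as a unit. The listener class, topic name, repository, and entity mapping below are illustrative assumptions, not from the question:

```java
@Component
public class BatchPersistListener {

    // Hypothetical Spring Data repository for the target table
    private final MyRecordRepository repository;

    public BatchPersistListener(MyRecordRepository repository) {
        this.repository = repository;
    }

    @Transactional // one DB transaction spans the whole batch
    @KafkaListener(topics = "my-topic", containerFactory = "kafkaListenerContainerFactory")
    public void listen(List<ConsumerRecord<String, GenericRecord>> records) {
        // All inserts commit together; any exception rolls the whole
        // batch back, and the batch error handler re-seeks so the same
        // records are redelivered on the next poll.
        records.forEach(record -> repository.save(toEntity(record.value())));
    }

    private MyEntity toEntity(GenericRecord value) {
        // Mapping is application-specific; placeholder only.
        return new MyEntity(value.toString());
    }
}
```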
  • Thanks @GaryRussell. Wouldn't similar behaviour be provided by the `DefaultAfterRollbackProcessor` in case of a transaction rollback, or do I need to configure the `SeekToCurrentBatchErrorHandler`? Also, I am using spring-kafka version 2.1.12, which does not allow configuring a maximum number of retries or a fixed backoff. What is the default behaviour, and is it possible to configure infinite retries with version 2.1.12? – Ritesh Jun 24 '20 at 01:27
  • No; the `DefaultAfterRollbackProcessor` is only used with Kafka transactions; the listener container knows nothing about your DB transaction. You need a `SeekToCurrentBatchErrorHandler`. With old versions like that, it will retry indefinitely with no back-off. You really need to upgrade to a more recent version to get the improved functionality; 2.1.x is no longer supported. – Gary Russell Jun 24 '20 at 01:30