
I have a topic with two partitions and I'm using @RetryableTopic. On my app console I can see log entries like INFO o.s.k.r.RetryTopicConfigurer - Received message in dlt listener: {topic name with second partition}, and this is wrong because it's just another partition of the main topic, not the DLT topic. How can I hide or avoid those logs?

@RetryableTopic(
        attempts = "1",
        backoff = @Backoff(delay = 100, multiplier = 3.0),
        autoCreateTopics = "false",
        topicSuffixingStrategy = TopicSuffixingStrategy.SUFFIX_WITH_INDEX_VALUE,
        numPartitions = "2")
@KafkaListener(id = "ccn2_listener", topics = "test", groupId = "test", autoStartup = "${listen.auto.start:true}",
        topicPartitions = { @TopicPartition(topic = "ccn2-bam-raw-data", partitions = {"1"}) })
public void listen(ConsumerRecord<String, String> consumerRecord, Acknowledgment acknowledgment) throws IOException, InterruptedException {
    log.info(consumerRecord.key());
    log.info(consumerRecord.value());
    // ... some work with the record ...
    if (/* some condition */) {
        throw new RodaTableMappingException("Problem with mapping Kafka record, sending it on dlt topic");
    }
}

Application properties file:

elastic.apm.enabled=true
elastic.apm.server-url=url
elastic.apm.service-name=name
elastic.apm.secret-token=token
elastic.apm.environment=prod
elastic.apm.application-packages=package
elastic.apm.log-level=INFO
apminsight.console.logger=true

And I can see logs like this in my console:

2023-01-24 02:16:08,824 [topic_listener-dlt-0-C-1] INFO o.s.k.r.RetryTopicConfigurer - Received message in dlt listener: topic-1@38262

I'm running two instances of this app with different partitions (0 and 1), but the same group and topic. I understand that these logs appear because the message is received on the other partition, but how can I avoid them?

Consumer configuration:

 @Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigurations());
}
@Bean
public Map<String, Object> consumerConfigurations() {
    Map<String, Object> configurations = new HashMap<>();
    configurations.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBroker0);
    configurations.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    configurations.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    configurations.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    configurations.put("ssl.truststore.location", truststoreLocation);
    configurations.put("ssl.truststore.password", truststorePassword);
    configurations.put("security.protocol", "SSL");
    configurations.put("ssl.keystore.location", keystoreLocation);
    configurations.put("ssl.keystore.password", keyPassword);
    configurations.put("ssl.key.password", keyPassword);
    configurations.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    configurations.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    configurations.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

    return configurations;
}

@Bean
ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}

Producer configuration:

 @Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
            bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
            StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
            StringSerializer.class);
    props.put("security.protocol", "SSL");
    props.put("ssl.truststore.location", truststoreLocation);
    props.put("ssl.truststore.password", truststorePassword);
    props.put("ssl.keystore.location", keystoreLocation);
    props.put("ssl.keystore.password", keyPassword);
    props.put("ssl.key.password", keyPassword);
    return props;
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}
Kamalama

1 Answer

attempts="1" means send it directly to DLT after an initial delivery fails, so it makes no sense to have a back off (or a suffixing strategy).

Using @RetryableTopic is overkill for that use case; a simple dead letter publishing recoverer will suffice.
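For reference, here is a minimal sketch of that recoverer-based setup, assuming the consumerFactory() and kafkaTemplate() beans from the question. With the defaults, a failed record is published to a topic named <original topic>.DLT, and FixedBackOff(0L, 0L) disables in-memory retries so the record is forwarded on the first failure. (DeadLetterPublishingRecoverer and DefaultErrorHandler live in org.springframework.kafka.listener; FixedBackOff in org.springframework.util.backoff.)

@Bean
ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory,
        KafkaTemplate<String, String> kafkaTemplate) {

    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);

    // Publish failed records to <topic>.DLT (the default naming) and do not retry in memory:
    // FixedBackOff(0L, 0L) = zero delay, zero retry attempts.
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate);
    factory.setCommonErrorHandler(new DefaultErrorHandler(recoverer, new FixedBackOff(0L, 0L)));
    return factory;
}

With this in place no @RetryableTopic is needed on the listener, so the RetryTopicConfigurer (and its DLT listener logging) is never involved.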

That said, @RetryableTopic itself works as expected for me...

@SpringBootApplication
public class So75135382Application {

    public static void main(String[] args) {
        SpringApplication.run(So75135382Application.class, args);
    }

    @RetryableTopic(attempts = "1",
            topicSuffixingStrategy = TopicSuffixingStrategy.SUFFIX_WITH_INDEX_VALUE)
    @KafkaListener(id = "so75135382", topics = "so75135382")
    void listen(String in) {
        throw new RuntimeException("test");
    }

    @Bean
    NewTopic topic() {
        return TopicBuilder.name("so75135382").partitions(1).replicas(1).build();
    }

    @Bean
    ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            template.send("so75135382", "foo");
        };
    }

}
Received message in dlt listener: so75135382-dlt-0@3

If you can't figure it out, post a similar, complete, minimal example that exhibits the behavior, so we can see what's wrong.

Gary Russell
  • I changed my @RetryableTopic configuration to yours and I still get those logs: Received message in dlt listener: my_topic-1@106737. But it's not the DLT topic, it's my "regular" topic, and there should not be a log like this. Yes, if the message lands on the DLT topic I should get this log, but not when I merely receive a message on the other partition of the "regular" (non-DLT) topic. – Kamalama Jan 25 '23 at 03:21
  • It works fine when the DLT is actually used, but I also see this log when a message is received on the other partition of the "regular" topic, and I want to get rid of the log in that case. – Kamalama Jan 25 '23 at 11:06
  • Please read the final sentence in the answer. I need a complete example. – Gary Russell Jan 25 '23 at 13:11
  • Hello, I added the consumer config and producer config; the rest of the code is not something I can show, and I'm sure it is not connected to the problem. I found in the Spring Kafka code that there is a class LoggingDltListenerHandlerMethod, and I think it is somehow triggered in the wrong place with the wrong ConsumerRecord (my regular topic). – Kamalama Jan 26 '23 at 15:52
  • One more time: I need a small, complete example that exhibits the behavior that you are seeing; if I can't reproduce it, I can't help. That class is normally only used for the DLT; it is never used elsewhere; it is (at least normally) never used with any other topic, so something is wrong with the configuration to make that happen. – Gary Russell Jan 26 '23 at 16:13
  • I understand, but I cannot show the whole code because of my company policy. After a long search I found the reason: @KafkaListener(id = "ccn2_listener", topics = "test", groupId = "test", autoStartup = "${listen.auto.start:true}", topicPartitions = { @TopicPartition(topic = "ccn2-bam-raw-data", partitions = {"1"}) }). Everything works properly without the topicPartitions = { @TopicPartition(topic = "ccn2-bam-raw-data", partitions = {"1"}) } part, but unfortunately I need it because I want to choose which partition I listen to. I know it's hard, but maybe this info helps. – Kamalama Jan 31 '23 at 12:33
  • Do you have an idea what to do? – Kamalama Jan 31 '23 at 12:34
  • It still works as expected for me: `Received message in dlt listener: so75135382-dlt-1@0`. In future, don't put code in comments; as you can see, it's unreadable; edit the question instead and comment that you have done so. Re "my company policy": you don't need to share the whole project, just a minimal, complete project that shows the behavior you are seeing. – Gary Russell Jan 31 '23 at 14:17
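Tying the thread's conclusion back to the answer: the problem only appears when @RetryableTopic is combined with the manual topicPartitions assignment, so one way out is to drop @RetryableTopic, keep the explicit partition assignment, and let the DeadLetterPublishingRecoverer configured in the sketch above handle dead-lettering. This is a sketch only, reusing the topic and exception names from the question; mappingFailed() is a hypothetical stand-in for the question's real check.

@KafkaListener(id = "ccn2_listener", groupId = "test", autoStartup = "${listen.auto.start:true}",
        topicPartitions = @TopicPartition(topic = "ccn2-bam-raw-data", partitions = "1"))
public void listen(ConsumerRecord<String, String> consumerRecord, Acknowledgment acknowledgment) {
    // ... mapping work ...
    if (mappingFailed(consumerRecord)) { // hypothetical check standing in for the question's logic
        // The DefaultErrorHandler + DeadLetterPublishingRecoverer publish this record to
        // ccn2-bam-raw-data.DLT; no RetryTopicConfigurer DLT listener is created at all.
        throw new RodaTableMappingException("Problem with mapping Kafka record, sending it to the DLT topic");
    }
    acknowledgment.acknowledge();
}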