
I have an application using Spring Cloud Stream Kafka. For user-defined topics, I can delete records from specified topics by applying the configuration below. But this configuration doesn't work for DLQ topics.

For example, in the configuration below I set the retention properties at the binder level. My producer topic (student-topic), defined under the bindings level, is configured correctly: I can verify that records are deleted once the topic log exceeds the specified retention bytes (300000000).

But the binder-level retention settings don't work for the DLQ topic (person-topic-error-dlq). Is there a different configuration for cleaning records from DLQ topics, other than retention time?

How can I do this?

spring:
  cloud:
    stream:
      kafka:
        bindings:
          person-topic-in:
            consumer:
              enableDlq: true
              dlqName: person-topic-error-dlq
      binders:
        defaultKafka:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    default:
                      producer:
                        topic:
                          properties:
                            retention.bytes: 300000000
                            segment.bytes: 300000000
                    binder:
                      brokers: localhost:19092
      bindings:
        person-topic-in:
          binder: defaultKafka
          destination: person-topic
          contentType: application/json
          group: person-topic-group
        student-topic-out:
          binder: defaultKafka
          destination: student-topic
          contentType: application/json
omerstack

1 Answer


You are only setting the (default) properties for producer bindings.

That said, this still doesn't work for me:

      binders:
        defaultKafka:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    default:
                      producer:
                        topic:
                          properties:
                            retention.bytes: 300000000
                            segment.bytes: 300000000
                      consumer:
                        topic:
                          properties:
                            retention.bytes: 300000000
                            segment.bytes: 300000000

(the properties are not applied even to the primary topic).

It looks like there is a problem with default Kafka consumer binding properties.

This works for me; the properties are applied to both the primary and dead letter topics:

spring:
  cloud:
    stream:
      kafka:
        bindings:
          person-topic-in:
            consumer:
              enableDlq: true
              dlqName: person-topic-error-dlq
              topic:
                properties:
                  retention.bytes: 300000000
                  segment.bytes: 300000000
Gary Russell
  • So is there any way to apply this config to all topics? E.g., can we pass this config to all topics with a single setting at the binder level? I don't want to configure each topic individually; I want to apply it to all topics in one place. – omerstack Mar 12 '21 at 19:40
  • 1
    I don't know if it's by design or a bug (I would suggest the latter), but it looks like the defaults are only applied if there are no concrete properties for the binding - if you move the dlq properties to the defaults as well (removing all references to the binding-specific kafka binding properties), it works as well (but that doesn't help unless you have no bindings that need a DLQ). The code is in `spring-cloud-stream` - I suggest you open a GitHub issue there with a reference to this question. – Gary Russell Mar 12 '21 at 21:11
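For reference, the defaults-only variant described in Gary's comment would look roughly like the sketch below. This is an illustration, not a verified configuration: `enableDlq` is moved to the default Kafka consumer binding properties (`spring.cloud.stream.kafka.default.consumer`), and no binding-specific `dlqName` is set, so each DLQ falls back to the binder's generated name (typically `error.<destination>.<group>`).

```yaml
spring:
  cloud:
    stream:
      kafka:
        # Default Kafka binding properties, applied when no
        # binding-specific kafka properties are declared.
        default:
          consumer:
            enableDlq: true
            topic:
              properties:
                retention.bytes: 300000000
                segment.bytes: 300000000
```

As noted in the comment, this only helps if none of your bindings need binding-specific Kafka consumer properties, since declaring any appears to prevent the defaults from being applied.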