I'd like to move spring.kafka.streams.* under spring.cloud.stream. Is this possible? I tried a streams-properties section, analogous to consumer-properties and producer-properties, but it doesn't work.

spring:
  cloud:
    config:
      override-system-properties: false
      server:
        health:
          enabled: false
    stream:
      bindings:
        input_technischerplatz:
          destination: technischerplatz
        output_technischerplatz:
          destination: technischerplatz
      default:
        group: '${spring.application.name}'
        consumer:
          max-attempts: 5
      kafka:
        binder:
          auto-add-partitions: false
          auto-create-topics: false
          brokers: '${values.spring.kafka.bootstrap-servers}'
          configuration:
            header.mode: headers
          consumer-properties:
            allow.auto.create.topics: false
            auto.offset.reset: '${values.spring.kafka.consumer.auto-offset-reset}'
            enable.auto.commit: false
            isolation.level: read_committed
            max.poll.interval.ms: 300000
            max.poll.records: 100
            session.timeout.ms: 300000
          header-mapper-bean-name: defaultKafkaHeaderMapper
          producer-properties:
            acks: all
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
            max.in.flight.requests.per.connection: 1
            max.block.ms: '${values.spring.kafka.producer.max-block-ms}'
            retries: 10
          required-acks: -1
  kafka:
    streams:
      applicationId: '${spring.application.name}_streams'
      properties:
        default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
        default.timestamp.extractor: org.apache.kafka.streams.processor.LogAndSkipOnInvalidTimestamp
        state.dir: '${values.spring.kafka.streams.properties.state.dir}'
Andras Hatvani

1 Answer

You can set the Kafka Streams properties under spring.cloud.stream in the following manner:

spring.cloud.stream.kafka.streams.binder.applicationId: my-application-id
spring.cloud.stream.kafka.streams.binder.configuration:
  default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
  default.value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
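
Applied to the configuration in the question, the spring.kafka.streams block would map to something like the sketch below (the '${values...}' placeholders and the '${spring.application.name}' reference are carried over from the question and assumed to resolve in your environment):

spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            # replaces spring.kafka.streams.applicationId
            applicationId: '${spring.application.name}_streams'
            # replaces spring.kafka.streams.properties.*
            configuration:
              default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
              default.timestamp.extractor: org.apache.kafka.streams.processor.LogAndSkipOnInvalidTimestamp
              state.dir: '${values.spring.kafka.streams.properties.state.dir}'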

For more details, you can refer to the documentation:

https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-kafka/3.0.0.M3/reference/html/spring-cloud-stream-binder-kafka.html#_kafka_streams_binder

Nishu Tayal
  • Then I get the following error: `org.apache.kafka.common.errors.InconsistentGroupProtocolException: The group member's supported protocols are incompatible with those of existing members or first group member tried to join with empty protocol type or empty protocol list.` – Andras Hatvani Apr 27 '20 at 07:30
  • It seems like you are using the same group.id for Streams and Consumer. All consumers which belong to the same group must have one common strategy declared. If a consumer attempts to join a group with an assignment configuration inconsistent with other group members, you will end up with this exception. – Nishu Tayal Apr 27 '20 at 07:40
  • Even if I remove spring.cloud.stream.default.group the error still persists. What do you suggest to solve the problem? – Andras Hatvani Apr 27 '20 at 07:51
  • Looks like it could be a config issue. Make sure that you set the `applicationId` under `spring.cloud.stream.kafka.streams.binder.applicationId`. It's not clear from your config that they are aligned. The ref docs suggest a few ways to set the application id if you have multiple Kafka Streams processors. – sobychacko Apr 27 '20 at 14:14
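
Regarding the last comment: the referenced docs also describe setting the application id per processor when an application has more than one Kafka Streams processor. A rough sketch of that form, assuming a function named process (the function name and the id value are hypothetical and must match your own setup):

spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            functions:
              process:
                # per-function application id, overriding the binder-wide applicationId
                applicationId: '${spring.application.name}_process'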