
We have a scenario where we want to consume data from Kafka topics on cluster #1, but create the KTable's internal topics (repartition and changelog) on cluster #2.

Channel binding -

spring.cloud.stream.bindings.member.destination: member
spring.cloud.stream.bindings.member.consumer.useNativeDecoding: true
spring.cloud.stream.bindings.member.consumer.headerMode: raw
spring.cloud.stream.kafka.streams.bindings.member.consumer.keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.bindings.member.consumer.valueSerde: io.confluent.kafka.streams.serdes.avro.GenericAvroSerde
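
Note: since GenericAvroSerde needs a Schema Registry, we also pass its URL through the binder configuration (the registry address below is a placeholder) -

spring.cloud.stream.kafka.streams.binder.configuration.schema.registry.url: http://localhost:8081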

Create KTable -

protected KTable<String, GenericRecord> createKTable(String field, KStream<String, GenericRecord> stream, String stateStore) {
    return stream
            // re-key every record to the constant key 'field'
            .map((s, genericRecord) -> KeyValue.pair(field, genericRecord))
            // the new key forces a repartition topic: <appId>-<stateStore>-repartition
            .groupByKey()
            // keep the latest value per key; backed by <appId>-<stateStore>-changelog
            .reduce((oldVal, newVal) -> newVal, Materialized.as(stateStore));
}
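
For context, a minimal sketch of how this method is wired up with the binder (the MemberBinding interface, the MemberAggregator class, and the store name are hypothetical; it assumes the annotation-based programming model of that era) -

import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;

// hypothetical binding interface for the "member" channel configured above
interface MemberBinding {
    @Input("member")
    KStream<String, GenericRecord> member();
}

@EnableBinding(MemberBinding.class)
public class MemberAggregator {

    @StreamListener("member")
    public void process(KStream<String, GenericRecord> memberStream) {
        // materializes "member-store", producing the repartition and changelog topics
        createKTable("member", memberStream, "member-store");
    }

    // createKTable(...) as defined above
}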

So the member topic is on cluster #1, but we want the KTable topics below to be created on a different cluster, and we are not sure how to use two different Kafka binders in this case (the standard multi-binder syntax is sketched after the topic list) -

application-member-store-repartition
application-member-store-changelog
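
The standard multi-binder syntax looks roughly like this (binder names and broker addresses are made up), but it is not clear that the Streams-internal repartition/changelog topics would follow the binding's binder -

spring.cloud.stream.binders.cluster1.type: kstream
spring.cloud.stream.binders.cluster1.environment.spring.cloud.stream.kafka.streams.binder.brokers: broker1:9092
spring.cloud.stream.binders.cluster2.type: kstream
spring.cloud.stream.binders.cluster2.environment.spring.cloud.stream.kafka.streams.binder.brokers: broker2:9092
spring.cloud.stream.bindings.member.binder: cluster1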
  • Have you tried to do that without Spring Cloud Stream, using Kafka Streams directly? I think the mechanism is going to be the same in Spring Cloud Stream as well. If Kafka Streams allows that, the binder could delegate it somehow. – sobychacko Jan 12 '19 at 01:50

1 Answer

A single Kafka Streams application can connect to only one cluster. According to the answer linked below, you can create two different KafkaStreams instances, but they will effectively be two separate applications.

More details can be found in Kafka Streams - connecting to multiple clusters
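
A minimal sketch of what "two different instances" means in plain Kafka Streams (application ids, broker addresses, and topologies below are hypothetical, and the sketch assumes the member topic is replicated between clusters out of band, e.g. with MirrorMaker) -

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Materialized;

public class TwoClustersSketch {

    public static void main(String[] args) {
        // application #1: tied to cluster #1 only
        StreamsBuilder builder1 = new StreamsBuilder();
        builder1.stream("member").foreach((k, v) -> { /* consume on cluster #1 */ });
        Properties props1 = new Properties();
        props1.put(StreamsConfig.APPLICATION_ID_CONFIG, "member-consumer");
        props1.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster1-broker:9092");
        KafkaStreams app1 = new KafkaStreams(builder1.build(), props1);

        // application #2: tied to cluster #2 only; its repartition and
        // changelog topics are created there
        StreamsBuilder builder2 = new StreamsBuilder();
        builder2.table("member", Materialized.as("member-store"));
        Properties props2 = new Properties();
        props2.put(StreamsConfig.APPLICATION_ID_CONFIG, "member-table-builder");
        props2.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster2-broker:9092");
        KafkaStreams app2 = new KafkaStreams(builder2.build(), props2);

        app1.start();
        app2.start();
    }
}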

– Bartosz Wardziński