
I'm trying to use Spring Cloud Data Flow to bridge two Kafka clusters (essentially a fancy MirrorMaker instance) using the Bridge app. As covered in the docs, I've defined two binders. `kafka-qa1` should be the default, and `kafka-qa2` can be set as the output binder in the definition or deployment properties, e.g. `app.bridge.spring.cloud.stream.bindings.output.binder=kafka-qa2`.

My SCDF application.yaml contains both binders:

spring:
  cloud:
    dataflow:
      applicationProperties:
        stream:
          spring:
            cloud:
              stream:
                defaultBinder: kafka-qa1
                binders:
                  kafka-qa1:
                    type: kafka
                    environment:
                      spring:
                        brokers: qa-1.example.com:9093
                        zk-nodes: qa-1.example.com:2181
                  kafka-qa2:
                    type: kafka
                    environment:
                      spring:
                        brokers: qa-2.example.com:9093
                        zk-nodes: qa-2.example.com:2181

However, it seems to be ignoring the output binder. I've also kept the single-binder section in my config (below). If I remove it, the `defaultBinder` option doesn't seem to work and the app reverts to localhost.

kafka:
  binder:
    brokers: qa-1.example.com:9093

Any ideas or examples to point me to for connecting multiple Kafka clusters with the Bridge app?

1 Answer


It looks like the `environment` token is missing the `spring.cloud.stream.kafka.binder` prefix for `brokers` and `zk-nodes`. Please see below.

spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.binders.kafka-qa1.type=kafka
spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.binders.kafka-qa1.environment.spring.cloud.stream.kafka.binder.brokers=qa-1.example.com:9093
spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.binders.kafka-qa1.environment.spring.cloud.stream.kafka.binder.zkNodes=qa-1.example.com:2181
spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.binders.kafka-qa2.type=kafka
spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.binders.kafka-qa2.environment.spring.cloud.stream.kafka.binder.brokers=qa-2.example.com:9093
spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.binders.kafka-qa2.environment.spring.cloud.stream.kafka.binder.zkNodes=qa-2.example.com:2181
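For reference, a sketch of the same corrected configuration in the YAML form the question used (only the `environment` keys change: each binder's connection settings gain the full `cloud.stream.kafka.binder` nesting; host names and ports are the asker's):

```yaml
spring:
  cloud:
    dataflow:
      applicationProperties:
        stream:
          spring:
            cloud:
              stream:
                defaultBinder: kafka-qa1
                binders:
                  kafka-qa1:
                    type: kafka
                    environment:
                      spring:
                        cloud:
                          stream:
                            kafka:
                              binder:
                                brokers: qa-1.example.com:9093
                                zkNodes: qa-1.example.com:2181
                  kafka-qa2:
                    type: kafka
                    environment:
                      spring:
                        cloud:
                          stream:
                            kafka:
                              binder:
                                brokers: qa-2.example.com:9093
                                zkNodes: qa-2.example.com:2181
```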

Sabby Anandan
  • That got the default binder (qa1) working, but the output binder doesn't appear to be working yet. This is the stream definition I'm using; it is trying to output to qa1 (I get a topic-doesn't-exist error): `stream create bridge-test --definition ":myTopic1 > :myTopic2 --spring.cloud.stream.bindings.input.binder=kafka-qa1 --spring.cloud.stream.bindings.output.binder=kafka-qa2" --deploy` – Kevin Niemann Feb 08 '17 at 22:06
  • You'd still need input/output channels defined to be able to pin the topics to the respective binders. To do that, you'd have to use the `bridge-processor` - this is what we use internally to bridge named destinations with upstream or downstream apps. – Sabby Anandan Feb 09 '17 at 02:05
  • Your stream definition then becomes: `stream create bridge-test --definition ":myTopic1 > bridge > :myTopic2"` and when you deploy the stream, you'd pass the binder properties to the "bridge-processor" like: `stream deploy bridge-test --properties "app.bridge.spring.cloud.stream.bindings.input.binder=kafka-qa1,app.bridge.spring.cloud.stream.bindings.output.binder=kafka-qa2"` – Sabby Anandan Feb 09 '17 at 02:06
  • That works for me, but shouldn't it be possible to set the binder properties in the stream definition instead of specifying them at deploy time? It doesn't seem to be accepting the config and is using the default output binder. – Kevin Niemann Feb 13 '17 at 17:42