I have multiple APIs that talk to each other through Kafka (they produce and consume messages). In one of the APIs I produce messages based on an HTTP request trigger (when an endpoint is called, a message is produced and sent to Kafka) using the @Output and @EnableBinding annotations. These messages are consumed by other APIs that subscribe to this topic.
Now I am trying to migrate to the new Spring Cloud Stream functional programming model, and from the documentation I concluded that StreamBridge with external source data is the approach needed for my case (https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_sending_arbitrary_data_to_an_output_e_g_foreign_event_driven_sources). However, I did not understand how the source, bindingName and destination topic should be named and configured when no source function is defined. I have the following configuration, which successfully produces messages on "myUserTopic", but I noticed some strange logs when the application starts, as the binding does not seem to be established properly:
application.properties:
spring.kafka.bootstrap-servers=listOfServers for kafka
spring.kafka.producer.value-serializer=MyCustomKafkaPayloadAvroSerializer
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.properties.schema.registry.url=listOfServers for schema registry
spring.cloud.stream.bindings.user.destination=myUserTopic
spring.cloud.stream.bindings.user.producer.useNativeEncoding=true
spring.cloud.stream.bindings.user.producer.partitionCount=1
spring.cloud.stream.bindings.user.producer.partitionKeyExpression=headers['partitionId']
spring.cloud.stream.kafka.binder.autoCreateTopics=false
spring.cloud.stream.kafka.binder.configuration.security.protocol=SSL
spring.cloud.stream.kafka.binder.configuration.ssl.truststore.location=
spring.cloud.stream.kafka.binder.configuration.ssl.truststore.type=
spring.cloud.stream.kafka.binder.configuration.ssl.keystore.location=
spring.cloud.stream.kafka.binder.configuration.ssl.keystore.type=
Furthermore, the code for StreamBridge is:
@Component
@RequiredArgsConstructor
@Slf4j
public class EventPublisher {

    private final StreamBridge streamBridge;

    private static final String USER = "user";

    public void sendToChannel(String message) {
        log.info("sendToChannel - sending hello event to channel");
        try {
            if (streamBridge.send(USER, buildChannelMessage(message))) {
                log.info("sendToChannel - message was successfully sent");
            } else {
                log.error("sendToChannel - failed to send message");
            }
        } catch (Exception e) {
            log.error("sendToChannel - error while sending message on output binding {}", USER, e);
        }
    }

    private Message<HelloEventAvro> buildChannelMessage(String message) {
        HelloEventAvro helloEventAvro = HelloEventAvro.newBuilder()
                .setHelloMessage(message)
                .build();
        long timestamp = Instant.now().toEpochMilli();
        return MessageBuilder.withPayload(helloEventAvro)
                .setHeader("partitionId", 1)
                .setHeader("X-Timestamp", timestamp)
                .build();
    }
}
and among the dependencies used:
- spring-boot 2.6.2
- kafka-clients 2.8.1
- spring-cloud-stream 3.2.1
- spring-cloud-stream-binder-kafka 3.2.1
- spring-integration-kafka 5.5.8
My questions are:
- is the binding of the producer to myUserTopic correct, or do I need to add the property spring.cloud.stream.source=user?
- is the "user" bindingName correct, or should it follow the "user-out-0" naming convention, considering that I don't have a Supplier bean configured?
- When the first message is produced after the application starts, I can see the following logs:
Using kafka topic for outbound: myUserTopic (which is correct)
Caching the binder: kafka
Retrieving cached binder: kafka
.....
Channel 'unknown.channel.name' has 1 subscriber(s). (which is strange)
For the subsequent messages produced, the 'unknown.channel.name' log does not appear again.
I don't understand why the channel name is "unknown" instead of the output bindingName "user" provided in the application.properties configuration. Can you guide me to understand whether there is any misconfiguration on my side? All the spring-cloud-stream examples from the documentation and GitHub use StreamBridge with either dynamic destinations or a SupplierConfiguration.
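For completeness, the convention-based alternative I considered (but have not verified) would look roughly like this; as far as I understand, declaring a source should create an output binding named "user-out-0":

```properties
# Untested assumption: declaring a source creates an output binding "user-out-0"
spring.cloud.stream.source=user
spring.cloud.stream.bindings.user-out-0.destination=myUserTopic
spring.cloud.stream.bindings.user-out-0.producer.useNativeEncoding=true
```

and the send call would then presumably have to target the generated binding, i.e. streamBridge.send("user-out-0", ...). Is that the intended setup for my case?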
Edit:
I have a further question regarding testing of the above configuration. I tried to write a test for this particular use case of StreamBridge by following the examples from here (https://github.com/spring-cloud/spring-cloud-stream/blob/d8ed65a249ed4364b96d68f4f56b3d5b4de996a4/spring-cloud-stream/src/test/java/org/springframework/cloud/stream/function/StreamBridgeTests.java#L716) and adapted it for my application.properties config above. For some reason, the message received from the OutputDestination is null in this test:
public class StreamBridgeTests {

    @Test
    public void testSendingMessageToDestination() {
        try (ConfigurableApplicationContext context = new SpringApplicationBuilder(
                TestChannelBinderConfiguration.getCompleteConfiguration(Application.class))
                .web(WebApplicationType.NONE).run()) {
            HelloEventAvro helloEventAvro = buildHelloEventAvro();
            Message<HelloEventAvro> helloEventAvroMessage = MessageBuilder
                    .withPayload(helloEventAvro)
                    .setHeader(CustomKafkaHeaders.PARTITION_ID.value(), helloEventAvro.getId())
                    .build();

            StreamBridge bridge = context.getBean(StreamBridge.class);
            bridge.send("user", helloEventAvroMessage);

            OutputDestination outputDestination = context.getBean(OutputDestination.class);
            Message<byte[]> message = outputDestination.receive(100, "user");
            assertThat(new String(message.getPayload())).contains("hello");
        }
    }
}
I dug deeper with the debugger, and it seems that the OutputDestination is created with one channel (myUserTopic.destination) and 2 message queues instead of 1, listed below:
- user.destination with size 0 (where "user" = my bindingName)
- myUserTopic.destination with size 1 and the helloEventAvroMessage produced
If I change the bindingName from "user" to the Kafka topic name "myUserTopic" in the receive() method of OutputDestination, it works as expected: one channel (myUserTopic.destination) and one message queue (myUserTopic.destination) are created:
public class StreamBridgeTests {

    @Test
    public void testSendingMessageToDestination() {
        try (ConfigurableApplicationContext context = new SpringApplicationBuilder(
                TestChannelBinderConfiguration.getCompleteConfiguration(Application.class))
                .web(WebApplicationType.NONE).run()) {
            HelloEventAvro helloEventAvro = buildHelloEventAvro();
            Message<HelloEventAvro> helloEventAvroMessage = MessageBuilder
                    .withPayload(helloEventAvro)
                    .setHeader(CustomKafkaHeaders.PARTITION_ID.value(), helloEventAvro.getId())
                    .build();

            StreamBridge bridge = context.getBean(StreamBridge.class);
            bridge.send("user", helloEventAvroMessage);

            OutputDestination outputDestination = context.getBean(OutputDestination.class);
            Message<byte[]> message = outputDestination.receive(100, "myUserTopic");
            assertThat(new String(message.getPayload())).contains("hello");
        }
    }
}
Considering the above, I still don't understand how the bindingName passed to StreamBridge's send() method correlates with the name passed to OutputDestination's receive() method (shouldn't the same name work in both places if I have only one topic?), and how it is resolved to the configured Kafka topic destination name.
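To illustrate, this is my current mental model of the name resolution, based on what I observed in the debugger (and which may well be wrong):

```properties
# send() takes the binding name, which resolves to the destination via the binding config:
#   streamBridge.send("user", ...)                             -> binding "user"
#   spring.cloud.stream.bindings.user.destination=myUserTopic  -> destination "myUserTopic"
# receive() on the test binder, however, appears to be keyed by destination, not binding:
#   outputDestination.receive(100, "myUserTopic")              -> queue "myUserTopic.destination"
```

Is this destination-based lookup the expected behavior of the test binder's OutputDestination, or a side effect of my configuration?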