I'm trying to create an Apache Kafka connection to Azure Event Hubs with Reactor Kafka in a Spring Boot application. I first followed the official Azure tutorial to set up Azure Event Hubs and the Spring backend: https://learn.microsoft.com/en-us/azure/developer/java/spring-framework/configure-spring-cloud-stream-binder-java-app-kafka-azure-event-hub Everything worked fine, and I built some more advanced services on top of it.

However, I cannot get Reactor Kafka working with Azure Event Hubs. When the consumer is triggered, it does not consume any messages, and the following is logged:

com.test.Application                  : Started Application in 10.442 seconds (JVM running for 10.771)
o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=test-client-0, groupId=$Default] Discovered group coordinator mynamespacename.servicebus.windows.net:9093 (id: 2147483647 rack: null)
o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=test-client-0, groupId=$Default] (Re-)joining group
o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=test-client-0, groupId=$Default] Successfully joined group with generation Generation{generationId=30, memberId='mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e', protocol='range'}
o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=test-client-0, groupId=$Default] Finished assignment for group at generation 30: {mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e=Assignment(partitions=[my-event-hub-0])}
o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=test-client-0, groupId=$Default] Successfully synced group in generation Generation{generationId=30, memberId='mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e', protocol='range'}
o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=test-client-0, groupId=$Default] Notifying assignor about the new Assignment(partitions=[my-event-hub-0])
o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=test-client-0, groupId=$Default] Adding newly assigned partitions: my-event-hub-0
o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=test-client-0, groupId=$Default] Setting offset for partition my-event-hub-0 to the committed offset FetchPosition{offset=17, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[mynamespacename.servicebus.windows.net:9093 (id: 0 rack: null)], epoch=absent}}
o.s.k.l.KafkaMessageListenerContainer    : $Default: partitions assigned: [my-event-hub-0]
o.a.k.clients.consumer.ConsumerConfig    : ConsumerConfig values: 
        allow.auto.create.topics = true
        auto.commit.interval.ms = 5000
        auto.offset.reset = latest
        bootstrap.servers = [mynamespacename.servicebus.windows.net:9093]
        check.crcs = true
        client.dns.lookup = use_all_dns_ips
        client.id = test-client
        client.rack = 
        connections.max.idle.ms = 540000
        default.api.timeout.ms = 60000
        enable.auto.commit = false
        exclude.internal.topics = true
        fetch.max.bytes = 52428800
        fetch.max.wait.ms = 500
        fetch.min.bytes = 1
        group.id = $Default
        group.instance.id = null
        heartbeat.interval.ms = 3000
        interceptor.classes = []
        internal.leave.group.on.close = true
        internal.throw.on.fetch.stable.offset.unsupported = false
        isolation.level = read_uncommitted
        key.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
        max.partition.fetch.bytes = 1048576
        max.poll.interval.ms = 300000
        max.poll.records = 500
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
        receive.buffer.bytes = 65536
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 30000
        retry.backoff.ms = 100
        sasl.client.callback.handler.class = null
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism = GSSAPI
        security.protocol = PLAINTEXT
        security.providers = null
        send.buffer.bytes = 131072
        session.timeout.ms = 10000
        socket.connection.setup.timeout.max.ms = 127000
        socket.connection.setup.timeout.ms = 10000
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
        ssl.endpoint.identification.algorithm = https
        ssl.engine.factory.class = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.certificate.chain = null
        ssl.keystore.key = null
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLSv1.3
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.certificates = null
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer

o.a.kafka.common.utils.AppInfoParser     : Kafka version: 2.7.1
o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: 61dbce85d0d41457
o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1629919378494
r.k.r.internals.ConsumerEventLoop        : SubscribeEvent
o.a.k.clients.consumer.KafkaConsumer     : [Consumer clientId=test-client, groupId=$Default] Subscribed to topic(s): my-event-hub
org.apache.kafka.clients.NetworkClient   : [Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
org.apache.kafka.clients.NetworkClient   : [Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected

The following is the code:

import lombok.extern.slf4j.Slf4j;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.receiver.ReceiverRecord;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

@Slf4j
@Service
public class StreamConsumer {

    public Flux<Object> consumeMessages() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "mynamespacename.servicebus.windows.net:9093");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "test-client");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "$Default");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.springframework.kafka.support.serializer.JsonDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
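        // Note: no security.protocol / sasl.* properties are set here, so the
        // defaults shown in the ConsumerConfig dump above apply (PLAINTEXT, GSSAPI).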

        ReceiverOptions<String, Object> receiverOptions = ReceiverOptions.create(props);

        ReceiverOptions<String, Object> options = receiverOptions.subscription(Collections.singleton(KafkaConstants.KAFKA_TOPIC))
                .addAssignListener(partitions -> log.debug("onPartitionsAssigned {}", partitions))
                .addRevokeListener(partitions -> log.debug("onPartitionsRevoked {}", partitions));
        Flux<ReceiverRecord<String, Object>> kafkaFlux = KafkaReceiver.create(options).receive();
        return kafkaFlux.map(x -> "Test");
    }

}
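
For reference, KafkaConstants only holds the topic name used above; reconstructed from the logs, it looks like this:

public final class KafkaConstants {
    // value matches the topic shown in the logs above
    public static final String KAFKA_TOPIC = "my-event-hub";
}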

The Reactor code uses the same topic and group id ($Default) as the working setup. Since the client only logs a dropped bootstrap connection:

[Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected

I assume some configuration is missing to connect the consumer properly to Azure Event Hubs.
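
For illustration: the ConsumerConfig dump above shows security.protocol = PLAINTEXT, sasl.mechanism = GSSAPI and sasl.jaas.config = null, while the Event Hubs Kafka endpoint (per the Microsoft docs) expects SASL_SSL with the PLAIN mechanism and the namespace connection string as password. So I would expect to need additional entries in the same props map, along these lines (the connection string is a placeholder, not my real value):

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;

// added to the same props map as in consumeMessages() above
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
props.put(SaslConfigs.SASL_JAAS_CONFIG,
        "org.apache.kafka.common.security.plain.PlainLoginModule required"
        + " username=\"$ConnectionString\""
        + " password=\"<Event Hubs namespace connection string>\";");

Is this the right direction, or is something else required for Reactor Kafka specifically?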
