
We are using Alpakka Kafka streams for consuming events from Kafka. Here is how the stream is defined:

ConsumerSettings<GenericKafkaKey, GenericKafkaMessage> consumerSettings = 
    ConsumerSettings
        .create(actorSystem, new KafkaJacksonSerializer<>(GenericKafkaKey.class), 
                new KafkaJacksonSerializer<>(GenericKafkaMessage.class))
        .withBootstrapServers(servers).withGroupId(groupId)
        .withClientId(clientId).withProperties(clientConfigs.defaultConsumerConfig());
CommitterSettings committerSettings = CommitterSettings.create(actorSystem)
        .withMaxBatch(20)
        .withMaxInterval(Duration.ofSeconds(30));
Consumer.DrainingControl<Done> control = 
    Consumer.committableSource(consumerSettings, Subscriptions.topics(topics))
        .mapAsync(props.getMessageParallelism(), msg ->
                CompletableFuture.supplyAsync(() -> consumeMessage(msg), actorSystem.dispatcher())
                        .thenCompose(param -> CompletableFuture.supplyAsync(() -> msg.committableOffset())))
        .toMat(Committer.sink(committerSettings), Keep.both())
        .mapMaterializedValue(Consumer::createDrainingControl)
        .run(materializer);

Here is the piece of code that is shutting down the stream:

CompletionStage<Done> completionStage = control.drainAndShutdown(actorSystem.dispatcher());
completionStage.toCompletableFuture().join();

I tried doing a get on the completable future too, but neither join nor get returns. Has anyone else faced a similar problem? Is there something I am doing wrong here?

Prasanth

2 Answers


If you want to control stream termination from outside the stream, you need to use a KillSwitch: https://doc.akka.io/docs/akka/current/stream/stream-dynamic.html
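For illustration, a minimal sketch of that approach; the Source and Sink here are placeholders rather than the Kafka stream from the question, and materializer is assumed to be the one already in scope:

import akka.Done;
import akka.japi.Pair;
import akka.stream.KillSwitches;
import akka.stream.UniqueKillSwitch;
import akka.stream.javadsl.Keep;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import java.time.Duration;
import java.util.concurrent.CompletionStage;

// Materialize a UniqueKillSwitch alongside the stream's completion stage.
Pair<UniqueKillSwitch, CompletionStage<Done>> pair =
    Source.tick(Duration.ofSeconds(1), Duration.ofSeconds(1), "tick")
        .viaMat(KillSwitches.single(), Keep.right())
        .toMat(Sink.foreach(System.out::println), Keep.both())
        .run(materializer);

// Later, from outside the stream:
pair.first().shutdown();                     // completes the stream
pair.second().toCompletableFuture().join();  // returns once the stream has finished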

Igmar Palsenberg
  • I am using the Alpakka connector for Kafka, which gives an instance of DrainingControl for initiating the shutdown, so I have not used a KillSwitch explicitly. https://doc.akka.io/docs/alpakka-kafka/current/consumer.html – Prasanth May 25 '20 at 12:11

Your usage looks correct and I can't identify anything that would hinder draining.

A common thing to miss with Alpakka Kafka consumers is the stop-timeout, which defaults to 30 seconds. When using the DrainingControl you can safely set it to 0 seconds.

See https://doc.akka.io/docs/alpakka-kafka/current/consumer.html#draining-control
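
As a sketch, this is what it would look like applied to the consumerSettings from the question, using withStopTimeout as shown in the linked docs:

// Same settings as in the question, plus a zero stop-timeout; with a
// DrainingControl, draining takes care of committing in-flight messages,
// so the consumer does not need the default 30 second stop-timeout.
ConsumerSettings<GenericKafkaKey, GenericKafkaMessage> consumerSettings =
    ConsumerSettings
        .create(actorSystem, new KafkaJacksonSerializer<>(GenericKafkaKey.class),
                new KafkaJacksonSerializer<>(GenericKafkaMessage.class))
        .withBootstrapServers(servers)
        .withGroupId(groupId)
        .withClientId(clientId)
        .withProperties(clientConfigs.defaultConsumerConfig())
        .withStopTimeout(Duration.ZERO);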

Enno
  • A few questions here: 1. Even after 30 seconds, I have seen that it is not ending. 2. Is the stop-timeout used anywhere apart from the DrainingControl? – Prasanth Jun 01 '20 at 11:04
  • The `stop-timeout` delays the shutdown of Alpakka Kafka consumers that expect commits. This is done to make sure commits for already enqueued messages can be sent to the broker once the consumer is shutting down. – Enno Jun 01 '20 at 15:06