
I am using Reactive Kafka for consuming events. Problem: I pushed 7 events to the topic, but the consumer only consumed 5 of them. (This only happens when deployed on a server; it works fine in the local environment.) This has happened many times, and we have not been able to figure out the cause. I am a newbie to reactive programming, so please do suggest better code practices.

    // Assumed fields: receiverOptions holds the consumer configuration (built elsewhere),
    // kafkaReceivers holds the active subscriptions.
    private ReceiverOptions<String, String> receiverOptions;
    private final List<KafkaReceiver<String, String>> kafkaReceiverList = new ArrayList<>();
    private final List<Disposable> kafkaReceivers = new ArrayList<>();

    @PostConstruct
    public void createReceivers() {
        for (int i = 0; i < 5; i++) {
            kafkaReceiverList.add(KafkaReceiver.create(receiverOptions));
        }
    }

    @EventListener(ApplicationStartedEvent.class)
    public void startConsumers() {
        for (KafkaReceiver<String, String> receiver : kafkaReceiverList) {
            kafkaReceivers.add(receiver
                    .receive()
                    .log()
                    .bufferTimeout(500, Duration.ofMillis(10)) // max 500 records per batch; timeout unit assumed to be millis
                    .flatMap(this::processRecord) // input - List<ReceiverRecord<String, String>>
                    .flatMap(this::commitRecord)  // input - List<ReceiverRecord<String, String>>
                    .subscribe());
        }
    }

    public Flux<Void> commitRecord(List<ReceiverRecord<String, String>> records) {
        log.info(InfoMessageConstants.COMMIT_RECORD, records);
        // Each offset commit is subscribed independently and not awaited.
        records.forEach(record -> record.receiverOffset().commit().subscribe());
        return Flux.empty();
    }

    @PreDestroy
    public void shutDownConsumers() {
        kafkaReceivers.forEach(subscription -> {
            try {
                subscription.dispose();
            } catch (Exception ex) {
                log.error("Error closing consumer: ", ex);
            }
        });
    }

Why create a list of receivers?

To create consumers based on partitions, and to have more control over the number of consumers and the number of partitions separately.
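
For context, a minimal sketch of one way to do that with reactor-kafka, using manual partition assignment via `ReceiverOptions.assignment(...)`. The topic name, partition count, and `consumerProps` map are illustrative assumptions; `kafkaReceiverList` is the field from the code above:

    // Sketch only: "my-topic", the partition count of 5, and consumerProps are assumptions.
    private void createOneReceiverPerPartition(Map<String, Object> consumerProps) {
        ReceiverOptions<String, String> baseOptions = ReceiverOptions.create(consumerProps);
        for (int i = 0; i < 5; i++) {
            ReceiverOptions<String, String> options = baseOptions
                    .assignment(Collections.singleton(new TopicPartition("my-topic", i)));
            kafkaReceiverList.add(KafkaReceiver.create(options));
        }
    }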

Is it reproducible in the local environment?

No

I am looking for the reason why some events are lost when I start the service with this consumer.
Steps to reproduce on a server:
1. Stop the consumer/service.
2. Push events to the topic.
3. Start the consumer.
  • The code in the `@PreDestroy` method should be moved to a `Lifecycle.stop()` method - with pre-destroy, you don't know what other beans might have been destroyed before this one. `Lifecycle.stop()` is called before any beans are destroyed. If that doesn't help, post an [MCRE](https://stackoverflow.com/help/minimal-reproducible-example) someplace, and provide DEBUG logs showing the problem. – Gary Russell May 15 '23 at 18:27
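
A minimal sketch of that suggestion, assuming Spring 5.1+ (`SmartLifecycle` extends `Lifecycle`, is auto-started by the container, and provides default methods) and the field names from the code above; the class name is hypothetical and the receive pipeline itself is unchanged:

    import java.util.ArrayList;
    import java.util.List;

    import org.springframework.context.SmartLifecycle;
    import org.springframework.stereotype.Component;

    import reactor.core.Disposable;
    import reactor.kafka.receiver.KafkaReceiver;

    @Component
    public class ReactiveKafkaConsumer implements SmartLifecycle { // hypothetical class name

        private final List<KafkaReceiver<String, String>> kafkaReceiverList = new ArrayList<>();
        private final List<Disposable> kafkaReceivers = new ArrayList<>();
        private volatile boolean running;

        @Override
        public void start() {
            // Subscribe the receivers here instead of in the ApplicationStartedEvent listener:
            // receiver.receive().log().bufferTimeout(...)...subscribe()
            this.running = true;
        }

        @Override
        public void stop() {
            // Runs before any beans are destroyed, unlike @PreDestroy.
            kafkaReceivers.forEach(Disposable::dispose);
            this.running = false;
        }

        @Override
        public boolean isRunning() {
            return running;
        }
    }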
