
I'm developing an app which uses Reactive Messaging with Quarkus, with Kafka as the connector between my app and another app. I'd like to avoid retrieving all the messages from a channel when the app starts, and only retrieve new ones. Is it possible to configure this in application.properties?

This is my consumer configuration:

    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [localhost:9092]
    check.crcs = true
    client.dns.lookup = use_all_dns_ips
    client.id = kafka-consumer-list-temperature
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = smart-home
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    internal.throw.on.fetch.stable.offset.unsupported = false
    isolation.level = read_uncommitted
    key.deserializer = class io.smallrye.reactive.messaging.kafka.fault.DeserializerWrapper
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 10000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    socket.connection.setup.timeout.max.ms = 127000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class io.smallrye.reactive.messaging.kafka.fault.DeserializerWrapper

This is my consumer rebalance listener:

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Named;

    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
    import org.apache.kafka.common.TopicPartition;

    import io.smallrye.reactive.messaging.kafka.KafkaConsumerRebalanceListener;

    @ApplicationScoped
    @Named("rebalanced-example.rebalancer")
    public class KafkaRebalancedConsumerRebalanceListener implements KafkaConsumerRebalanceListener {

        /**
         * When receiving a list of partitions, searches for the earliest offset
         * within the last 10 minutes and seeks the consumer to it.
         *
         * @param consumer   underlying consumer
         * @param partitions set of assigned topic partitions
         */
        @Override
        public void onPartitionsAssigned(Consumer<?, ?> consumer,
                                         Collection<TopicPartition> partitions) {
            long now = System.currentTimeMillis();
            long shouldStartAt = now - 600_000L; // 10 minutes ago

            Map<TopicPartition, Long> request = new HashMap<>();
            for (TopicPartition partition : partitions) {
                System.out.println("Assigned partition: " + partition);
                request.put(partition, shouldStartAt);
            }
            Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(request);
            for (Map.Entry<TopicPartition, OffsetAndTimestamp> position : offsets.entrySet()) {
                // offsetsForTimes returns null for a partition when no record
                // is newer than the requested timestamp
                long target = position.getValue() == null ? 0L : position.getValue().offset();
                System.out.println("Seeking " + position.getKey() + " to offset " + target);
                consumer.seek(position.getKey(), target);
            }
        }
    }

In the output, the assigned partition and the seek position both show MOISTURE-0, and the target offset it gives me is 0. Thanks.

1 Answer


Set the `auto.offset.reset` configuration property to `latest`: https://kafka.apache.org/documentation/#consumerconfigs_auto.offset.reset

To configure it for a specific MicroProfile Reactive Messaging incoming channel, use:

    mp.messaging.incoming.<channel>.auto.offset.reset=latest
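
For example, assuming the incoming channel is named `list-temperature` (inferred from the `client.id` in the configuration above; substitute your actual channel name) and the topic is `MOISTURE`, the relevant `application.properties` entries might look like:

    mp.messaging.incoming.list-temperature.connector=smallrye-kafka
    mp.messaging.incoming.list-temperature.topic=MOISTURE
    mp.messaging.incoming.list-temperature.auto.offset.reset=latest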
– Ladicek
  • I've tried this and I'm still retrieving all the messages from the topic; I'll edit my post with the consumer configuration. Thanks btw! – Guillermo Fuentes Jun 15 '21 at 17:17
  • auto.offset.reset only applies when a consumer group is created. You are using group.id = smart-home; if the consumer finds the existing group and simply joins it, auto.offset.reset won't kick in. Hope that helps you understand this better – Ran Lupovich Jun 15 '21 at 21:12
  • Mmm, I understand it now, so that's not a solution for my problem hehe. Would it be good practice to create a new group each time the app starts? For example, adding a timestamp or something like that. – Guillermo Fuentes Jun 15 '21 at 22:03
  • While this approach would work, I'd advise against it for a production solution, as it might grow really fast, creating many empty groups for the cleanup process to handle later on. I'd recommend reading about the Spring Kafka implementation of the seekToEnd option – Ran Lupovich Jun 15 '21 at 23:07
  • Ah right, if you're part of an existing consumer group, indeed `auto.offset.reset` won't work. What you can do is implement a rebalance listener, and when you're assigned partitions from the topic you're interested in, you can seek to end on those partitions. See https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/3.4/kafka/kafka.html#kafka-consumer-rebalance-listener for an example that does something very similar. – Ladicek Jun 16 '21 at 06:44
  • I've tried implementing an approach close to the documentation, but now I'm retrieving all the messages in Kafka! Even the older ones... I've edited my question with the rebalance listener – Guillermo Fuentes Jun 17 '21 at 16:38
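
Following the seek-to-end suggestion in the comments above, the `offsetsForTimes`/`seek` logic in the question's `onPartitionsAssigned` would be replaced by a single `seekToEnd` call. A minimal sketch of that logic against the plain Kafka `Consumer` API (the class and method names here are illustrative, not part of SmallRye or Kafka):

```java
import java.util.Collection;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;

public class SeekToEndExample {

    /**
     * Seeks every newly assigned partition to its end offset, so the
     * consumer only sees records produced after this point.
     */
    public static void seekToEnd(Consumer<?, ?> consumer,
                                 Collection<TopicPartition> partitions) {
        consumer.seekToEnd(partitions);
        // seekToEnd is evaluated lazily; calling position() forces the
        // seek to actually happen before the next poll
        for (TopicPartition partition : partitions) {
            consumer.position(partition);
        }
    }
}
```

In the SmallRye rebalance listener, this would become the body of `onPartitionsAssigned`, replacing the timestamp-based seek shown in the question.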