
I'm using the Camel Kafka component and I'm unclear about what is happening under the hood with committing the offsets. As can be seen below, I'm aggregating records, and for my use case it only makes sense to commit the offsets after the aggregated records have been saved to SFTP.

Is it possible to manually control when the commit is performed?

private static class MyRouteBuilder extends RouteBuilder {

    @Override
    public void configure() throws Exception {

        from("kafka:{{mh.topic}}?" + getKafkaConfigString())
        .unmarshal().string()
        .aggregate(constant(true), new MyAggregationStrategy())
            .completionSize(1000)
            .completionTimeout(1000)
        .setHeader("CamelFileName").constant("transactions-" + (new Date()).getTime())
        .to("sftp://" + getSftpConfigString())

        // how to commit offset only after saving messages to SFTP?

        ;
    }

    private final class MyAggregationStrategy implements AggregationStrategy {
        @Override
        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            if (oldExchange == null) {
                return newExchange;
            }
            String oldBody = oldExchange.getIn().getBody(String.class); 
            String newBody = newExchange.getIn().getBody(String.class);
            String body = oldBody + newBody;
            oldExchange.getIn().setBody(body);
            return oldExchange;
        }
    }
}

private static String getSftpConfigString() {
    return "{{sftp.hostname}}/{{sftp.dir}}?"
            + "username={{sftp.username}}"
            + "&password={{sftp.password}}"
            + "&tempPrefix=.temp."
            + "&fileExist=Append"
            ;
}

private static String getKafkaConfigString() {
    return "brokers={{mh.brokers}}"
            + "&saslMechanism={{mh.saslMechanism}}"
            + "&securityProtocol={{mh.securityProtocol}}"
            + "&sslProtocol={{mh.sslProtocol}}"
            + "&sslEnabledProtocols={{mh.sslEnabledProtocols}}"
            + "&sslEndpointAlgorithm={{mh.sslEndpointAlgorithm}}"
            + "&saslJaasConfig={{mh.saslJaasConfig}}"
            + "&groupId={{mh.groupId}}"
            ;
}
Mike Barlotta
Chris Snow

3 Answers


No, you cannot. Kafka performs an auto commit in the background every X seconds (this interval is configurable).

There is no manual commit support in camel-kafka. It would also not be possible here, as the aggregator is decoupled from the Kafka consumer, and it's the consumer that performs the commit.
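The auto-commit interval mentioned above is set through the `autoCommitIntervalMs` endpoint option on the camel-kafka URI (default 5000 ms). A minimal sketch of such a URI; the topic, broker, and group names are placeholder assumptions, not taken from the question:

```java
// Sketch of a camel-kafka endpoint URI with an explicit auto-commit interval.
// Topic, broker, and group id below are placeholders.
public class KafkaAutoCommitUri {

    static String endpoint() {
        return "kafka:my-topic"
                + "?brokers=localhost:9092"
                + "&groupId=my-group"
                + "&autoCommitEnable=true"       // the default: commit offsets in the background
                + "&autoCommitIntervalMs=1000";  // commit every 1 s instead of the 5 s default
    }

    public static void main(String[] args) {
        System.out.println(endpoint());
    }
}
```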

Claus Ibsen

I think this changed in the latest version of Camel (2.22.0, see the docs): you should now be able to do this.

// Endpoint configuration: &autoCommitEnable=false&allowManualCommit=true
public void process(Exchange exchange) {
    KafkaManualCommit manual =
            exchange.getIn().getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
    manual.commitSync();
}
Michael
  • A detailed look at how to use of Kafka Manual Commit can be found on this question https://stackoverflow.com/questions/51843677/how-to-transactionally-poll-kafka-from-camel/52300250#52300250 – Mike Barlotta Sep 13 '18 at 14:18
  • I think manual commit is not possible when there is an aggregator in the route. The aggregator runs in a separate thread from the consumer that has to commit the messages. Does anyone have a solution for this case? For my use case it is important to commit after aggregation, because I may lose data if instances are killed and routes are not shut down gracefully. – R_FF92 Jun 19 '19 at 11:52
  • @R_FF92 below an example – Michael Jul 05 '19 at 12:09

You can manually control the offset commit even in a multi-threaded route (using an aggregator, for example) by using an offset repository (see the Camel documentation):

@Override
public void configure() throws Exception {
      // The route
      from(kafkaEndpoint())
            .routeId(ROUTE_ID)
            // Some processors...
            // Commit kafka offset
            .process(MyRoute::commitKafka)
            // Continue or not...
            .to(someEndpoint());
}

private String kafkaEndpoint() {
    return new StringBuilder("kafka:")
            .append(kafkaConfiguration.getTopicName())
            .append("?brokers=")
            .append(kafkaConfiguration.getBootstrapServers())
            .append("&groupId=")
            .append(kafkaConfiguration.getGroupId())
            .append("&clientId=")
            .append(kafkaConfiguration.getClientId())
            .append("&autoCommitEnable=")
            .append(false)
            .append("&allowManualCommit=")
            .append(true)
            .append("&autoOffsetReset=")
            .append("earliest")
            .append("&offsetRepository=")
            .append("#fileStore")
            .toString();
}

@Bean(name = "fileStore", initMethod = "start", destroyMethod = "stop")
public FileStateRepository fileStore() {
    // Note: Spring @Bean methods must not be private.
    FileStateRepository fileStateRepository =
            FileStateRepository.fileStateRepository(new File(kafkaConfiguration.getOffsetFilePath()));
    fileStateRepository.setMaxFileStoreSize(10485760); // 10 MB max

    return fileStateRepository;
}

private static void commitKafka(Exchange exchange) {
    KafkaManualCommit manual = exchange.getIn().getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
    manual.commitSync();
}
Michael