
I currently have a Spring Cloud Stream application with a listener that consumes from a certain topic and executes the following steps in sequence:

  1. Consume messages from a topic
  2. Store consumed message in the DB
  3. Call an external service for some information
  4. Process the data
  5. Record the results in DB
  6. Send the message to another topic
  7. Acknowledge the message (I have the acknowledge mode set to manual)

We have decided to move to Spring Cloud Function, and I have already been able to do almost all of the steps above using the Function interface, with the source topic as input and the sink topic as output.

@Bean
public Function<Message<NotificationMessage>, Message<ValidatedEvent>> validatedProducts() {
    return message -> {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        NotificationMessage notificationMessage = message.getPayload();

        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(notificationMessage);
        ValidatedEvent event = getProcessingResult(notificationMessage, status);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        return MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
    };
}
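For context, manual acknowledgment with the Kafka binder is typically enabled through binding properties along these lines (the binding name and the `ackMode` property are assumptions for a recent binder version; older binder versions used `autoCommitOffset: false` instead):

```yaml
spring:
  cloud:
    stream:
      kafka:
        bindings:
          validatedProducts-in-0:
            consumer:
              ackMode: MANUAL
```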

My problem is with exception handling in step 7 (acknowledge the message). We only want to acknowledge the message once we are sure it was sent successfully to the sink topic; otherwise we do not acknowledge it.

My question is: how can such a thing be implemented within Spring Cloud Function, especially since the send operation is fully handled by the Spring framework (as the result of evaluating the Function implementation)?

Earlier, we could do this through a try/catch:

@StreamListener(value = NotificationMessage.INPUT)
public void onMessage(Message<NotificationMessage> message) {
    NotificationMessage notificationMessage = message.getPayload();
    try {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);

        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(notificationMessage);
        ValidatedEvent event = getProcessingResult(notificationMessage, status);

        Message<ValidatedEvent> outbound = MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
        kafkaTemplate.send(outbound);

        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
    } catch (Exception exception) {
        notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
    }
}

Is there a listener that triggers after the Function has returned successfully, something like KafkaSendCallback, but without specifying a template?

WiredCoder
  • Please note that the excerpts above are simplified and intended to describe the concept, as the original code spans multiple classes – WiredCoder Nov 08 '21 at 15:53

3 Answers


Building upon what Oleg mentioned above: if you want to strictly restore the behavior of your StreamListener code, here is something you can try. Instead of using a function, switch to a Consumer and then use KafkaTemplate to send on the outbound, as you did previously.

@Bean
public Consumer<Message<NotificationMessage>> validatedProducts() {
    return message -> {
        NotificationMessage notificationMessage = message.getPayload();
        try {
            Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);

            notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
            String status = restEndpoint.getStatusFor(notificationMessage);
            ValidatedEvent event = getProcessingResult(notificationMessage, status);

            Message<ValidatedEvent> outbound = MessageBuilder
                    .withPayload(event)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                    .build();
            // Here, make sure that the data was sent successfully, e.g. via a
            // callback or by blocking on the returned future.
            kafkaTemplate.send(outbound);
            // Only ack if the data was sent successfully.
            Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        } catch (Exception exception) {
            notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
        }
    };
}

Another thing worth looking into is Kafka transactions, in which case, if the flow doesn't succeed end to end, no acknowledgment happens. The Spring Cloud Stream binder supports this, building on the foundations in Spring for Apache Kafka. More details here. Here is the Spring Cloud Stream doc on this.
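As a rough sketch, enabling transactions in the Kafka binder comes down to configuring a transaction id prefix on the binder (the prefix value here is a placeholder; see the binder docs for the full set of producer-side settings):

```yaml
spring:
  cloud:
    stream:
      kafka:
        binder:
          transaction:
            transaction-id-prefix: tx-
```

With this in place, the binder uses transactional producers, so the outbound send and the offset commit are tied together and roll back as a unit on failure.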

sobychacko
  • Thanks for your prompt reply! While this is for sure an option, we wanted to use the Function interface, as it is better suited for the use case, and I was hoping there is a way to do this without handling it as a Consumer. Transactions sound right; I would need to dig through the documentation you kindly shared first. – WiredCoder Nov 08 '21 at 16:37

Spring Cloud Stream has no knowledge of functions. It is just the same message handler as it was before, so the same approach with a callback that you used before would work with functions. Perhaps you can share some code that could clarify what you mean? I also don't understand what you mean by "..send method is fully dependant on the Spring Framework..".

Oleg Zhurakousky
  • Thanks for your prompt response! I have included a sample of my Spring Cloud Function code. By "method is fully dependant on the Spring Framework" I meant that the framework fully handles the message routing and sending; before, I would do this myself through KafkaTemplate, and hence could get a callback telling me whether the send operation succeeded or failed – WiredCoder Nov 08 '21 at 15:35

Alright, so what I opted for was actually not to use KafkaTemplate (or StreamBridge, for that matter). While that is a feasible solution, it would mean splitting my Function into a Consumer and some sort of improvised supplier (the KafkaTemplate in this case).

As I wanted to adhere to the design goals of the functional interface, I isolated the database-update behaviour in a ProducerListener implementation:

@Configuration
public class ProducerListenerConfiguration {
    private final MongoTemplate mongoTemplate;

    public ProducerListenerConfiguration(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public ProducerListener myProducerListener() {
        return new ProducerListener() {
            @SneakyThrows
            @Override
            public void onSuccess(ProducerRecord producerRecord, RecordMetadata recordMetadata) {
                final ValidatedEvent event = new ObjectMapper().readerFor(ValidatedEvent.class).readValue((byte[]) producerRecord.value());
                final var updateResult = updateDocumentProcessedState(event.getKey(), event.getPayload().getVersion(), true);
            }

            @SneakyThrows
            @Override
            public void onError(ProducerRecord producerRecord, @Nullable RecordMetadata recordMetadata, Exception exception) {
                ProducerListener.super.onError(producerRecord, recordMetadata, exception);
            }
        };
    }

    public UpdateResult updateDocumentProcessedState(String id, long version, boolean isProcessed) {
        Query query = new Query();
        query.addCriteria(Criteria.where("_id").is(id));
        Update update = new Update();
        update.set("processed", isProcessed);
        update.set("version", version);
        return mongoTemplate.updateFirst(query, update, ProductChangedEntity.class);
    }
}

Then, with each successful send, the DB is updated with the processing result and the new version number.

WiredCoder