
I have written the following code using the Spring Cloud Stream functional approach to consume events from RabbitMQ and publish them to Kafka. The primary flow works, but with a caveat: if the Kafka broker goes down for any reason while the application is running, I see log messages saying the Kafka broker is down, but at the same time I want to stop consuming from RabbitMQ, or route those messages to an exchange or a DLQ topic until the broker comes back up.

I have seen many suggestions to set the producer property `sync: true`, but in my case that is not helping. A lot of people also suggest `@ServiceActivator(inputChannel = "error-topic")` for handling failures on the target channel, but that method is never invoked either. In short, I don't want to lose any messages received from RabbitMQ while Kafka is down for any reason.

application.yml

management:
  health:
    binders:
      enabled: true
    kafka:
      enabled: true
server:
  port: 8081

spring:
  rabbitmq:
    publisher-confirms: true
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      properties:
        max.block.ms: 100
    admin:
      fail-fast: true
  cloud:
    function:
      definition: handle
    stream:
      bindingRetryInterval: 30
      rabbit:
        bindings:
          handle-in-0:
            consumer:
              bindingRoutingKey: MyRoutingKey
              exchangeType: topic
              requeueRejected: true
              acknowledgeMode: AUTO
      #              ackMode: MANUAL
      #              acknowledge-mode: MANUAL
      #              republishToDlq : false
      kafka:
        binder:
          considerDownWhenAnyPartitionHasNoLeader: true
          producer:
            properties:
              max.block.ms: 100
          brokers:
            - localhost
      bindings:
        handle-in-0:
          destination: test_queue
          binder: rabbit
          group: queue
        handle-out-0:
          destination: mytopic
          producer:
            sync: true
            errorChannelEnabled: true
          binder: kafka
      binders:
        error:
          destination: myerror
        rabbit:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: rahul_host
        kafka:
          type: kafka


json:
  cuttoff:
    size:
      limit: 1000

CloudStreamConfig.java

@Configuration
public class CloudStreamConfig {
    private static final Logger log = LoggerFactory.getLogger(CloudStreamConfig.class);

    @Autowired
    ChunkService chunkService;

    @Bean
    public Function<Message<RmaValues>,Collection<Message<RmaValues>>> handle() {
        return rmaValue -> {
            log.info("processor runs : message received with request id : {}", rmaValue.getPayload().getRequestId());
            ArrayList<Message<RmaValues>> msgList = new ArrayList<Message<RmaValues>>();
            try {
                List<RmaValues> dividedJson = chunkService.getDividedJson(rmaValue.getPayload());
                for(RmaValues rmaValues : dividedJson) {
                    msgList.add(MessageBuilder.withPayload(rmaValues).build());
                }
            } catch (Exception e) {
                log.error("Failed to split message with request id {}", rmaValue.getPayload().getRequestId(), e);
            }
            Channel channel = rmaValue.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class);
            Long deliveryTag = rmaValue.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class);

//            try {
//                channel.basicAck(deliveryTag, false);
//            } catch (IOException e) {
//                e.printStackTrace();
//            }
            return msgList;
        };
    }
    @ServiceActivator(inputChannel = "error-topic")
    public void errorHandler(ErrorMessage em) {
        log.info("---------------------------------------got error message over errorChannel: {}", em);
        if (null != em.getPayload() && em.getPayload() instanceof KafkaSendFailureException) {
            KafkaSendFailureException kafkaSendFailureException = (KafkaSendFailureException) em.getPayload();
            if (kafkaSendFailureException.getRecord() != null && kafkaSendFailureException.getRecord().value() != null
                    && kafkaSendFailureException.getRecord().value() instanceof byte[]) {
                log.warn("error channel message. Payload {}", new String((byte[])(kafkaSendFailureException.getRecord().value())));
            }
        }
    }
}

KafkaProducerConfiguration.java

@Configuration
public class KafkaProducerConfiguration {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}

RmModelOutputIngestionApplication.java

@SpringBootApplication(scanBasePackages = "com.abb.rm")
public class RmModelOutputIngestionApplication {
    private static final Logger LOGGER = LogManager.getLogger(RmModelOutputIngestionApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(RmModelOutputIngestionApplication.class, args);
    }

    @Bean("objectMapper")
    public ObjectMapper objectMapper() {
        ObjectMapper mapper = new ObjectMapper();
        LOGGER.info("Returning object mapper...");
        return mapper;
    }
}
rahul sharma

1 Answer


First, it seems like you are creating too much unnecessary code. Why do you have an ObjectMapper? Why do you have a KafkaTemplate? Why do you have a ProducerFactory? These are all already provided for you. You really only need one function and possibly an error handler, depending on the error-handling strategy you select, which brings me to the topic of error handling. There are three primary ways of handling errors. Here is the link to the doc explaining them all and providing samples. Please read through that and modify your app accordingly, and if something doesn't work or is unclear, feel free to follow up.
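To make the error-channel option concrete, here is a minimal sketch (not a drop-in solution). It assumes that with `errorChannelEnabled: true` on the `handle-out-0` producer binding, the binder publishes send failures (wrapped in `KafkaSendFailureException`) to a binding-specific error channel named after the destination (here `mytopic.errors`), and also bridges them to the global `errorChannel`. If that naming assumption holds, a channel name like `"error-topic"` from the question is not one the binder creates, which would explain why that handler never fires:

```java
// Hedged sketch, assuming destination "mytopic" with errorChannelEnabled: true.
// "errorChannel" can be used as a global fallback subscription if the
// binding-specific channel name differs in your binder version.
@ServiceActivator(inputChannel = "mytopic.errors")
public void kafkaSendFailureHandler(ErrorMessage em) {
    if (em.getPayload() instanceof KafkaSendFailureException) {
        KafkaSendFailureException ex = (KafkaSendFailureException) em.getPayload();
        // ex.getFailedMessage() carries the original outbound message; from here
        // it could be republished to a RabbitMQ dead-letter exchange or requeued
        // so nothing is lost while the Kafka broker is down.
    }
}
```

Which of the three strategies fits best depends on whether you want to retry in place, divert to a DLQ, or stop consuming entirely until Kafka recovers.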

Oleg Zhurakousky