
I have an application that uses Kafka to synchronize data between instances, so it both produces to and consumes from Kafka. The application also consumes one Kafka topic, transforms the data, and streams it into another topic for clients to consume.

My application has two Kafka clusters for failover. Going through the Spring Kafka documentation I found https://docs.spring.io/spring-kafka/docs/current/reference/html/#connecting, which talks about ABSwitchCluster.

How can I use ABSwitchCluster to fail over automagically if the Kafka cluster goes down, for both KafkaTemplate.send() and @KafkaListener annotated methods?


Update with More Info

I've added some error handlers for KafkaTemplate.send() and for the Kafka consumer events NonResponsiveConsumerEvent and ListenerContainerIdleEvent.

Ultimately they call a shared method to switch clusters, and a BeanPostProcessor is used to actually add the ABSwitchCluster to the KafkaResourceFactory beans.
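The post-processor wiring isn't shown in the question, but the idea can be sketched with hypothetical plain-Java stand-ins for the Spring types (ResourceFactory here mimics KafkaResourceFactory's setBootstrapServersSupplier, and SwitchClusterPostProcessor mimics what a BeanPostProcessor would do for each matching bean):

```java
import java.util.function.Supplier;

// Hypothetical stand-in for KafkaResourceFactory: every such factory
// exposes setBootstrapServersSupplier(...).
interface ResourceFactory {
    void setBootstrapServersSupplier(Supplier<String> supplier);
}

// Sketch of the BeanPostProcessor idea: as each factory bean is created,
// hand it the shared switch-cluster supplier.
class SwitchClusterPostProcessor {
    private final Supplier<String> switchCluster;

    SwitchClusterPostProcessor(Supplier<String> switchCluster) {
        this.switchCluster = switchCluster;
    }

    Object postProcessAfterInitialization(Object bean) {
        if (bean instanceof ResourceFactory) {
            ((ResourceFactory) bean).setBootstrapServersSupplier(switchCluster);
        }
        return bean;
    }
}
```

In the real application the post-processor would check for KafkaResourceFactory instead of the stand-in interface, and the supplier would be the shared ABSwitchCluster bean.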

The switch-over code looks like this:

   @Autowired
   KafkaSwitchCluster kafkaSwitchCluster;

   @Autowired
   WebApplicationContext context;

   @Autowired
   KafkaListenerEndpointRegistry registry;

   /**
    *  Unable to use {@link Autowired} due to circular dependency
    *  with {@link KafkaPostProcessor}
    *  @return
    */
   public DefaultKafkaProducerFactory getDefaultKafkaProducerFactory()
   { return context.getBean(DefaultKafkaProducerFactory.class); }

   /** Back-End Method to Actually Switch between the clusters */
   private void switchCluster()
   {
      if (kafkaSwitchCluster.isPrimary()) { kafkaSwitchCluster.secondary(); }
      else { kafkaSwitchCluster.primary(); }

      getDefaultKafkaProducerFactory().reset();

      registry.stop();
      registry.destroy();
      registry.start();

      for(MessageListenerContainer listener : registry.getListenerContainers() )
      {
         listener.stop();
         listener.start();
      }
   }

Given the updates above, the test logs show that the producer is correctly switching clusters, but my consumers are not.

So how can I get the @KafkaListener consumers to switch?

Raystorm

1 Answer


The default producer and consumer factories, as well as the KafkaAdmin, are subclasses of KafkaResourceFactory.

You pass the ABSwitchCluster in by calling setBootstrapServersSupplier().
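For reference, ABSwitchCluster is essentially a Supplier&lt;String&gt; that toggles between two bootstrap-server lists. A plain-Java stand-in of the idea (illustrative only, not the Spring class itself):

```java
import java.util.function.Supplier;

// Illustrative stand-in for spring-kafka's ABSwitchCluster: a Supplier<String>
// that returns one of two bootstrap-server lists depending on a flag.
class SwitchableBootstrap implements Supplier<String> {
    private final String primary;
    private final String secondary;
    private volatile boolean usePrimary = true;

    SwitchableBootstrap(String primary, String secondary) {
        this.primary = primary;
        this.secondary = secondary;
    }

    public boolean isPrimary() { return usePrimary; }
    public void primary()      { usePrimary = true; }
    public void secondary()    { usePrimary = false; }

    @Override
    public String get() { return usePrimary ? primary : secondary; }
}
```

In the real application you would pass the actual ABSwitchCluster to setBootstrapServersSupplier() on the producer factory, the consumer factory, and the KafkaAdmin.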

The ABSwitchCluster does not fail over automatically.

You need your own code to perform the failover and then call reset() on the producer factory and stop/start all the listener containers (KafkaListenerEndpointRegistry.stop()/start() for all @KafkaListeners).
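One way to see why the switch must happen while the containers are stopped: a container resolves the bootstrap servers from the supplier when it starts. A self-contained plain-Java sketch (FakeContainer is a hypothetical stand-in, not a Spring class):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

// Plain-Java sketch: a "container" resolves its bootstrap servers only at
// start(), so the cluster switch must happen before the (re)start.
class FailoverSketch {
    static class FakeContainer {
        private final Supplier<String> bootstrap;
        String connectedTo;

        FakeContainer(Supplier<String> bootstrap) { this.bootstrap = bootstrap; }

        void start() { connectedTo = bootstrap.get(); } // servers resolved here
        void stop()  { connectedTo = null; }
    }

    public static void main(String[] args) {
        AtomicBoolean primary = new AtomicBoolean(true);
        Supplier<String> cluster = () -> primary.get() ? "primary:9092" : "secondary:9092";

        FakeContainer container = new FakeContainer(cluster);
        container.start();
        System.out.println(container.connectedTo); // primary:9092

        // Fail over: 1. stop, 2. switch, 3. start
        container.stop();
        primary.set(false);
        container.start();
        System.out.println(container.connectedTo); // secondary:9092
    }
}
```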

Gary Russell
  • Thank you for answering, it helped verify some things I was already trying and gave me a direction. Can you please help with my update? – Raystorm Jul 16 '20 at 20:51
  • You don't need to stop/start the registry AND the containers - the registry does that. Stopping and starting the containers should work, as long as you have not overridden the bootstrap server property at the `@KafkaListener` level (which overrides the factory setting), which is rare. I'll take a look... – Gary Russell Jul 16 '20 at 21:01
  • It works fine for me - I just added a [test case](https://github.com/spring-projects/spring-kafka/blob/d6a17b5e4cbc95e736744a9a486e0292604dbaed/spring-kafka/src/test/java/org/springframework/kafka/listener/ABSwitchClusterTests.java#L52-L127) to verify the correct operation. If you still can't get it to work, try to create a stripped down small project that exhibits the behavior, and I'll take a look. – Gary Russell Jul 16 '20 at 21:41
  • Looks like what I did wrong was, switching the cluster then `stop()/start()` instead of 1. `stop()` 2. switch 3. `start()` **Ordering is important.** – Raystorm Jul 17 '20 at 13:44