I have two Spring Boot projects. The first is a Producer that sends messages to a topic with one partition.
The second is a Consumer application that reads from that topic and partition. For the consumer, I use a KafkaMessageDrivenChannelAdapter and a KafkaMessageListenerContainer, and I specify the consumer group id in the ConsumerFactory.
Note, I am using spring-integration-kafka 2.0.0.RELEASE with spring-kafka 1.0.2.RELEASE, which uses Kafka 0.9. I am running three Docker instances of Kafka 0.10.0 with one instance of ZooKeeper, all in Docker containers.
When I run one instance of my consumer, it works beautifully, reading and processing each message.
However, when I run a second instance of the application (I just change the port), every message produced by the producer application is received by both instances, so each message is processed twice.
Based on the documentation, I expected this scenario to work: the second instance in this example is there for resiliency, so that if one app instance goes down the other takes over, but both should not receive the same message for the same topic/partition within a consumer group. Note, I am using Service Activators (a facade) to process each message.
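To illustrate the behaviour I expect, here is a minimal sketch using plain kafka-clients rather than the Spring setup below (the broker address, group id, and topic name are placeholders, and running it requires the kafka-clients jar): two consumers that subscribe() with the same group.id should have the topic's partitions split between them, whereas assign() bypasses group management entirely, so every consumer sees every message regardless of group.id.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupVsAssignSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        // Variant 1: subscribe() joins the consumer group. The group coordinator
        // balances partitions across all members, so each topic/partition is
        // delivered to only one member of the group at a time.
        KafkaConsumer<String, byte[]> grouped = new KafkaConsumer<>(props);
        grouped.subscribe(Collections.singletonList("my-topic")); // placeholder topic

        // Variant 2: assign() takes an explicit partition and bypasses group
        // management entirely -- every consumer configured this way receives
        // every message on that partition, even with the same group.id.
        KafkaConsumer<String, byte[]> assigned = new KafkaConsumer<>(props);
        assigned.assign(Collections.singletonList(new TopicPartition("my-topic", 0)));

        grouped.close();
        assigned.close();
    }
}
```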
Is there something that I am missing?
Please help.
Here is my consumer app configuration, based on the example from spring-integration-kafka:
{
@ServiceActivator(inputChannel = "received", outputChannel = "nullChannel", adviceChain = {"requestHandlerRetryAdvice"})
@Bean
public MessageConsumerServiceFacade messageConsumerServiceFacade() {
return new DefaultMessageConsumerServiceFacade();
}
@ServiceActivator(inputChannel = "errorChannel", outputChannel = "nullChannel")
@Bean
public MessageConsumerServiceFacade messageConsumerErrorServiceFacade() {
return new DefaultMessageConsumerErrorServiceFacade();
}
@Bean
public IntegrationFlow consumer() throws Exception {
LOGGER.info("starting consumer..");
return IntegrationFlows
.from(adapter(container()))
.get();
}
@Bean
public KafkaMessageListenerContainer<String, byte[]> container() throws Exception {
// This constructor variant DOES NOT work with a consumer group: it assigns the partition
// explicitly, so all consumers receive every message - bad for a cluster of consumer apps,
// because each message is processed more than once.
//ContainerProperties containerProperties = new ContainerProperties(new TopicPartitionInitialOffset(this.topic, 0));
// Use THIS constructor variant to join the consumer group, with automatic rebalancing of
// partitions to distribute the load among consumers - ideal for a cluster of consumer apps.
ContainerProperties containerProperties = new ContainerProperties(this.topic);
containerProperties.setAckOnError(false);
containerProperties.setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL_IMMEDIATE);
return new KafkaMessageListenerContainer<>(consumerFactory(), containerProperties);
}
@Bean
public ConsumerFactory<String, byte[]> consumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, this.brokerAddress);
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 2); // ignored here, since auto-commit is disabled above
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15000);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // earliest, latest, none
return new DefaultKafkaConsumerFactory<>(props);
}
@Bean
public KafkaMessageDrivenChannelAdapter<String, byte[]> adapter(KafkaMessageListenerContainer<String, byte[]> container) {
KafkaMessageDrivenChannelAdapter<String, byte[]> kafkaMessageDrivenChannelAdapter =
new KafkaMessageDrivenChannelAdapter<>(container);
kafkaMessageDrivenChannelAdapter.setOutputChannel(received());
kafkaMessageDrivenChannelAdapter.setErrorChannel(errorChannel());
return kafkaMessageDrivenChannelAdapter;
}
@Bean
public MessageChannel received() {
return new PublishSubscribeChannel();
}
@Bean
public MessageChannel errorChannel() {
return new PublishSubscribeChannel();
}
}