
I have two Tomcat instances running the same webapp. I run the Kafka and ZooKeeper services with Docker, and then I start the Tomcats. In the Kafka console I see that about two consumers per second are being created, with these messages:

kafka_1      | [2019-12-20 16:30:20,725] INFO [GroupCoordinator 1001]: Stabilized group 1001 generation 12902 (__consumer_offsets-24) (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:20,730] INFO [GroupCoordinator 1001]: Assignment received from leader for group 1001 for generation 12902 (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:21,059] INFO [GroupCoordinator 1001]: Preparing to rebalance group 1001 in state PreparingRebalance with old generation 12902 (__consumer_offsets-24) (reason: Adding new member consumer-1-5c607368-a22c-44dd-b460-6f33101e3e7a with group instanceid None) (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:21,060] INFO [GroupCoordinator 1001]: Stabilized group 1001 generation 12903 (__consumer_offsets-24) (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:21,063] INFO [GroupCoordinator 1001]: Assignment received from leader for group 1001 for generation 12903 (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:21,749] INFO [GroupCoordinator 1001]: Preparing to rebalance group 1001 in state PreparingRebalance with old generation 12903 (__consumer_offsets-24) (reason: Adding new member consumer-1-01c204d3-0e36-487e-ac13-374aaf4d84fd with group instanceid None) (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:21,751] INFO [GroupCoordinator 1001]: Stabilized group 1001 generation 12904 (__consumer_offsets-24) (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:21,754] INFO [GroupCoordinator 1001]: Assignment received from leader for group 1001 for generation 12904 (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:22,081] INFO [GroupCoordinator 1001]: Preparing to rebalance group 1001 in state PreparingRebalance with old generation 12904 (__consumer_offsets-24) (reason: Adding new member consumer-1-4993cf30-5924-47db-9c63-2b1008f98924 with group instanceid None) (kafka.coordinator.group.GroupCoordinator)

I use this docker-compose.yml:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_CREATE_TOPICS: "clinicaleventmanager:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

This problem does not occur if I run only one Tomcat. Why? How can I avoid it? Thanks.


2 Answers


That is happening because a rebalance is triggered whenever a new consumer joins an existing group on a topic. The topic is just a layer in front of the partitions: when a consumer subscribes, it is really being assigned to one or more partitions. Kafka was designed this way because ordering matters, and order can only be maintained when there are no more consumers in a group than partitions (two consumers in the same group can never consume from the same partition). That's why you see those log lines.
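For illustration, here is a minimal plain-Java sketch of what both Tomcats are effectively doing (the topic name, single partition, and group id 1001 come from the question; the bootstrap address and deserializers are assumptions, and this is not the Atmosphere code itself):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class RebalanceDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Both Tomcats join the same group, so every new consumer instance
        // forces a rebalance; the log above shows group "1001" doing this.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "1001");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // The topic was created with a single partition
        // ("clinicaleventmanager:1:1"), so at most one member of the
        // group is ever actually assigned to it.
        consumer.subscribe(Collections.singletonList("clinicaleventmanager"));
        while (true) {
            consumer.poll(Duration.ofMillis(500)).forEach(rec ->
                    System.out.println(rec.value()));
        }
    }
}

Each time a second copy of this process joins with the same group.id, the broker logs a "Preparing to rebalance group" line like the ones above; if consumers are created repeatedly, you get one rebalance per creation.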

OlegI
  • So is this correct even if no user connects to the webapp? Can't there be a resource-allocation problem over time? Thanks – Marco Di Falco Dec 23 '19 at 08:39
  • You connect your consumer and it should also keep the connection; you shouldn't keep polling for messages with new consumers. So if you have any kind of application, subscribe your consumer on application startup and don't close the connection – OlegI Dec 23 '19 at 09:03
  • I use Kafka-Atmosphere and this is the native consumer implementation Method startConsumer() https://github.com/Atmosphere/atmosphere-extensions/blob/master/kafka/modules/src/main/java/org/atmosphere/kafka/KafkaBroadcaster.java – Marco Di Falco Dec 23 '19 at 09:10
  • After the broadcastReceivedMessage call I added a consumer.commitAsync() statement, because the messages were looping between the two Tomcats (see the sketch below) – Marco Di Falco Dec 23 '19 at 09:14
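For context, a sketch of the pattern that last comment describes. The loop shape is an assumption, loosely modeled on KafkaBroadcaster.startConsumer(); the Atmosphere broadcastReceivedMessage call is replaced by a stand-in:

import java.time.Duration;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

public class CommitAfterBroadcast {
    static void consume(Consumer<String, String> consumer) {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> rec : records) {
                // Stand-in for broadcaster.broadcastReceivedMessage(rec.value())
                System.out.println(rec.value());
            }
            // Commit the consumed offsets asynchronously so they are not
            // replayed to this group after a restart or rebalance.
            consumer.commitAsync();
        }
    }
}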

RESOLVED! The problem was that in kafka.properties the group.id property must be different for each Tomcat.

I removed group.id from the properties file, and magic!
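If each Tomcat is meant to receive every message (broadcast semantics), the equivalent explicit configuration is to give each instance its own group.id, for example by appending a unique suffix. A minimal sketch, with an assumed bootstrap address and property plumbing:

import java.util.Properties;
import java.util.UUID;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class PerInstanceGroupId {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // A unique suffix per JVM keeps the consumer groups distinct across
        // Tomcats, so each instance gets its own copy of every message
        // instead of competing for the topic's single partition.
        props.put(ConsumerConfig.GROUP_ID_CONFIG,
                "clinicaleventmanager-" + UUID.randomUUID());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }
}

Note that a random group id changes on every restart, so committed offsets are not reused across restarts; a stable per-host suffix would avoid that if it matters for your application.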