34

I've created a Docker container with a Kafka broker and ZooKeeper, started via a run script. On a fresh start it comes up and runs fine (Windows -> WSL -> two tmux windows, one session). If I shut down Kafka or ZooKeeper and start them again, they reconnect normally.

The problem occurs when I stop the Docker container (docker stop my_kafka_container) and then start it again with my script ./run_docker. Before starting, that script deletes the old container with docker rm my_kafka_container and then runs docker run.
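For reference, a minimal sketch of what such a run script might look like (the image name and options are assumptions, not the actual script):

#!/usr/bin/env bash
# Remove the leftover container from the previous run, then start a fresh one.
docker rm my_kafka_container
docker run -d --name my_kafka_container my_kafka_image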

ZooKeeper starts normally, and the meta.properties file still holds the old cluster ID from the previous startup, but for some reason the Kafka broker cannot find that ID under the cluster/id znode and creates a new one, which does not match the one stored in meta.properties. And I get:

  ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID m1Ze6AjGRwqarkcxJscgyQ doesn't match stored clusterId Some(1TGYcbFuRXa4Lqojs4B9Hw) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
        at kafka.server.KafkaServer.startup(KafkaServer.scala:220)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
        at kafka.Kafka$.main(Kafka.scala:84)
        at kafka.Kafka.main(Kafka.scala)
[2020-01-04 15:58:43,303] INFO shutting down (kafka.server.KafkaServer)

How can I stop the broker from changing its cluster ID?

OneCricketeer
  • 179,855
  • 19
  • 132
  • 245
Bohdan Myslyvchuk
  • 1,657
  • 3
  • 24
  • 39

17 Answers

43

If you are 100% sure you are connecting to the right ZooKeeper and the right Kafka log directories, but for some reason things don't match and you don't feel like losing all your data while trying to recover:

The Kafka data directory (check the log.dirs property in config/server.properties; it defaults to /tmp/kafka-logs) contains a file called meta.properties. That file stores the cluster ID, which should match the ID registered in ZooKeeper. Either edit the file to match ZooKeeper, edit ZooKeeper to match the file, or delete the file (it contains the cluster ID, which is currently mismatched, and the broker ID, which is normally also in the config file). After this minor surgery, Kafka will start with all your existing data, since you didn't delete any data files.

Like this: mv /tmp/kafka-logs/meta.properties /tmp/kafka-logs/meta.properties_old
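To see which side disagrees before editing anything, a hedged sketch using Kafka's bundled ZooKeeper shell (host, port, and paths are assumptions; adjust to your setup):

# Cluster ID as registered in ZooKeeper:
bin/zookeeper-shell.sh localhost:2181 get /cluster/id
# Cluster ID as stored locally by the broker:
grep cluster.id /tmp/kafka-logs/meta.properties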

Gwen Shapira
  • 4,978
  • 1
  • 24
  • 23
  • It's important to stress checking that your ZooKeeper state is intact, as this error indicates potentially catastrophic loss of data on the ZooKeeper cluster. Updating the cluster ID on the brokers in this scenario will cause the Kafka cluster to be essentially blank, with no reference to which partitions exist and where the replica logs reside. – amcc Nov 01 '21 at 14:17
32

I had the same issue when using Docker. This issue occurs since Kafka 2.4, because a check was added to verify that the cluster ID stored locally in meta.properties matches the cluster ID in Zookeeper.

This can be fixed by making the Zookeeper data persistent and not only the Zookeeper logs. E.g. with the following config:

volumes:
  - ~/kafka/data/zookeeper_data:/var/lib/zookeeper/data
  - ~/kafka/data/zookeeper_log:/var/lib/zookeeper/log

You should also remove the meta.properties file in the Kafka logs once so that Kafka retrieves the right cluster id from Zookeeper. After that the IDs should match and you don't have to do this anymore.
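For example (this path is the Confluent image default for log.dirs and an assumption here; use whatever log.dirs points to in your setup):

rm /var/lib/kafka/data/meta.properties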

You may also run into a snapshot.trust.empty error which was also added in 2.4. You can solve this by either adding the snapshot.trust.empty=true setting or by making the Zookeeper data persistent before doing the upgrade to 2.4.
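If you go the config route, one way to add the setting to the ZooKeeper config that Kafka's scripts read (the file path is the Kafka distribution default and may differ in your image):

echo "snapshot.trust.empty=true" >> config/zookeeper.properties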

Marcel
  • 983
  • 10
  • 8
  • 3
    Making the Zookeeper data persistent fixes the root cause – earthling paul Apr 17 '20 at 07:08
  • 1
    persisting just the data dir is sufficient to resolve the issue (checked on 2.5) – Mazerunner72 May 31 '20 at 14:19
  • I had the same issue migrating from Confluent Kafka 5.1.0 to 6.0.1. I needed to add volumes to Zookeeper. – NIrav Modi Apr 21 '21 at 08:52
  • @Marcel thanks. that worked. didn't get back to that project for a long time ;) – Bohdan Myslyvchuk Oct 22 '21 at 21:13
  • 2
    Where to add these lines? – Talha Akbar Oct 26 '21 at 11:09
  • 1
    @TalhaAkbar the example I used is in a docker-compose yaml file. If you use docker directly then you need to use the --volume argument for each row – Marcel Oct 27 '21 at 14:04
  • The problem will remain if you have volumes for Kafka and Zookeeper but then clean all data with docker-compose down --volumes. After the restart the ClusterID will be different, but the old value will remain in the Kafka volume folder (~/kafka/data/kafka1_volume:/bitnami/kafka for example). The solution here is to update (or remove) the ClusterID value in meta.properties, see https://stackoverflow.com/a/64101207/3549038. Or, in my case, I am fine with removing all data because I want to drop all volumes. – stopanko Apr 15 '22 at 10:28
14

There is a cluster.id property in meta.properties; just replace its value with the ID stated in the error log.
The meta.properties file lives in the Kafka log directory, which you can find via the log.dirs property in the Kafka config file server.properties. An example follows.

cat /opt/kafka/config/server.properties | grep log.dirs
Expected output:
log.dirs=/data/kafka-logs

Once you find the meta.properties file, change it. After the change it should look like this:

#
#Tue Apr 14 12:06:31 EET 2020
cluster.id=m1Ze6AjGRwqarkcxJscgyQ
version=0
broker.id=0
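
A one-liner for that edit (a hedged sketch; the path comes from the log.dirs output above, and the ID should be the one from your own error message):

sed -i 's/^cluster\.id=.*/cluster.id=m1Ze6AjGRwqarkcxJscgyQ/' /data/kafka-logs/meta.properties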
Erkan Şirin
  • 1,935
  • 18
  • 28
12

I have tried most of the answers and found out the hard way (losing all my data and records) what actually works.
For the WINDOWS operating system only.
As suggested by others, we need to change the default paths of the data directories for both:

Kafka in server.properties and
Zookeeper in zookeeper.properties

// Remember, this is important: if you are on Windows, use double slashes.
For Kafka:
log.dirs=C://kafka_2.13-2.5//data//kafka

The same goes for ZooKeeper:
dataDir=C://kafka_2.13-2.5//data//zookeeper

Obviously, you need to create the folders listed above before setting anything.

Then try to run ZooKeeper and Kafka; I haven't faced the issue since changing the paths.
Prior to this I had a single "/", which worked only once; then I changed to "\", which also worked, but only once.

EDIT: And don't forget to properly kill the processes with
kafka-server-stop.bat and
zookeeper-server-stop.bat

NAVJEET SINGH
  • 121
  • 1
  • 4
6

Kafka was started in the past with a different instance of ZooKeeper, so the old cluster ID is registered in it. In the Kafka config directory, open the config properties file (say server.properties) and find the log path directory via the log.dirs= parameter. Then go to the log path directory and find the meta.properties file in it. Open meta.properties and update cluster.id=, or delete this file (or all the log files in the log path directory) and restart Kafka.

user3808727
  • 71
  • 1
  • 1
4

To solve this issue:

  1. Delete all the log/data files generated by ZooKeeper and Kafka (see the sketch after this list).
  2. Run ZooKeeper
  3. Run Kafka
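
A hedged sketch of those steps, assuming the default data paths and the scripts shipped with the Kafka distribution (this deletes all topics and offsets, so only do it if you can afford to lose the data):

rm -rf /tmp/zookeeper /tmp/kafka-logs
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties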
OneCricketeer
  • 179,855
  • 19
  • 132
  • 245
TourEiffel
  • 4,034
  • 2
  • 16
  • 45
  • 8
    This is not advised if you actually want to preserve any existing data – OneCricketeer Jan 22 '20 at 03:18
  • 1
    @cricket_007 Of course but it is the only way I found. If someone got a better Answer I would be pleased to Know it ... – TourEiffel Jan 22 '20 at 03:57
  • I'll add: if you're using Docker - you need to remove "volume". – Ernestas Kardzys Jan 23 '20 at 08:24
  • 1
    @ErnestasKardzys Since there are four volumes involved, which `volume` are you referring to? And must the whole volume be removed or just certain files in the volume? Remember that the goal is to retain all actual data and let the docker services reuse that data. – Jesse Chisholm Apr 23 '20 at 03:46
4

Edit meta.properties, remove the line with cluster.id, and restart Kafka.

On linux servers it is located in /var/lib/kafka/meta.properties

Do this for all servers. A new cluster ID will be provided by ZooKeeper to the brokers.
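A sketch of that edit (the path is the one from this answer; run it on every broker):

sed -i '/^cluster\.id=/d' /var/lib/kafka/meta.properties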

user987339
  • 10,519
  • 8
  • 40
  • 45
  • That's what you need. In my case the issue appeared when I added static volumes to my Kafka instances (~/kafka/data/kafka1_volume:/bitnami/kafka). So now when you clean volumes with docker-compose down --volumes, it removes them, but all the data remains persisted in ~/kafka/data/kafka1_volume, along with the meta.properties file that contains the ClusterID. You can edit that specific file in your volume's folder as mentioned. Or, as in my case (an experimental project), just remove all folders with all data. – stopanko Apr 15 '22 at 09:59
2

In my case this was due to missing configuration of the ZooKeeper cluster; more precisely, each ZooKeeper node was working independently, so data such as the cluster ID was not shared between the Kafka nodes. When a Kafka node started after other nodes were already running, it did not see via ZooKeeper that a cluster ID had already been established, so it created a new cluster ID and tried communicating with other nodes that had similarly given themselves different IDs.

To resolve this:

  1. We need to clear the ZooKeeper dir defined by dataDir in the kafka/config/zookeeper.properties file
  2. In this folder, add a file called myid containing a unique ID for each ZooKeeper node (see the sketch after this list)
  3. Add the following configuration to each kafka/config/zookeeper.properties file:
tickTime=2000
initLimit=5
syncLimit=2
server.1=<zookeeper node #1 address>:2888:3888
server.2=<zookeeper node #2 address>:2888:3888
server.3=<zookeeper node #3 address>:2888:3888
  4. Remove the cluster.id line from the meta.properties file, which resides in the path set by the log.dirs property in the kafka/config/server.properties file, or delete this file altogether
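For step 2, a sketch of creating the myid file (the dataDir path is an assumption and must match your zookeeper.properties; use 2 and 3 on the other nodes to match the server.N entries above):

echo 1 > /var/lib/zookeeper/myid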

You can refer to the zookeeper documentation for more info: https://zookeeper.apache.org/doc/r3.3.3/zookeeperStarted.html#sc_RunningReplicatedZooKeeper

Assaf
  • 316
  • 2
  • 6
1

Try the following...

  1. Enable the following line in ./config/server.properties

    listeners=PLAINTEXT://:9092

  2. Modify default ZooKeeper dataDir

  3. Modify default Kafka log dir (a combined sketch follows)
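
Putting the three changes together, a hedged sketch of the relevant lines (the directory values are examples, not required paths):

# in config/server.properties
listeners=PLAINTEXT://:9092
log.dirs=/var/kafka-data

# in config/zookeeper.properties
dataDir=/var/zookeeper-data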

Rahamath
  • 491
  • 6
  • 4
1

On Windows, renaming or deleting this meta.properties file helped Kafka launch, and I observed that the file was recreated once Kafka started.

{kafka-installation-folder}\softwares\kafkalogs\meta.properties
Jaison
  • 715
  • 1
  • 10
  • 33
1

This is due to a new feature introduced in the Kafka 2.4.0 release: [KAFKA-7335] - Store clusterId locally to ensure broker joins the right cluster. When the Docker restart happens, Kafka tries to match the locally stored clusterId against ZooKeeper's clusterId (which changed because of the Docker restart); this mismatch causes the above error. Please refer to this link for more information.

borz
  • 313
  • 4
  • 10
0

I encountered the same issue while running the Kafka server on my Windows machine.

You can try the following to resolve it:

  1. Open the server.properties file, which is located in your Kafka config folder kafka_2.11-2.4.0\config (depending on your version of Kafka, the folder name could be kafka_<version>)
  2. Search for the log.dirs entry
  3. If your log.dirs path contains a Windows directory path like E:\Shyam\Software\kafka_2.11-2.4.0\kafka-logs, with single backslashes (\), change them to double backslashes (\\), as shown below
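
For example, the path above would become:

log.dirs=E:\\Shyam\\Software\\kafka_2.11-2.4.0\\kafka-logs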

Hope it helps. Cheers

0

Try this:

  • Open the server.properties file, which is located in your Kafka config folder kafka_2.11-2.4.0\config
  • Search for the log.dirs entry
  • If the directory is specified as an absolute path like C:......., change it to a path relative to the current directory, for example log.dirs=../../logs

This worked for me :)

0

This is how I solved it: I searched for the file (meta.properties), renamed it, and Kafka started successfully; a new file was created.

I installed Kafka via brew on a Mac.

Hope this helps you.

Carlos Luis Rivera
  • 3,108
  • 18
  • 45
Li danyang
  • 76
  • 6
0

If, during testing, you are trying to launch an EmbeddedKafka broker and your test case doesn't clean up the temp directory, then you will have to manually delete the Kafka log directory to get past this error.

Soni
  • 142
  • 8
0

Error -> The Cluster ID Ltm5IhhbSMypbxp3XZ_onA doesn't match stored clusterId Some(sAPfAIxcRZ2xBew78KDDTg) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.

Linux ->

Go to /tmp/kafka-logs and check the meta.properties file.

Use vi meta.properties and change the cluster ID to the required ID.

CodingBee
  • 1,011
  • 11
  • 8
0

For me, as mentioned above, deleting the meta.properties file helped. Since I had Kafka and ZooKeeper running in a terminal, installed through Homebrew, the directory where the file lived was /opt/homebrew/var/lib/kafka-logs. Once there, I ran an rm command to delete the file.
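That is (path as given in this answer):

rm /opt/homebrew/var/lib/kafka-logs/meta.properties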

anshul6297
  • 25
  • 4