
Below are the steps I took that led to this issue:

  1. Launch ZooKeeper (see the command below)
  2. Launch Kafka: .\bin\windows\kafka-server-start.bat .\config\server.properties
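
For reference, step 1 was done with the script bundled in the same distribution (assuming the stock config file name shipped with Kafka):

.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties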

The error happens at the second step:

ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID Reu8ClK3TTywPiNLIQIm1w doesn't match stored clusterId Some(BaPSk1bCSsKFxQQ4717R6Q) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
    at kafka.server.KafkaServer.startup(KafkaServer.scala:220)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
    at kafka.Kafka$.main(Kafka.scala:84)
    at kafka.Kafka.main(Kafka.scala)

When I run .\bin\windows\kafka-server-start.bat .\config\server.properties, the ZooKeeper console prints:

INFO [SyncThread:0:FileTxnLog@216] - Creating new log file: log.1

How can I fix this issue and get Kafka running?

Edit: You can find the same question posted on the proper site (Server Fault) here

Edit: Here is the Answer

TourEiffel
  • https://stackoverflow.com/questions/59592518/kafka-broker-doesnt-find-cluster-id-and-creates-new-one-after-docker-restart/60093334#60093334 – Rahamath Feb 06 '20 at 10:55
  • Voting to reopen in order to close for the right reason, since: [1] The question is clearly a duplicate of [Kafka Broker doesn't find cluster id and creates new one after docker restart](https://stackoverflow.com/q/59592518/2985643), as noted by the OP. [2] The current reason for closing is invalid since the question is not about _"professional server or networking-related infrastructure administration"_ at all; it is about a Kafka exception on startup. (And if this question really was off topic then thousands of other questions tagged `Kafka` on SO would be as well.) – skomisa Apr 15 '20 at 20:19
  • @skomisa this issue is slightly different from the other one since it doesn't use Docker. And please also note that my issue was posted before the one you are talking about ... – TourEiffel Apr 15 '20 at 20:22
  • @Dorian: I'm really confused now!... You have updated this question and **linked to another answer written by yourself as the solution**! If you are now claiming that it is not a solution then delete the text "Edit Here is the Answer" from your question above. – skomisa Apr 15 '20 at 20:31
  • @skomisa yes, because I wasn't allowed to ask to reopen until today... and I wanted to share with the community how I solved my issue ... – TourEiffel Apr 15 '20 at 20:34
  • @Dorian Well your question got reopened! Do you care to post an answer to it now? – skomisa Apr 17 '20 at 04:24

12 Answers


I managed to solve this issue with the following steps:

  1. Delete all the log/data files created (or generated) by ZooKeeper and Kafka (see the sketch just below).
  2. Run ZooKeeper
  3. Run Kafka
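
A minimal sketch of those steps on Windows, assuming the default data directories C:\tmp\kafka-logs and C:\tmp\zookeeper (check log.dirs in server.properties and dataDir in zookeeper.properties if yours differ):

rmdir /S /Q C:\tmp\kafka-logs C:\tmp\zookeeper
.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
.\bin\windows\kafka-server-start.bat .\config\server.properties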

[Since this post is open again, I'm posting my answer here so you have everything in the same place.]

TourEiffel

**1. The easiest solution is to remove all Kafka logs and start again.** This is enough to solve the problem, e.g.:

rm -f /tmp/kafka-logs/*

**2. How to find the Kafka log path:**

  • Open the server.properties file, which is located in your Kafka folder at kafka_2.11-2.4.0\config\server.properties (depending on your Kafka version, the folder name will be kafka_<kafka_version>).

  • Then search for the log.dirs entry to see where the logs are stored, e.g. log.dirs=/tmp/kafka-logs

**3. Why: the root cause is that Kafka saved a stale cluster ID in meta.properties.**

Try deleting kafka-logs/meta.properties from your tmp folder, which is C:/tmp by default on Windows and /tmp/kafka-logs on Linux.
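
As a sketch on Linux (run from the Kafka installation directory; the /tmp/kafka-logs path is the default quoted above):

grep log.dirs config/server.properties
rm /tmp/kafka-logs/meta.properties   # removes only the stale cluster ID, topic data stays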


Haili Sun
  • if you need to know where your log directory is at first, look at: /config/server.properties and search for the log.dirs=.. row. – Emre Jun 20 '21 at 12:05
  • Note that if kafka is running in docker containers, the log path may be specified by volume config in the docker-compose - see https://docs.docker.com/compose/compose-file/compose-file-v2/#volumes – Chris Halcrow Jun 21 '21 at 03:50
  • In my case, this solved the problem: `rm -f /tmp/kafka-logs/*` – payne Aug 11 '21 at 19:16
  • Or we can just locate the file and delete it wherever it is: `locate kafka-logs/meta.properties` gives you /kafka-logs/meta.properties and then `rm /kafka-logs/meta.properties` – Rohit Rokde Jul 07 '23 at 18:22

For Mac, the following steps are needed (collected into a command sketch below the list).

  • Stop the Kafka service: brew services stop kafka
  • Open the Kafka server.properties file: vim /usr/local/etc/kafka/server.properties
  • Find the value of log.dirs in this file. For me, it is /usr/local/var/lib/kafka-logs
  • Delete the path-to-log.dirs/meta.properties file
  • Start the Kafka service: brew services start kafka
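
Collected as a command sketch (assuming the Homebrew paths quoted in the steps above):

brew services stop kafka
grep log.dirs /usr/local/etc/kafka/server.properties
rm /usr/local/var/lib/kafka-logs/meta.properties
brew services start kafka
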
mdaftab88

There is no need to delete the log/data files of Kafka. Check the Kafka error logs to find the new cluster ID, update the meta.properties file with that cluster ID, and then restart Kafka.

/home/kafka/logs/meta.properties
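
For example, with the IDs from the stack trace in the question (the expected cluster ID is the one ZooKeeper reports, Reu8ClK3TTywPiNLIQIm1w), the edit would look roughly like this, using the meta.properties path quoted above:

sed -i 's/^cluster.id=.*/cluster.id=Reu8ClK3TTywPiNLIQIm1w/' /home/kafka/logs/meta.properties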

To resolve this issue permanently, follow the steps below.

Check your zookeeper.properties file, look for the dataDir path, and change it from the tmp location to another location that will not be wiped after a server restart.

/home/kafka/kafka/config/zookeeper.properties

Copy the zookeeper folder and files to the new (non-tmp) location, then restart ZooKeeper and Kafka.

cp -r /tmp/zookeeper /home/kafka/zookeeper

Now a server restart won't affect Kafka startup.
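
For clarity, the corresponding change in zookeeper.properties would be roughly the following (the /home/kafka/zookeeper target is the one used in the cp command above; the value shipped by default is /tmp/zookeeper):

dataDir=/home/kafka/zookeeper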

Aditya Y

If you use Embedded Kafka with Testcontainers in your Java project like myself, then simply delete your build/kafka folder and Bob's your uncle.

The mentioned meta.properties can be found under build/kafka/out/embedded-kafka.
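
In other words, from the project root (assuming a Gradle build, which is what produces the build/kafka folder):

rm -rf build/kafka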

Aston

I had some old volumes lingering around. I checked the volumes like this:

docker volume list

And pruned old volumes:

docker volume prune

And also removed the ones that were Kafka-related, for example:

docker volume rm test_kafka
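
If you want to check which volumes are Kafka-related before removing anything (a sketch; the actual volume names depend on your compose project):

docker volume ls | grep kafka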
Sandip Subedi

I deleted the following directories (see the command sketch below):

a) The logs directory from the Kafka server's configured location, i.e. the log.dirs property path.

b) The tmp directory from the Kafka broker's location.

log.dirs=../tmp/kafka-logs-1
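
As a command sketch, with that relative log.dirs value (it is resolved from the directory the broker is started in):

rm -rf ../tmp/kafka-logs-1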

Aditya Goel

I was using docker-compose to re-set up Kafka on a Linux server, with a known, working docker-compose config that sets up a number of Kafka components (broker, zookeeper, connect, rest proxy), and I was getting the issue described in the OP. I fixed this for my dev server instance by doing the following (collected into a command sketch after the list):

  • docker-compose down
  • back up the kafka-logs directory using cp -r kafka-logs kafka-logs-bak
  • delete the kafka-logs/meta.properties file
  • docker-compose up -d
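
Put together as commands (assuming the compose file maps a host ./kafka-logs directory, as described in the note below):

docker-compose down
cp -r kafka-logs kafka-logs-bak
rm kafka-logs/meta.properties
docker-compose up -d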

Note for users of docker-compose:

My log files weren't in the default location (/tmp/kafka-logs). If you're running Kafka in Docker containers, the log path can be specified by volume config in the docker-compose file, e.g.

volumes:
      - ./kafka-logs:/tmp/kafka-logs

This specifies SOURCE:TARGET. ./kafka-logs is the source (i.e. a directory named kafka-logs in the same directory as the docker-compose file), and it is mounted to /tmp/kafka-logs as the volume within the Kafka container. So the logs can be deleted either from the source folder on the host machine, or from the mounted volume after doing a docker exec into the Kafka container.

see https://docs.docker.com/compose/compose-file/compose-file-v2/#volumes

Chris Halcrow

For me, meta.properties was in /usr/local/var/lib/kafka-logs. After removing it, Kafka started working.
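
That is (same path as above):

rm /usr/local/var/lib/kafka-logs/meta.properties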

Mike En

I also deleted all the contents of the folder containing the data generated by Kafka. I found the folder in my .yml file:

kafka:
    image: confluentinc/cp-kafka:7.0.0
    ports:
      - '9092:9092'
    environment:
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "true"
    volumes:
      - ./kafka-data/data:/var/lib/kafka/data
    depends_on:
      - zookeeper
    networks:
      - default

The location is given under volumes:. So, in my case, I deleted all files in the data folder located under kafka-data.
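
Roughly, with the compose file above (stop the stack first so the broker is not writing to the directory; whether it is docker-compose or docker compose depends on your setup):

docker-compose down
rm -rf ./kafka-data/data/*
docker-compose up -d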

Laura Corssac

I tried deleting the meta.properties file, but it didn't work.

In my case, it was solved by deleting legacy Docker containers and images.

But the problem with this is that it deletes all previous data, so be careful: if you want to keep the old data, this is not the right solution for you.

docker rm $(docker ps -q -f 'status=exited')
docker rmi $(docker images -q -f "dangling=true")

I ran it in my Windows environment and had the same issue. I tried deleting the logs from C:/tmp/logs and restarting, but it still failed.

Then I tried manually matching the cluster ID and it worked, although I don't know whether it's safe or not. Once you locate meta.properties somewhere in the Kafka directory, you can replace the cluster ID so that it matches the one the Kafka server reports, and then you are good to go.
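
As a sketch on Windows (the C:\tmp\kafka-logs location is an assumption based on the default log.dirs mentioned in other answers; adjust it to wherever you found meta.properties):

findstr cluster.id C:\tmp\kafka-logs\meta.properties
notepad C:\tmp\kafka-logs\meta.properties

In the editor, change the cluster.id value to the Cluster ID printed in the broker's startup error, save, and start Kafka again.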