
I'm running Kafka 3.2.0 in KRaft mode. When I restart the Kafka brokers, they fail fatally on startup and keep throwing the same error log:

2023-01-05 04:36:00,267 - ERROR [EventHandler:Logging@76] - [BrokerMetadataPublisher id=1] Error publishing broker metadata at OffsetAndEpoch(offset=1284527, epoch=5864)
org.apache.kafka.common.errors.InconsistentTopicIdException: Tried to assign topic ID aCqb9NV2QJOd7_4ELWo4pg to log for topic partition public.task_assignment-2, but log already contained topic ID 5iPMzQEuTXCk1-RHLyrRig
2023-01-05 04:36:00,268 - INFO  [main:Logging@66] - [BrokerServer id=1] Transition from STARTING to STARTED
2023-01-05 04:36:00,285 - ERROR [main:MarkerIgnoringBase@159] - [BrokerServer id=1] Fatal error during broker startup. Prepare to shutdown
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.InconsistentTopicIdException: Tried to assign topic ID aCqb9NV2QJOd7_4ELWo4pg to log for topic partition public.task_assignment-2, but log already contained topic ID 5iPMzQEuTXCk1-RHLyrRig
    at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
    at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
    at kafka.server.BrokerServer.startup(BrokerServer.scala:426)
    at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:114)
    at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:114)
    at scala.Option.foreach(Option.scala:437)
    at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:114)
    at kafka.Kafka$.main(Kafka.scala:109)
    at kafka.Kafka.main(Kafka.scala)
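
For what it's worth, the topic ID the log "already contained" is recorded on disk in the partition.metadata file inside the partition directory under log.dirs, while the ID being assigned comes from the cluster metadata. Below is a minimal diagnostic sketch (not a fix) to compare the two; it assumes the /kafka/data/1 log dir from my config further down, that at least one broker is still reachable on the PLAINTEXT listener, and the class name TopicIdCheck is just for illustration:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class TopicIdCheck {
        public static void main(String[] args) throws Exception {
            // Topic ID recorded on disk for the partition that fails to load
            // (path assumed from log.dirs=/kafka/data/1 in the config below).
            Path onDisk = Path.of("/kafka/data/1/public.task_assignment-2/partition.metadata");
            System.out.println("partition.metadata on disk:");
            System.out.println(Files.readString(onDisk));

            // Topic ID according to the cluster metadata, via the admin API.
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.72.9.205:9092");
            try (Admin admin = Admin.create(props)) {
                TopicDescription desc = admin
                        .describeTopics(Collections.singleton("public.task_assignment"))
                        .topicNameValues()
                        .get("public.task_assignment")
                        .get();
                System.out.println("topic ID in cluster metadata: " + desc.topicId());
            }
        }
    }

If the two IDs differ, that matches the exception above: the partition directory on disk belongs to an older incarnation of the topic than the one the metadata log now describes.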

I expect the Kafka brokers to start up as normal.

Edit: I had deleted the topic public.task_assignment before the restart. Edit: it seems related to the topic metadata; the broker also logs this:

2023-01-05 11:04:48,234 - ERROR [EventHandler:EventQueue$FailureLoggingEvent@60] - [BrokerMetadataListener id=1] Unexpected error handling HandleCommitsEvent
java.lang.RuntimeException: Unable to delete topic with id MXaqZEBbQDOl5zzVOL3ILw: no such topic found.
    at org.apache.kafka.image.TopicsDelta.replay(TopicsDelta.java:104)
    at org.apache.kafka.image.MetadataDelta.replay(MetadataDelta.java:250)
    at org.apache.kafka.image.MetadataDelta.replay(MetadataDelta.java:186)
    at kafka.server.metadata.BrokerMetadataListener.$anonfun$loadBatches$3(BrokerMetadataListener.scala:212)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
    at kafka.server.metadata.BrokerMetadataListener.kafka$server$metadata$BrokerMetadataListener$$loadBatches(BrokerMetadataListener.scala:204)
    at kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.run(BrokerMetadataListener.scala:111)
    at org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:121)
    at org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:200)
    at org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:173)
    at java.base/java.lang.Thread.run(Thread.java:829)
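
To double-check what the controller quorum still knows after the delete, here is a small sketch (assuming a reachable broker on the client listener, and that TopicListing.topicId() is available in this 3.x admin client, which I believe it is) that lists every topic name together with its topic ID, so I can see whether public.task_assignment or the ID MXaqZEBbQDOl5zzVOL3ILw is still around:

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicListing;

    public class ListTopicIds {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.72.9.205:9092");
            try (Admin admin = Admin.create(props)) {
                // Print every topic the cluster metadata currently knows,
                // together with the topic ID it is registered under.
                for (TopicListing t : admin.listTopics().listings().get()) {
                    System.out.println(t.name() + " -> " + t.topicId());
                }
            }
        }
    }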

My current Kafka broker config (KRaft mode):

############################# Server Basics #############################

process.roles=broker,controller
node.id=1

# The connect string for the controller quorum
controller.quorum.voters=1@kafka-0.kafka-svc-headless.kafka-ns.svc.cluster.local:9093,2@kafka-1.kafka-svc-headless.kafka-ns.svc.cluster.local:9093,3@kafka-2.kafka-svc-headless.kafka-ns.svc.cluster.local:9093

############################# Socket Server Settings #############################

listeners=PLAINTEXT://10.72.9.205:9092,CONTROLLER://10.72.9.205:9093
inter.broker.listener.name=PLAINTEXT
advertised.listeners=PLAINTEXT://10.72.9.205:9092
controller.listener.names=CONTROLLER

listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/kafka/data/1

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################
log.retention.hours=168

#log.retention.bytes=1073741824
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000

svc.service.port.voter=9093
url.path=kafka/3.2.0/kafka_2.13-3.2.0.tgz
exporter.kafka.service.port.metrics=9308
ui.port=tcp://10.76.7.168:80
svc.port.9093.tcp=tcp://10.76.6.196:9093
exporter.kafka.port=tcp://10.76.0.48:9308
ui.port.80.tcp=tcp://10.76.7.168:80
svc.service.port.client=9092
svc.port.9092.tcp=tcp://10.76.6.196:9092
ui.port.80.tcp.addr=10.76.7.168
svc.port.9092.tcp.proto=tcp
connect.service.host=10.76.0.215
replica.fetch.max.bytes=10485760
exporter.kafka.port.9308.tcp=tcp://10.76.0.48:9308
exporter.kafka.port.9308.tcp.addr=10.76.0.48
exporter.kafka.port.9308.tcp.proto=tcp
svc.port.9092.tcp.port=9092
ui.port.80.tcp.proto=tcp
ui.service.host=10.76.7.168
heap.opts=-Xms768M -Xmx768M
svc.service.host=10.76.6.196
message.max.bytes=10485760
svc.port.9093.tcp.proto=tcp
connect.port.8083.tcp=tcp://10.76.0.215:8083
connect.service.port.client=8083
svc.port.9093.tcp.addr=10.76.6.196
connect.port=tcp://10.76.0.215:8083
exporter.kafka.service.port=9308
svc.port.9093.tcp.port=9093
connect.port.8083.tcp.proto=tcp
zookeeper.connect=
ui.service.port=80
exporter.kafka.port.9308.tcp.port=9308
ui.service.port.http=80
svc.service.port=9092
exporter.kafka.service.host=10.76.0.48
auto.create.topics.enable=false
connect.port.8083.tcp.port=8083
svc.port=tcp://10.76.6.196:9092
data=/kafka/data
connect.port.8083.tcp.addr=10.76.0.215
connect.service.port=8083
ui.port.80.tcp.port=80
svc.port.9092.tcp.addr=10.76.6.196
broker.id=1
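
Regarding the persistent-storage question in the comment thread below: in KRaft mode the broker's identity is written to meta.properties under log.dirs when the storage is formatted, so a quick sanity check is to confirm that file survives the pod restart and still carries the expected cluster.id and node.id. A minimal sketch, again assuming the /kafka/data/1 path from the config above:

    import java.io.FileInputStream;
    import java.util.Properties;

    public class MetaPropertiesCheck {
        public static void main(String[] args) throws Exception {
            // meta.properties is created by "kafka-storage.sh format" and must
            // survive restarts; if the volume is wiped, the broker identity is lost.
            Properties meta = new Properties();
            try (FileInputStream in = new FileInputStream("/kafka/data/1/meta.properties")) {
                meta.load(in);
            }
            System.out.println("cluster.id = " + meta.getProperty("cluster.id"));
            System.out.println("node.id    = " + meta.getProperty("node.id"));
        }
    }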
Comments:
  • Please show your kafka server config. When you say restart, do you mean a full server restart, or only the JVM process? – OneCricketeer Jan 05 '23 at 15:51
  • Yeah, a full server restart of Kafka, and it cannot come back up. On every restart it logs the same error, "Tried to assign topic ID aCqb9NV2QJOd7_4ELWo4pg to log for topic partition public.task_assignment-2, but log already contained topic ID 5iPMzQEuTXCk1-RHLyrRig", and shuts down at this step – onemin Jan 09 '23 at 02:32
  • Kafka defaults to store data in /tmp. You need persistent storage if you are trying to preserve any data (including cluster id) – OneCricketeer Jan 09 '23 at 13:47
  • Yeah, I use persistent storage, a persistent volume in k8s. I deleted the topic public.task_assignment, but somehow it still exists, and I get the error "tried to assign topic id for topic public.task_assignment" – onemin Jan 10 '23 at 04:06
  • Are you sure? I see you have set `log.dirs=/tmp/kraft-broker-logs`, and therefore not overrode that via k8s. And if you are using k8s, then I would suggest using Strimzi, not rolling your own configs. Also, Kraft isn't considered "production ready" until Kafka 3.3.1 – OneCricketeer Jan 10 '23 at 04:12
  • I showed the wrong config (edited in the question); it is `log.dirs=/kafka/data/1`. I deploy Kafka in KRaft mode with each instance running both the broker and controller roles. The problem is that Kafka used to restart and work normally, but this time every restart produces the same error log – onemin Jan 10 '23 at 04:24
  • I have not used Kraft mode with Kafka, so don't know where the ID is stored/configured. Like I said, you may want to upgrade to 3.3.1 – OneCricketeer Jan 10 '23 at 04:30
  • let me upgrade to 3.3.1 – onemin Jan 10 '23 at 04:47
