
I have a Kafka environment with 3 brokers and 1 ZooKeeper node. I have pushed more than 20K messages into my topic, and Apache Storm processes the data that the producer adds to the topic.

After a few hours, when I try to produce messages to Kafka, it shows the following exception:

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

After restarting the Kafka servers it works fine, but in production I can't restart my servers every time, so can anyone help me figure out my issue?

My Kafka producer configuration is as follows:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

Properties prodProperties = new Properties();
prodProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "list of brokers"); // placeholder for the actual broker list
prodProperties.put(ProducerConfig.ACKS_CONFIG, "1");
prodProperties.put(ProducerConfig.RETRIES_CONFIG, "3");
prodProperties.put(ProducerConfig.LINGER_MS_CONFIG, 5);
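
(For reference, a minimal usage sketch built on that config; the String serializers and the topic name "my-topic" are assumptions, since the original snippet does not show them:)

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Assumed serializers; the original config does not include them.
prodProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
prodProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

try (KafkaProducer<String, String> producer = new KafkaProducer<>(prodProperties)) {
    producer.send(new ProducerRecord<>("my-topic", "key", "value")); // "my-topic" is a placeholder
}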
Anirudh Kaki
    Did you look in the logs of the Kafka server and see anything? – Tarun Lalwani Feb 23 '18 at 14:48
  • I cannot help with Storm, but with Flink we had a problem where Flink synchronized with the Kafka broker every 10 ms and produced a VERY high load on the __consumer_offsets topic, so nothing was working well. Check the logs, and use monitoring tools to see what load you have on the broker side. And BTW, a single instance of ZooKeeper is a very Bad Idea (TM); you need at least 3 for a production system. – Seweryn Habdank-Wojewódzki Feb 26 '18 at 09:53

1 Answer


Although Kafka producer tuning is quite a hard topic, I can imagine that your producer is trying to generate records faster than it can transfer them to your Kafka cluster.

There is a producer setting, buffer.memory, which defines how much memory the producer can use to buffer records waiting to be sent before it blocks. The default value is 33554432 bytes (32 MiB).

Increasing the producer's buffer memory gives it more headroom before send() blocks. Try different values, for example 100 MB.
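
As a concrete sketch, assuming the same prodProperties object from the question, the buffer can be raised via ProducerConfig.BUFFER_MEMORY_CONFIG:

// Sketch: raise the producer buffer from the 32 MiB default to ~100 MB.
// 100 MB is just the example figure above; tune it for your workload.
prodProperties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 100L * 1024 * 1024);

Note that if the producer keeps outrunning the cluster, send() will eventually block again once the larger buffer fills; the bigger buffer only buys headroom for bursts.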

codejitsu