
I am trying to autoscale with HPA using Kafka metrics, following the steps in this article: https://medium.com/google-cloud/kubernetes-hpa-autoscaling-with-kafka-metrics-88a671497f07

However, when I deploy the YAML file given in step 3, the pod doesn't come up. The log shows the error below.


panic: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

goroutine 1 [running]:
main.NewExporter(0xc4200bb7e0, 0x2, 0x2, 0x100, 0x96d438, 0x0, 0x96d438, 0x0, 0x0, 0x96d438, ...)
        /home/travis/gopath/src/github.com/danielqsj/kafka_exporter/kafka_exporter.go:185 +0xbbc
main.main()
        /home/travis/gopath/src/github.com/danielqsj/kafka_exporter/kafka_exporter.go:606 +0x3aa7

(Screenshot: the same error as shown in Stackdriver.)

1 Answer


Looking at the prerequisites listed in the provided URL:

  1. You have a Docker running. You know the rules of this game. ;)
  2. You have a Kubernetes cluster (GKE) running on GCP.
  3. You have kubectl CLI installed and configured to your GKE cluster.

These fail to mention the need for a Kafka cluster, and the steps do not include any Kafka deployment. So how exactly are you going to export metrics from Kafka?

In step 3, the spec.template.spec.containers[0].command path in the YAML clearly defines two brokers to use:

- "--kafka.server=my-kafka-broker-1:9092"
- "--kafka.server=my-kafka-broker-2:9092"

If these don't exist, of course the Kafka exporter will throw an error stating there are no brokers!
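For illustration, here is a minimal sketch of what that step-3 Deployment could look like once the broker hostname actually resolves inside the cluster. The names used (kafka-exporter, my-kafka-broker-1, the default namespace) and the image tag are assumptions based on the article's placeholders, not a verified manifest, and the Prometheus/Stackdriver sidecar from the article is omitted for brevity; substitute the Service name your Kafka installation actually creates:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-exporter
  template:
    metadata:
      labels:
        app: kafka-exporter
    spec:
      containers:
      - name: kafka-exporter
        image: danielqsj/kafka-exporter:latest
        command:
          - kafka_exporter
          # This hostname must resolve from inside the cluster, i.e. there must be
          # a Service called my-kafka-broker-1 in the same namespace (or use the
          # fully qualified form my-kafka-broker-1.<namespace>.svc.cluster.local).
          - "--kafka.server=my-kafka-broker-1:9092"
        ports:
        - containerPort: 9308  # kafka_exporter's default metrics port

If Kafka ran outside the cluster instead, the --kafka.server value would have to be an address that is reachable from the pod network.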

J. Roovers
  • Thank you for your reply! Basically, I want to scale based on the Kafka metrics. I have a GCP cluster running where I am performing step 3. Do I need another cluster to run Kafka? – Mitesh Gangaramani Dec 04 '19 at 09:37
  • How exactly are you going to export metrics from Kafka? >> As mentioned in the article, I am using Prometheus to read metrics from Kafka and push them to Google Stackdriver. – Mitesh Gangaramani Dec 04 '19 at 09:45
  • Can you help me with what I should do to fill in the components the article leaves out? – Mitesh Gangaramani Dec 04 '19 at 10:08
  • Please clarify: do you already have Kafka installed somewhere? This may be obvious, but there is no mention of this in your original question and there are no steps defined in the URL to set up an instance of Kafka... – J. Roovers Dec 04 '19 at 10:09
  • No, I have just followed the steps in the article, so I think I will need it running. Can I use my existing GKE cluster, or should I spin up a GCP instance and run it there? – Mitesh Gangaramani Dec 04 '19 at 10:20
  • Yeah, so that is your problem. I'm not familiar with your environment, but I would guess you want Kafka in the same cluster so you can resolve the domain names specified in step 3 from my original answer (kafka.server=my-kafka-broker-1:9092); see the sketch after this thread. You'd have to look up another guide for installing Kafka on GKE, because I'm not really familiar with Kafka, just the basics of Kubernetes. – J. Roovers Dec 04 '19 at 10:56
  • The environment is just what the article specifies. I created a cluster and enabled Stackdriver on it, up to step 2. That's it. Can you suggest a URL that would be suitable for completing the article? – Mitesh Gangaramani Dec 04 '19 at 10:58
  • Like I said, I'm not familiar with Kafka on GKE, so I can't really recommend a guide. – J. Roovers Dec 04 '19 at 11:02
  • OK. Thank you for helping me figure out the exact issue. I'll check it further. – Mitesh Gangaramani Dec 04 '19 at 11:03
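For completeness, here is a rough sketch of the kind of Service that would make my-kafka-broker-1:9092 resolvable from the exporter pod, assuming the Kafka brokers end up running in the same cluster and namespace. The labels (app: kafka, broker-id: "1") are placeholders; whatever Helm chart or operator you use to install Kafka will normally create equivalent Services under its own names, in which case you should point --kafka.server at those instead:

apiVersion: v1
kind: Service
metadata:
  name: my-kafka-broker-1   # must match the hostname used in --kafka.server
spec:
  selector:
    app: kafka              # placeholder labels; must match your broker pods' labels
    broker-id: "1"
  ports:
  - name: kafka
    port: 9092              # port the exporter connects to
    targetPort: 9092        # port the broker container listens on

In-cluster DNS then resolves my-kafka-broker-1 (from the same namespace) or my-kafka-broker-1.<namespace>.svc.cluster.local (from anywhere in the cluster) to that Service.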