
I have a Kafka cluster running on AWS MSK, with Kafka producer and consumer Go clients running in Kubernetes. The producer is responsible for sending a continuous stream of data to Kafka. I need help solving the following problems:

  1. Suppose there is a code change in the producer and it has to be redeployed to Kubernetes. How can I do that? Since the data is generated continuously, I cannot simply stop the running producer and deploy the updated one; I would lose the data produced during the update.

  2. Sometimes, due to a panic in the Go code, the client crashes, but since it is running as a pod, Kubernetes restarts it. I cannot tell whether this is a good thing or a bad thing.

Thanks


1 Answer


For your first question, I would suggest a rolling update of your Deployment in the cluster: Kubernetes replaces pods incrementally, starting new ones before taking old ones down, so the old version keeps producing while the new one comes up. For the second, that is the general behavior of Deployments in Kubernetes: a crashed container is restarted automatically. I could imagine an external monitoring solution that un-deploys your application, or stops it from handling requests, after a panic, but it would help if you could explain why exactly you need that behavior.
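
The rolling update is configured on the Deployment itself. Below is a minimal sketch; the Deployment name, labels, and image are hypothetical placeholders. With `maxUnavailable: 0` and `maxSurge: 1`, Kubernetes starts one new-version pod and waits until it is ready before terminating an old one, so a producer is always running during the rollout.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-producer            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kafka-producer
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0           # never remove an old pod before its replacement is ready
      maxSurge: 1                 # allow one extra pod during the rollout
  template:
    metadata:
      labels:
        app: kafka-producer
    spec:
      terminationGracePeriodSeconds: 30   # time for the old producer to flush in-flight messages
      containers:
        - name: producer
          image: registry.example.com/kafka-producer:v2   # hypothetical image tag
```

You trigger the update with `kubectl apply -f` on the changed manifest, or with `kubectl set image deployment/kafka-producer producer=registry.example.com/kafka-producer:v2`. Note that the rollout by itself only guarantees a producer is always running; to avoid losing buffered messages, the old pod should flush its Kafka producer when it receives SIGTERM, within the grace period.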

Avik Aggarwal
  • Thanks for the answer. Regarding the second question, I don't exactly need such behavior, but that is what I was getting from k8s. You mention an external solution; what would that be? – Piyush Kumar Jun 09 '19 at 08:23
  • Maybe a log analyser or a log-based trigger through some tool. You could export your application's logs through sidecar containers in each pod and then use those logs to configure triggers. – Avik Aggarwal Jun 09 '19 at 09:23
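
Following up on the shutdown and panic points above, here is a minimal Go sketch, assuming the github.com/segmentio/kafka-go client (the broker address, topic, and log text are hypothetical; any client whose Close flushes buffered messages works the same way). It flushes the writer on SIGTERM, which Kubernetes sends during a rolling update, and logs a recovered panic before exiting so a log-based trigger like the one described in the comment can match on it.

```go
package main

import (
	"context"
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/segmentio/kafka-go" // assumed client library
)

func main() {
	// Log a recovered panic in a recognizable form before exiting, so a
	// log-based trigger can match it; exit non-zero so Kubernetes restarts the pod.
	defer func() {
		if r := recover(); r != nil {
			log.Printf("producer panic: %v", r)
			os.Exit(1)
		}
	}()

	w := &kafka.Writer{
		Addr:  kafka.TCP("b-1.msk.example.com:9092"), // hypothetical MSK broker
		Topic: "events",                              // hypothetical topic
	}
	defer w.Close() // Close flushes any buffered messages before returning

	// Kubernetes sends SIGTERM to the old pod during a rolling update and
	// waits terminationGracePeriodSeconds before killing it.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, os.Interrupt)
	defer stop()

	for {
		select {
		case <-ctx.Done():
			log.Println("SIGTERM received, flushing and shutting down")
			return // the deferred w.Close() drains in-flight messages
		default:
			// Stand-in for the real data stream.
			err := w.WriteMessages(context.Background(),
				kafka.Message{Value: []byte(time.Now().Format(time.RFC3339Nano))})
			if err != nil {
				log.Printf("write failed: %v", err)
			}
		}
	}
}
```

One caveat: `recover` only catches panics in the goroutine where the deferred function runs; a panic in another goroutine still crashes the whole process, which is exactly where the log-based trigger from the comment above is useful.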