I have two .NET applications hosted in an Azure Kubernetes Service cluster. One is a web application, TestWeb. The other is a console application running as a background service, TestWorker. TestWeb receives requests from users and pushes a message to the Storage Queue st1, while TestWorker reads messages from st1 and processes them.

Suppose the deployments for both services exist in cluster AKS1. Now I want to create a new cluster, AKS2, and deploy TestWeb and TestWorker there, writing to and reading from the same queue st1.
I can update the Traffic Manager endpoint to AKS2 after the deployment in AKS2 succeeds. That way, all requests will route to the TestWeb hosted in AKS2. However, the TestWorker instances in AKS1 and AKS2 will still both be reading from the same queue.
How can I gracefully shut down the pods in AKS1 so that only the TestWorker deployed in AKS2 reads from storage queue st1?

I can set the replica count to 0 for all pods in AKS1, but this does not guarantee that a message already picked up from the queue by TestWorker in AKS1 is completely processed. Even if I set the termination grace period to more than 30s, a message might be read in the last second and the pod would still shut down abruptly. What is the best way to gracefully shut down the pod in this case?
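On the Kubernetes side, one part of the answer is sizing `terminationGracePeriodSeconds` so the pod has time to drain its in-flight message after receiving SIGTERM. A minimal sketch, assuming the worker needs at most about five minutes per message; the deployment name, labels, and image here are illustrative, not from the question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testworker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testworker
  template:
    metadata:
      labels:
        app: testworker
    spec:
      # On scale-down Kubernetes sends SIGTERM, then waits this long
      # before sending SIGKILL. Size it to cover one worst-case message.
      terminationGracePeriodSeconds: 330
      containers:
        - name: testworker
          image: myregistry.azurecr.io/testworker:latest
```

Scaling the AKS1 deployment to 0 then gives each pod up to 330 seconds to finish the message it already holds, provided the application itself reacts to SIGTERM.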

avocado
Your service should perform cleanup when it receives a TERM signal. K8s sends this signal before starting the grace period. – jordanm Aug 02 '23 at 14:19
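The comment above can be sketched in .NET: a `BackgroundService` receives the SIGTERM via its `stoppingToken`, stops taking new messages, but finishes the one it already holds. A minimal sketch assuming the `Azure.Storage.Queues` SDK; `QueueDrainWorker` and `ProcessAsync` are illustrative names, not from the question:

```csharp
using Azure.Storage.Queues;
using Microsoft.Extensions.Hosting;

public class QueueDrainWorker : BackgroundService
{
    private readonly QueueClient _queue;

    public QueueDrainWorker(QueueClient queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // stoppingToken is signalled when the host receives SIGTERM,
        // which Kubernetes sends at the start of the grace period.
        while (!stoppingToken.IsCancellationRequested)
        {
            // Stop *receiving* as soon as shutdown is requested...
            var msg = await _queue.ReceiveMessageAsync(
                visibilityTimeout: TimeSpan.FromMinutes(5));
            if (msg.Value is null)
            {
                await Task.Delay(TimeSpan.FromSeconds(1), stoppingToken);
                continue;
            }

            // ...but finish the message already in hand, deliberately
            // ignoring the token so an in-flight message is never abandoned.
            await ProcessAsync(msg.Value.Body, CancellationToken.None);
            await _queue.DeleteMessageAsync(msg.Value.MessageId, msg.Value.PopReceipt);
        }
    }

    // Hypothetical processing step standing in for the real work.
    private Task ProcessAsync(BinaryData body, CancellationToken ct) =>
        Task.CompletedTask;
}
```

Note the host's own shutdown timeout (`HostOptions.ShutdownTimeout`) must also be raised to match `terminationGracePeriodSeconds`, otherwise the host may abort `ExecuteAsync` before the message is drained. If the pod is killed anyway, the message's visibility timeout means it reappears on st1 and is picked up by the AKS2 worker, so processing should be idempotent.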