I have Cilium installed in my test cluster (AWS, with the AWS CNI deleted because we use the Cilium CNI plugin), and whenever I delete the cilium namespace (or run `helm delete`), the `hubble-ui` pod gets stuck in the Terminating state. The pod has a couple of containers, but I notice that the container named `backend` exits with code 137 when the namespace is deleted, leaving the `hubble-ui` pod, and the namespace the pod is in, stuck in Terminating.

From what I have read online, containers exit with 137 when they try to use more memory than they have been allocated. In my test cluster, however, no resource limits are defined on the pod or the namespace (`spec.containers[*].resources` is `{}`), and no error message is shown as the termination reason. I am using the Cilium Helm chart v1.12.3, but this issue was occurring even before we updated the chart version.
I would like to know what is causing this, as it is breaking my CI pipeline. How can I ensure a graceful exit of the `backend` container, as opposed to resorting to the finalizer-clearing workaround sketched below?
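For context, by "clearing finalizers" I mean a forced cleanup along these lines (names are placeholders again), which I would like to avoid:

```bash
# Placeholders: the namespace and the stuck pod.
NS=cilium
POD=hubble-ui-xxxxxxxxxx-xxxxx

# Force-delete the stuck pod without waiting for graceful termination.
kubectl -n "$NS" delete pod "$POD" --grace-period=0 --force

# Strip the namespace's finalizers so the namespace deletion can complete.
kubectl get namespace "$NS" -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/$NS/finalize" -f -
```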