If you are running your Kubernetes cluster behind a load balancer (such as HAProxy), the timeout configured in the kubelet may be longer than the timeout configured in the load balancer.
For instance, the kubelet's streamingConnectionIdleTimeout setting defaults to 4h, which you can verify through the node's /configz endpoint:
$ kubectl proxy --port=8001 &
$ NODE_NAME="XXXX"; curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' | grep streaming
"streamingConnectionIdleTimeout": "4h0m0s",
But if HAProxy (or your preferred load balancer) is configured with shorter timeouts, for example:
defaults
timeout client 1m
timeout server 1m
...
then a port-forward going through that load balancer will time out as soon as the forwarded connection stays idle for longer than a minute:
$ date; kubectl port-forward service/XXXX 1234:80
Mon Jul 5 10:58:20 CEST 2021
Forwarding ...
# after a minute
E0705 10:59:21.217577 64160 portforward.go:233] lost connection to pod
To fix this, you can either increase the load balancer timeout (be careful with this: depending on your cluster, it can have undesirable side effects) or, if your environment allows it, bypass the load balancer and connect directly to the API server when port-forwarding. Both options are sketched below.
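On the load balancer side, HAProxy applies timeout tunnel to connections that have been upgraded (as port-forward, exec and attach streams are), so raising that value rather than the generic client/server timeouts is usually enough. The snippet below is a minimal sketch, not a drop-in config; the 4h value simply mirrors the default streamingConnectionIdleTimeout:

defaults
timeout client 1m
timeout server 1m
# applies once the connection has been upgraded (port-forward, exec, attach);
# align it with the kubelet's streamingConnectionIdleTimeout (4h by default)
timeout tunnel 4h

To bypass the load balancer instead, you can point kubectl straight at an API server instance with the global --server flag, assuming your kubeconfig credentials are accepted there and the serving certificate covers that address (the host below is a placeholder):

$ kubectl port-forward service/XXXX 1234:80 --server=https://<apiserver-host>:6443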