I have a working Kubernetes deployment of my application.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: my-app
          image: my-image
          ...
          readinessProbe:
            httpGet:
              port: 3000
              path: /
          livenessProbe:
            httpGet:
              port: 3000
              path: /
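As shown, neither probe sets any timing fields, so as far as I can tell they run with the Kubernetes defaults. Written out explicitly (just a sketch, assuming nothing in the elided ... sections overrides these), the probes should be equivalent to:

          readinessProbe:
            httpGet:
              port: 3000
              path: /
            initialDelaySeconds: 0   # probe as soon as the container starts
            periodSeconds: 10        # probe every 10 seconds
            timeoutSeconds: 1        # a single probe fails after 1 second without a response
            failureThreshold: 3      # mark the pod NotReady after 3 consecutive failures
            successThreshold: 1      # mark it Ready again after 1 success
          livenessProbe:
            httpGet:
              port: 3000
              path: /
            initialDelaySeconds: 0
            periodSeconds: 10
            timeoutSeconds: 1
            failureThreshold: 3      # restart the container after 3 consecutive failures
            successThreshold: 1      # must be 1 for liveness probes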
When I apply the deployment, I can see that it runs correctly and the application responds to my requests.
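Roughly, this is how I apply the manifest and check that it answers (the file name is just what I use locally, and I assume the container listens on port 3000, as the probes suggest):

$ kubectl apply -f deployment.yaml
$ kubectl port-forward deploy/my-app 3000:3000 &
$ curl -s http://localhost:3000/

The curl call returns a normal response, and the pod events look healthy: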
$ kubectl describe pod -l app=my-app
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m7s default-scheduler Successfully assigned XXX
Normal Pulled 4m5s kubelet, pool-standard-4gb-2cpu-b9vc Container image "my-app" already present on machine
Normal Created 4m5s kubelet, pool-standard-4gb-2cpu-b9vc Created container my-app
Normal Started 4m5s kubelet, pool-standard-4gb-2cpu-b9vc Started container my-app
The application has a defect and crashes under certain circumstances. When I "invoke" such a condition, I see the following in the pod events:
$ kubectl describe pod -l app=my-app
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m45s default-scheduler Successfully assigned XXX
Normal Pulled 6m43s kubelet, pool-standard-4gb-2cpu-b9vc Container image "my-app" already present on machine
Normal Created 6m43s kubelet, pool-standard-4gb-2cpu-b9vc Created container my-app
Normal Started 6m43s kubelet, pool-standard-4gb-2cpu-b9vc Started container my-app
Warning Unhealthy 9s kubelet, pool-standard-4gb-2cpu-b9vc Readiness probe failed: Get http://10.244.2.14:3000/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 4s (x3 over 14s) kubelet, pool-standard-4gb-2cpu-b9vc Liveness probe failed: Get http://10.244.2.14:3000/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Normal Killing 4s kubelet, pool-standard-4gb-2cpu-b9vc Container my-app failed liveness probe, will be restarted
I expect the liveness probe to fail and the container to be restarted. But why do I also see a Readiness probe failed event?