Below is the Pod manifest used to deploy a container:
apiVersion: v1
kind: Pod
metadata:
  name: my-container
  labels:
    app: myapp
    rel: stable
spec:
  containers:
  - name: my-container
    image: myimage:latest
    resources:
      limits:
        memory: "128Mi"   # 128 MiB
        cpu: "200m"       # 200 millicpu (0.2 CPU, i.e. 20% of one core)
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /health-check
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 2   # Default is 1
      periodSeconds: 5    # Default is 10
      failureThreshold: 1 # Default is 3
    readinessProbe:
      httpGet:
        path: /health-check
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 5    # Default is 10
      failureThreshold: 1 # Default is 3
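For context, this is how I apply the Pod and inspect the probe results, using standard kubectl commands (assuming the manifest is saved as pod.yaml):

kubectl apply -f pod.yaml
kubectl describe pod my-container   # probe failures appear under Events
kubectl get pod my-container -w     # READY column reflects the readinessProbe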
The /health-check endpoint returns HTTP status 200 with the following JSON body:
{
  "details": {
    "app": {
      "framework": "gin",
      "name": "my-app-local",
      "version": "v1"
    },
    "databases": [
      {
        "database": "my_db",
        "host": "localhost",
        "name": "mysql",
        "status": "Normal"
      }
    ]
  },
  "status": "Normal"
}
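For reference, a minimal sketch of what such a /health-check handler could look like with gin (the framework named in the payload). The struct names and hard-coded values here are illustrative assumptions, not the actual application code:

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// appInfo and dbInfo mirror the JSON shown above; the field layout is
// an assumption based on that example response.
type appInfo struct {
	Framework string `json:"framework"`
	Name      string `json:"name"`
	Version   string `json:"version"`
}

type dbInfo struct {
	Database string `json:"database"`
	Host     string `json:"host"`
	Name     string `json:"name"`
	Status   string `json:"status"`
}

func main() {
	r := gin.Default()

	// Handler for the path the livenessProbe and readinessProbe both call.
	r.GET("/health-check", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{
			"details": gin.H{
				"app": appInfo{Framework: "gin", Name: "my-app-local", Version: "v1"},
				"databases": []dbInfo{
					{Database: "my_db", Host: "localhost", Name: "mysql", Status: "Normal"},
				},
			},
			"status": "Normal",
		})
	})

	// Listen on the same port as containerPort / the probes' port in the Pod spec.
	r.Run(":80")
}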
Given the Pod YAML above, how does the kubelet read the "status" values for "databases" and "app" to verify that the container is running correctly? Does this happen as part of the livenessProbe or the readinessProbe?