I have a Kubernetes Deployment with 2 replicas, exposed through a ClusterIP Service. Whenever the second pod gets scheduled onto the same node as the first, its readiness probe (an HTTP call to an API on the container port) fails with an "unable to connect" error. Is this caused by a port conflict between the two pods?
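The Service in front of the Deployment is a plain ClusterIP mapping straight to the container port; a minimal sketch of what it looks like (the Service name here is a placeholder, the real one just selects app: demo and targets port 8081):

apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: demo
spec:
  type: ClusterIP
  selector:
    app: demo
  ports:
    - port: 8081
      targetPort: 8081

And the Deployment itself: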
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  namespace: demo
  labels:
    app: demo
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/app-configmap.yaml") . | sha256sum }}
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: demo-app-image:1.0.1
          ports:
            - containerPort: 8081
          livenessProbe:
            httpGet:
              path: /healthcheck
              port: 8081
            initialDelaySeconds: 30
            periodSeconds: 60
            failureThreshold: 3
            successThreshold: 1
            timeoutSeconds: 15
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 8081
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3
            successThreshold: 1
            timeoutSeconds: 15
          volumeMounts:
            - name: config-volume
              mountPath: /config/app
      volumes:
        - name: config-volume
          configMap:
            name: demo-configmap
            items:
              - key: config
                path: config.json
      nodeSelector:
        usage: demo-server
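To narrow this down, here is how I am inspecting the failing pod (the pod name is a placeholder, and the last command assumes curl is available inside the image):

# Confirm both replicas really landed on the same node
kubectl -n demo get pods -o wide

# Readiness probe failure events show up here
kubectl -n demo describe pod <pod-name>

# Hit the endpoint from inside the pod itself
kubectl -n demo exec <pod-name> -- curl -v http://localhost:8081/healthcheck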