I have set up an EKS cluster and configured it to run 6 or so different microservices in their own pods. I am using an ALB as the ingress to these pods and have noticed that connections to the pods will sometimes time out. I am struggling to determine exactly what is causing this.
The pods work as expected for the first X requests, but once they have been left idle for a while and I then make a new request, the connection times out. Could this be down to the ALB, or am I missing something on the Kubernetes side?
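To narrow down whether it is the ALB or the pod itself, would bypassing the ALB with something like the following be a meaningful test? (The service name is taken from the manifests below; I am assuming the default namespace.)

kubectl port-forward svc/challengepasswordlookupservice 8080:80
# in another shell, after the cluster has been idle for a while
curl -v --max-time 30 http://localhost:8080/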
One of the deployments looks like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: challengepasswordlookupservice
  labels:
    app: challengepasswordlookupservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: challengepasswordlookupservice
  template:
    metadata:
      labels:
        app: challengepasswordlookupservice
    spec:
      containers:
        - name: challengepasswordlookupservice
          image: *withheld*
          ports:
            - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: challengepasswordlookupservice
spec:
  type: NodePort
  selector:
    app: challengepasswordlookupservice
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
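The container currently has no readiness or liveness probes. Could that be a factor when the pods have been idle for a while? If so, I assume the probes would sit under the container spec and look roughly like this (the /healthz path and the timings are placeholders; my service does not necessarily expose that path):

readinessProbe:
  httpGet:
    path: /healthz   # placeholder path
    port: 80
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz   # placeholder path
    port: 80
  periodSeconds: 20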
And the ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: golf-high-service-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig":{ "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-2:206106816545:certificate/8d91b886-9d57-4fcf-b016-04959cf4d97d
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=60
spec:
  rules:
    - host: "*withheld*"
      http:
        paths:
          - path: "/*"
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          - path: "/*"
            pathType: Prefix
            backend:
              service:
                name: challengepasswordlookupservice
                port:
                  number: 80
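I have also not set any explicit health-check or target-group annotations on the ingress, so the ALB target group presumably uses its defaults. If that is relevant, I assume they would go in the annotations block above and look roughly like this (the path and the numbers are illustrative guesses, not values I have tested):

alb.ingress.kubernetes.io/healthcheck-path: /
alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30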