I have an EKS cluster in which the AWS VPC CNI plugin has been replaced by Calico (hence the Calico installation). After installing the chart with Helm, I run kubectl describe ingress -n my-ns and see this error:
...Failed deploy model due to Internal error occurred: failed calling webhook "mtargetgroupbinding.elbv2.k8s.aws"...
My ingress.yaml:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "front.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "front.labels" . | nindent 4 }}
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/subnets: subnet-0bbd31e479f6211d7, subnet-017bb4e710d71fcc1, subnet-0e8474c825ada2138 # Public subnets
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
{{- end }}
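For completeness, here is a minimal values.yaml fragment that this template assumes (a sketch only; the port value and the empty tls list are placeholders, not copied from my actual chart):

```yaml
ingress:
  enabled: true
  # tls is optional; each entry would be a map with `hosts` and `secretName`
  tls: []

service:
  port: 80
```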
My service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "front.fullname" . }}
  labels:
    {{- include "front.labels" . | nindent 4 }}
spec:
  type: NodePort
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "front.selectorLabels" . | nindent 4 }}
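My understanding (an assumption on my part, not something I have verified) is that with alb.ingress.kubernetes.io/target-type: instance the controller registers the worker nodes in the target group through the NodePort this Service exposes, which is why the Service is of type NodePort. Rendered, it would look roughly like this (assuming .Values.service.port is 80 and the release fullname resolves to front):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: front
spec:
  type: NodePort
  ports:
    - port: 80          # cluster-internal port; a NodePort is auto-assigned
      targetPort: http  # named container port from the Deployment
      protocol: TCP
      name: http
```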
In the AWS console everything looks fine: the ALB is in the active state, but opening its DNS name fails.
Interestingly, if I change the Service type to LoadBalancer, a Classic Load Balancer is deployed and works fine, and its DNS name responds.
I realize this initial information may seem messy and insufficient, but I have no idea which direction to take to solve the problem. I will gladly provide any additional details on request and would appreciate any help.