I am doing some tests with the ingress-nginx load balancer. When I initially deployed it, it automatically created an ELB and worked perfectly fine. However, when I scale down the node group and then scale it back up, the ELB does not register the new nodes in its instance list; I have to add them manually. This wasn't the case before: the behaviour appeared after I recently upgraded ingress-nginx. Do you think any other annotation should be added to fix this?
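For context, the annotations are set through the chart's controller.service.annotations values key; a minimal sketch of what that part of my values looks like, reconstructed from the Service output below (same redactions as in the output):

# Reconstructed from the annotations visible on the Service; ARN and hostname redacted
controller:
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: lb.x.net
      service.beta.kubernetes.io/aws-load-balancer-internal: "false"
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:x:x:certificate/x

Here is the current state of the service: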
kubectl describe svc ingress-nginx-controller
Name:                     ingress-nginx-controller
Namespace:                x
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=x-helm
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.3.1
                          helm.sh/chart=ingress-nginx-4.2.5
Annotations:              external-dns.alpha.kubernetes.io/hostname: lb.x.net
                          service.beta.kubernetes.io/aws-load-balancer-internal: false
                          service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
                          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:x:x:certificate/x
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       172.x.x.94
IPs:                      172.x.x.94
LoadBalancer Ingress:     x.us-east-1.elb.amazonaws.com
Port:                     https 443/TCP
TargetPort:               http/TCP
NodePort:                 https 31488/TCP
Endpoints:                10.x.166.111:80,10.x.235.100:80,10.x.90.71:80
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     31480
Events:
  Type    Reason                Age                    From                Message
  ----    ------                ---                    ----                -------
  Normal  EnsuringLoadBalancer  10m (x2 over 10m)      service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   10m (x2 over 10m)      service-controller  Ensured load balancer
  Normal  UpdatedLoadBalancer   7m11s (x5 over 7m12s)  service-controller  Updated load balancer with new hosts
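For reference, this is how I currently check the ELB's instance list and work around the problem by hand with the AWS CLI (the load balancer name and instance ID are placeholders; the name is the classic ELB that backs the LoadBalancer Ingress hostname above):

# List the instances the classic ELB currently has registered, with their health state
aws elb describe-instance-health \
    --load-balancer-name <elb-name> \
    --query 'InstanceStates[*].[InstanceId,State]' \
    --output table

# Manually register a new node that the service controller missed
aws elb register-instances-with-load-balancer \
    --load-balancer-name <elb-name> \
    --instances i-0123456789abcdef0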