An MCVE is here: https://github.com/chrissound/k8s-metallb-nginx-ingress-minikube (just run ./init.sh and minikube addons enable ingress).
The IP assigned to the ingress keeps getting reset, and I don't know what is causing it. Do I perhaps need additional configuration?
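For context, the ingress itself is nothing unusual; it is roughly the following sketch (the backend service name/port and the TLS secret name are placeholders here, the real manifest is in the repo linked above):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: chris-example
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-com-tls   # placeholder
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app        # placeholder
          servicePort: 80         # placeholder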
kubectl get ingress --all-namespaces
NAMESPACE       NAME          HOSTS         ADDRESS            PORTS     AGE
chris-example   app-ingress   example.com   192.168.122.253    80, 443   61m
And a minute later:
NAMESPACE       NAME          HOSTS         ADDRESS            PORTS     AGE
chris-example   app-ingress   example.com                      80, 443   60m
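Watching the resources makes the flapping easy to see; something like the following shows the address appear and then vanish again (app-lb is the LoadBalancer service, visible in the MetalLB logs further down):
kubectl get ingress -n chris-example app-ingress -w
kubectl get svc -n chris-example app-lb -w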
In terms of configuration, this is what I've applied:
# metallb
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
# nginx
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
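On top of those manifests, MetalLB v0.7.x needs an address-pool ConfigMap before it assigns any IPs; the one in use is along these lines (the exact address range below is a guess matching the 192.168.122.253 address above, the actual values are in the linked repo):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.122.240-192.168.122.254   # guess, covers the 192.168.122.253 seen above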
Ingress controller logs:
I0714 22:00:38.056148 7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"chris-example", Name:"app-ingress", UID:"cbf3b5bf-a67a-11e9-be9a-a4cafa3aa171", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8681", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress chris-example/app-ingress
I0714 22:01:19.153298 7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"chris-example", Name:"app-ingress", UID:"cbf3b5bf-a67a-11e9-be9a-a4cafa3aa171", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8743", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress chris-example/app-ingress
I0714 22:01:38.051694 7 status.go:296] updating Ingress chris-example/app-ingress status from [{192.168.122.253 }] to []
I0714 22:01:38.060044 7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"chris-example", Name:"app-ingress", UID:"cbf3b5bf-a67a-11e9-be9a-a4cafa3aa171", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8773", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress chris-example/app-ingress
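The status.go line above shows the address being cleared in between the controller's own updates, so something is rewriting the status. One thing worth checking (not confirmed to be the cause) is whether a second ingress controller instance is also writing to it; everything ingress-related can be listed with, for example:
kubectl get pods --all-namespaces -o wide | grep -i ingress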
And the metallb controller logs:
{"caller":"main.go:72","event":"noChange","msg":"service converged, no change","service":"kube-system/kube-dns","ts":"2019-07-14T21:58:39.656725017Z"}
{"caller":"main.go:73","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2019-07-14T21:58:39.656741267Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"chris-example/app-lb","ts":"2019-07-14T21:58:39.6567588Z"}
{"caller":"main.go:72","event":"noChange","msg":"service converged, no change","service":"chris-example/app-lb","ts":"2019-07-14T21:58:39.656842026Z"}
{"caller":"main.go:73","event":"endUpdate","msg":"end of service update","service":"chris-example/app-lb","ts":"2019-07-14T21:58:39.656873586Z"}
As a test, I deleted the deployment and daemonset relating to metallb:
kubectl delete deployment -n metallb-system controller
kubectl delete daemonset -n metallb-system speaker
And even after that, once the external IP is set, it once again gets reset...
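For completeness, the status and the related events can be inspected directly, for example:
kubectl describe ingress -n chris-example app-ingress
kubectl get events -n chris-example --sort-by=.lastTimestamp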