
I installed and configured MetalLB on my Kubernetes cluster, then tried to create a LoadBalancer-type Service. (A NodePort-type Service works fine.)

But the EXTERNAL-IP stays in pending status.

I get the error below in the MetalLB controller pod. Can somebody help resolve this issue?

I also hit a similar issue when I try to install the NGINX ingress controller.

# kubectl logs controller-65db86ddc6-4hkdn -n metallb-system
{"branch":"HEAD","caller":"main.go:142","commit":"v0.9.5","msg":"MetalLB controller starting version 0.9.5 (commit v0.9.5, branch HEAD)","ts":"2021-03-21T09:30:28.244151786Z","version":"0.9.5"}
I0321 09:30:58.442987       1 trace.go:81] Trace[1298498081]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2021-03-21 09:30:28.44033291 +0000 UTC m=+1.093749549) (total time: 30.001755286s):
Trace[1298498081]: [30.001755286s] [30.001755286s] END
E0321 09:30:58.443118       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0321 09:30:58.443263       1 trace.go:81] Trace[2019727887]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2021-03-21 09:30:28.342686736 +0000 UTC m=+0.996103363) (total time: 30.100527846s):
Trace[2019727887]: [30.100527846s] [30.100527846s] END
E0321 09:30:58.443298       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.ConfigMap: Get https://10.96.0.1:443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0321 09:31:29.444994       1 trace.go:81] Trace[1427131847]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2021-03-21 09:30:59.443509127 +0000 UTC m=+32.096925747) (total time: 30.001450692s):
Trace[1427131847]: [30.001450692s] [30.001450692s] END

Below is my environment.

# kubectl version --short
Client Version: v1.20.4
Server Version: v1.20.4

Calico CNI is installed.

# Installing Calico network plug-in for the cluster network
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

MetalLB 0.9.5 is installed and configured.
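MetalLB 0.9.x reads its address pool from a ConfigMap named `config` in the `metallb-system` namespace, shaped roughly like this (the address range below is a placeholder, not my actual range):

```
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.64.200-192.168.64.220   # placeholder: a free range on the node subnet
```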

Access from the node is working, as shown below (the URL is quoted so the shell does not interpret the & characters).

# curl -k 'https://10.96.0.1:443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0'

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "configmaps \"config\" is forbidden: User \"system:anonymous\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "config",
    "kind": "configmaps"
  },
  "code": 403
}

But from the pod it is not accessible, as shown below. I think it should work.

# kubectl -n metallb-system exec -it controller-65db86ddc6-4hkdn -- /bin/sh
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1480 qdisc noqueue state UP
    link/ether 76:54:44:f1:8f:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.41.146/32 brd 192.168.41.146 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::7454:44ff:fef1:8f50/64 scope link
       valid_lft forever preferred_lft forever
/bin $ wget --no-check-certificate https://10.96.0.1:443/
Connecting to 10.96.0.1:443 (10.96.0.1:443)
^C
/bin $
  • AFAIU, the `10.96.0.1` should point to the kubernetes Service in the default Namespace. For some reason, your metallb controller could not reach it. Maybe try to restart it (if it started before the calico/SDN Pod that runs on that node). Make sure the controller can reach other services in the SDN. Make sure there's no networkpolicy that would prevent your controller from reaching kubernetes API. Make sure the kubernetes Service / Endpoints actually sends you to kubernetes API nodes. – SYN Mar 21 '21 at 11:28
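The checks suggested in the comment above can be run roughly like this (the pod name is taken from the logs in the question; adjust names to your environment — these are cluster-side diagnostics, not guaranteed output):

```
# Does the kubernetes Service actually point at the API server?
kubectl get endpoints kubernetes -n default

# Is there any NetworkPolicy in metallb-system that could block egress?
kubectl get networkpolicy -n metallb-system

# Restart the controller in case it started before the CNI pod on its node
kubectl -n metallb-system delete pod controller-65db86ddc6-4hkdn

# Can a fresh pod reach the API endpoint at all?
kubectl run tmp --rm -it --image=busybox --restart=Never -- \
  wget -qO- --no-check-certificate https://10.96.0.1:443/version
```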

2 Answers


I changed my k8s cluster configuration as below, and now it works.

kubeadm init --apiserver-advertise-address=192.168.64.150 --apiserver-cert-extra-sans=192.168.64.150 --node-name kmaster --pod-network-cidr=10.10.0.0/16


# cat /etc/hosts
192.168.64.150 kmaster
192.168.64.151 kworker1

And I changed the Calico configuration as below.

- name: CALICO_IPV4POOL_CIDR
  value: "10.10.0.0/16"    ### Same pod-cidr in calico
  • What is the output of the ping 10.96.0.1 command below from your MetalLB controller pod?
kubectl -n metallb-system exec controller-65db86ddc6-4hkdn -- ping 10.96.0.1
  • Please also provide the output of the commands below:
kubectl -n metallb-system exec controller-65db86ddc6-4hkdn -- ip r
kubectl -n metallb-system exec controller-65db86ddc6-4hkdn -- ip n
– Sagar Velankar