
I'm trying to install MetalLB using the manifest method, just as described at https://metallb.universe.tf/installation/

My setup is a fresh k8s install with a master node only; no worker has joined the cluster yet.

No error was reported for:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.4/config/manifests/metallb-native.yaml
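
As a sanity check, the MetalLB pods can be listed and waited on before applying any configuration (the selector below comes from the pod labels shown further down; the timeout value is just my assumption):

kubectl -n metallb-system get pods
kubectl wait --namespace metallb-system --for=condition=ready pod --selector=app=metallb --timeout=90s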

I then composed a YAML file (00-ippool_first-pool.yaml) for the IP pool definition:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.186-192.168.1.191
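
(For context: with MetalLB v0.13 in Layer 2 mode, the docs also describe an L2Advertisement resource that references the pool. A minimal sketch of what I understand would be applied next; the resource name here is my own invention:)

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool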

Applying the pool YAML, I got this error:

 bino@corobalap  ~/k8nan/bino-blajar-metalLB   pertamax  kubectl apply -f ./00-ippool_first-pool.yaml

Error from server (InternalError): error when creating "./00-ippool_first-pool.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.109.189.131:443: connect: connection refused
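
(The 10.109.189.131 address in the error is the ClusterIP of webhook-service, so I assume the "connection refused" means nothing is actually listening behind that service. One way to check would be:)

kubectl -n metallb-system get endpoints webhook-service
kubectl -n metallb-system get pods -o wide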

Note: My laptop ('corobalap') is at IP 192.168.12.71, and the k8s master node is at 192.168.1.66. I'm pretty sure there is no proxy between us.

ubuntu@bino-k8-master:/etc$ sudo ufw status |grep 7946
7946/tcp                   ALLOW       Anywhere                  
7946/udp                   ALLOW       Anywhere                  
7946                       ALLOW       Anywhere                  
7946/tcp (v6)              ALLOW       Anywhere (v6)             
7946/udp (v6)              ALLOW       Anywhere (v6)             
7946 (v6)                  ALLOW       Anywhere (v6)             

            
ubuntu@bino-k8-master:/etc$ sudo ufw status |grep 443
6443/tcp                   ALLOW       Anywhere                  
443/tcp                    ALLOW       Anywhere                  
9443/tcp                   ALLOW       Anywhere                  
6443/tcp (v6)              ALLOW       Anywhere (v6)             
8443 (v6)                  ALLOW       Anywhere (v6)             
443/tcp (v6)               ALLOW       Anywhere (v6)             
9443/tcp (v6)              ALLOW       Anywhere (v6)
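
(Side note: since the webhook target is a ClusterIP, the traffic never leaves the cluster, so I suspect the host firewall rules above are not the issue. A quick reachability check from the master node, assuming the webhook were actually serving, might be something like:)

curl -k --max-time 5 https://10.109.189.131:443/validate-metallb-io-v1beta1-ipaddresspool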

 bino@corobalap  ~/k8nan/bino-blajar-metalLB   pertamax  kubectl -n metallb-system get all -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod/controller-5bd9496b89-qsts2   0/1     Pending   0          43m   <none>   <none>   <none>           <none>

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/webhook-service   ClusterIP   10.109.189.131   <none>        443/TCP   43m   component=controller

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS   IMAGES                            SELECTOR
daemonset.apps/speaker   0         0         0       0            0           kubernetes.io/os=linux   43m   speaker      quay.io/metallb/speaker:v0.13.4   app=metallb,component=speaker

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                               SELECTOR
deployment.apps/controller   0/1     1            0           43m   controller   quay.io/metallb/controller:v0.13.4   app=metallb,component=controller

NAME                                    DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                               SELECTOR
replicaset.apps/controller-5bd9496b89   1         1         0       43m   controller   quay.io/metallb/controller:v0.13.4   app=metallb,component=controller,pod-template-hash=5bd9496b89
 bino@corobalap  ~/k8nan/bino-blajar-metalLB   pertamax  kubectl -n metallb-system describe svc webhook-service
Name:              webhook-service
Namespace:         metallb-system
Labels:            <none>
Annotations:       <none>
Selector:          component=controller
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.109.189.131
IPs:               10.109.189.131
Port:              <unset>  443/TCP
TargetPort:        9443/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
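
Since Endpoints is empty, I take it the webhook error is just a symptom: the service has no backing pod because the controller never started. Checking recent events in the namespace might show why (this is what I would try):

kubectl -n metallb-system get events --sort-by=.lastTimestamp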

Kindly tell me what to do to fix this problem.

Sincerely -bino-

Here is the describe output for the MetalLB controller pod
(note: the pod name differs from the one above, since I destroyed and redeployed it):

 bino@corobalap  ~/k8nan/bino-blajar-metalLB   pertamax  kubectl -n metallb-system describe pod controller-5bd9496b89-htdnq 
Name:           controller-5bd9496b89-htdnq
Namespace:      metallb-system
Priority:       0
Node:           <none>
Labels:         app=metallb
                component=controller
                pod-template-hash=5bd9496b89
Annotations:    prometheus.io/port: 7472
                prometheus.io/scrape: true
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/controller-5bd9496b89
Containers:
  controller:
    Image:       quay.io/metallb/controller:v0.13.4
    Ports:       7472/TCP, 9443/TCP
    Host Ports:  0/TCP, 0/TCP
    Args:
      --port=7472
      --log-level=info
    Liveness:   http-get http://:monitoring/metrics delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:monitoring/metrics delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      METALLB_ML_SECRET_NAME:  memberlist
      METALLB_DEPLOYMENT:      controller
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qkc4m (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  webhook-server-cert
    Optional:    false
  kube-api-access-qkc4m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  63s   default-scheduler  0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
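
(Reading that event, the controller apparently cannot be scheduled because the master carries the node-role.kubernetes.io/master taint and the only worker is NotReady. On a master-only lab cluster, I assume removing that taint would let the pod schedule, along these lines:)

# allow workloads on the master node (lab setups only); node name taken from the prompts above
kubectl taint nodes bino-k8-master node-role.kubernetes.io/master-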
Comments:

  • Your pod `pod/controller-5bd9496b89-qsts2` in `metallb-system` is in Pending state. Can you add the `describe` of the pod and the output of `kubectl -n metallb-system logs controller-5bd9496b89-qsts2` to the question? – P.... Jul 22 '22 at 14:17
  • Since that pod is Pending, you can notice that there is no endpoint present in your service (`Endpoints: <none>`). – P.... Jul 22 '22 at 14:19
  • @P.... (1) `kubectl logs` returns nothing. (2) Kindly elaborate more on that empty endpoint; what do I have to do about it? – Bino Oetomo Jul 23 '22 at 02:10
  • You missed providing the `describe` output! I.e.: `kubectl describe pod -n metallb-system controller-5bd9496b89-qsts2` – P.... Jul 23 '22 at 04:07
  • You need to get the pending pod to Running to populate the endpoint; that would make the service available. – P.... Jul 23 '22 at 04:09
  • @P.... (1) I added the pod describe. (2) As I mentioned before, this is a fresh k8s install. I have now added one worker node to it, just to see if it would help, but the controller pod is still in Pending state. I really don't know how to make that pod run. – Bino Oetomo Jul 23 '22 at 10:21
  • @P.... Never mind, sir. I built another cluster in VirtualBox using k0s. The first attempt at defining the IP pool failed, but after some time I retried and it worked. I think it just needs time to start the MetalLB controller. I really appreciate your help. – Bino Oetomo Jul 23 '22 at 10:38
  • FYI, the last line of the describe output means you have two nodes: the 1st is the master and it has a taint, and the 2nd is the worker and it is not ready, making 2/2 nodes unschedulable. – P.... Jul 23 '22 at 16:07
  • So the problem is probably why the worker node is not ready, which is what prevents a pod (not specifically MetalLB) from getting scheduled. – P.... Jul 23 '22 at 16:08
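
(Following up on the last two comments, I assume the next diagnostic step would be to find out why the worker node reports NotReady, for example:)

kubectl get nodes
kubectl describe node <worker-node-name>    # check the Conditions and Events sections
# and, on the worker itself, something like: systemctl status kubelet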
