I'm starting out with Kubernetes. I built a cluster with the following nodes:

NAME            STATUS  ROLES   VERSION  TYPE         OS                DOCKER   IP
k8s-master-001  Ready   master  v1.18.0  VM (vmware)  Ubuntu 19.10 x64  19.03.6  192.168.10.70
k8s-master-002  Ready   master  v1.18.0  VM (vmware)  Ubuntu 19.10 x64  19.03.6  192.168.10.71
k8s-master-003  Ready   master  v1.18.0  VM (vmware)  Ubuntu 19.10 x64  19.03.6  192.168.10.72
k8s-worker-001  Ready   worker  v1.18.0  VM (vmware)  Ubuntu 19.10 x64  19.03.6  192.168.10.73
k8s-worker-002  Ready   worker  v1.18.0  VM (vmware)  Ubuntu 19.10 x64  19.03.6  192.168.5.74
k8s-worker-r01  Ready   worker  v1.18.0  rpi3         Ubuntu 19.10 x64  19.03.6  192.168.10.80
k8s-worker-r02  Ready   worker  v1.18.0  rpi3         Ubuntu 19.10 x64  19.03.6  192.168.10.81
k8s-worker-r03  Ready   worker  v1.18.0  rpi3         Ubuntu 19.10 x64  19.03.6  192.168.10.82

I use Flannel as the network provider.
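
To confirm Flannel itself is healthy on every node, here is a sketch of a quick check (the app=flannel label assumes the stock kube-flannel manifest, which deploys its DaemonSet into kube-system):

kubectl get pods -n kube-system -l app=flannel -o wide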

Here are the contents of the configuration file I used when I deployed the cluster with kubeadm:

apiServer:
  extraArgs:
    cloud-config: /etc/kubernetes/vsphere.conf
    cloud-provider: vsphere
    endpoint-reconciler-type: lease
  extraVolumes:
  - hostPath: /etc/kubernetes/vsphere.conf
    mountPath: /etc/kubernetes/vsphere.conf
    name: cloud
  certSANs:
  - 192.168.10.45
  - kube.coolcorp.priv
apiVersion: kubeadm.k8s.io/v1beta2
controlPlaneEndpoint: kube.coolcorp.priv
controllerManager:
  extraArgs:
    cloud-config: /etc/kubernetes/vsphere.conf
    cloud-provider: vsphere
  extraVolumes:
  - hostPath: /etc/kubernetes/vsphere.conf
    mountPath: /etc/kubernetes/vsphere.conf
    name: cloud
kind: ClusterConfiguration
kubernetesVersion: 1.18.0
networking:
  podSubnet: 10.244.0.0/16
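
For reference, a file like this is normally handed to kubeadm on the first control-plane node roughly as follows (a sketch; the kubeadm-config.yaml filename is an assumption, not taken from the question):

# hypothetical filename; use whatever path the ClusterConfiguration above is saved to
sudo kubeadm init --config kubeadm-config.yaml --upload-certs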

Everything seems to be okay: all my pods are in Running status in all namespaces.
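
(A typical way to confirm this, for completeness:)

kubectl get pods --all-namespaces -o wide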

I deployed the Kubernetes dashboard. When I access it from my PC via "kubectl proxy", I reach the login page. When I enter the token, I get this error after a few seconds:

Internal error (500): Get https://10.96.0.1:443/version?timeout=32s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

and the message "http: proxy error: unexpected EOF" appears in the CMD window where I ran kubectl proxy.

I don't understand why.
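
The error suggests the dashboard pod cannot reach the in-cluster API endpoint 10.96.0.1:443 within the 32s timeout. If it helps, here is a sketch of how that exact path could be tested from the same node (net-test and the curlimages/curl image are illustrative choices, not part of my setup):

# throwaway pod pinned to the node that hosts the dashboard
kubectl run net-test --image=curlimages/curl --restart=Never --command \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"k8s-worker-002"}}' \
  -- sleep 3600
# repeat the request the dashboard makes (-k skips certificate verification)
kubectl exec net-test -- curl -sk --max-time 10 https://10.96.0.1:443/version
# clean up afterwards
kubectl delete pod net-test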

Here is the output of the command kubectl describe pod kubernetes-dashboard-79c6f94c98-ngt4g -n kubernetes-dashboard:

Name:         kubernetes-dashboard-79c6f94c98-ngt4g
Namespace:    kubernetes-dashboard
Priority:     0
Node:         k8s-worker-002/192.168.5.74
Start Time:   Tue, 07 Apr 2020 20:07:27 +0200
Labels:       k8s-app=kubernetes-dashboard
              pod-template-hash=79c6f94c98
Annotations:  <none>
Status:       Running
IP:           10.244.4.122
IPs:
  IP:           10.244.4.122
Controlled By:  ReplicaSet/kubernetes-dashboard-79c6f94c98
Containers:
  kubernetes-dashboard:
    Container ID:  docker://e0b57c1b689f371dad7dad1a7fe70057de78d79dfaa93079ca24fb132f38ff49
    Image:         kubernetesui/dashboard:v2.0.0-rc7
    Image ID:      docker-pullable://kubernetesui/dashboard@sha256:24b77588e57e55da43db45df0c321de1f48488fa637926b342129783ff76abd4
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
      --namespace=kubernetes-dashboard
    State:          Running
      Started:      Wed, 08 Apr 2020 07:51:43 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Wed, 08 Apr 2020 07:34:29 +0200
      Finished:     Wed, 08 Apr 2020 07:51:41 +0200
    Ready:          True
    Restart Count:  41
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-zp25l (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kubernetes-dashboard-token-zp25l:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-zp25l
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age                   From                     Message
  ----    ------   ----                  ----                     -------
  Normal  Started  21m (x41 over 11h)    kubelet, k8s-worker-002  Started container kubernetes-dashboard
  Normal  Pulling  4m46s (x42 over 11h)  kubelet, k8s-worker-002  Pulling image "kubernetesui/dashboard:v2.0.0-rc7"
  Normal  Pulled   4m45s (x42 over 11h)  kubelet, k8s-worker-002  Successfully pulled image "kubernetesui/dashboard:v2.0.0-rc7"
  Normal  Created  4m45s (x42 over 11h)  kubelet, k8s-worker-002  Created container kubernetes-dashboard

And here is the output of kubectl logs kubernetes-dashboard-79c6f94c98-ngt4g -n kubernetes-dashboard:

2020/04/08 05:51:43 Starting overwatch
2020/04/08 05:51:43 Using namespace: kubernetes-dashboard
2020/04/08 05:51:43 Using in-cluster config to connect to apiserver
2020/04/08 05:51:43 Using secret token for csrf signing
2020/04/08 05:51:43 Initializing csrf token from kubernetes-dashboard-csrf secret
2020/04/08 05:51:43 Successful initial request to the apiserver, version: v1.18.0
2020/04/08 05:51:43 Generating JWE encryption key
2020/04/08 05:51:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2020/04/08 05:51:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2020/04/08 05:51:43 Initializing JWE encryption key from synchronized object
2020/04/08 05:51:43 Creating in-cluster Sidecar client
2020/04/08 05:51:43 Auto-generating certificates
2020/04/08 05:51:43 Successful request to sidecar
2020/04/08 05:51:44 Successfully created certificates
2020/04/08 05:51:44 Serving securely on HTTPS port: 8443
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Incoming HTTP/2.0 GET /api/v1/settings/global request from 10.244.2.0:42824:
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Incoming HTTP/2.0 GET /api/v1/plugin/config request from 10.244.2.0:42824:
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Incoming HTTP/2.0 GET /api/v1/settings/pinner request from 10.244.2.0:42824:
2020/04/08 05:54:05 Getting application global configuration
2020/04/08 05:54:05 Application configuration {"serverTime":1586325245097}
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Incoming HTTP/2.0 GET /api/v1/settings/global request from 10.244.2.0:42824:
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Outcoming response to 10.244.2.0:42824 with 200 status code
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.244.2.0:42824:
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Outcoming response to 10.244.2.0:42824 with 200 status code
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.244.2.0:42824:
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Outcoming response to 10.244.2.0:42824 with 200 status code
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.244.2.0:42824:
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Outcoming response to 10.244.2.0:42824 with 200 status code
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Incoming HTTP/2.0 GET /api/v1/login/skippable request from 10.244.2.0:42824:
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Incoming HTTP/2.0 GET /api/v1/login/modes request from 10.244.2.0:42824:
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Outcoming response to 10.244.2.0:42824 with 200 status code
2020/04/08 05:54:05 [2020-04-08T05:54:05Z] Outcoming response to 10.244.2.0:42824 with 200 status code
2020/04/08 05:54:06 [2020-04-08T05:54:06Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/login request from 10.244.2.0:42824: { contents hidden }
2020/04/08 05:54:06 [2020-04-08T05:54:06Z] Outcoming response to 10.244.2.0:42824 with 200 status code
2020/04/08 05:54:06 [2020-04-08T05:54:06Z] Incoming HTTP/2.0 POST /api/v1/login request from 10.244.2.0:42824: { contents hidden }
2020/04/08 05:54:10 [2020-04-08T05:54:10Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/login request from 10.244.2.0:42824: { contents hidden }
2020/04/08 05:54:10 [2020-04-08T05:54:10Z] Outcoming response to 10.244.2.0:42824 with 200 status code
2020/04/08 05:54:10 [2020-04-08T05:54:10Z] Incoming HTTP/2.0 POST /api/v1/login request from 10.244.2.0:42824: { contents hidden }
2020/04/08 05:54:38 [2020-04-08T05:54:38Z] Outcoming response to 10.244.2.0:42824 with 500 status code
2020/04/08 05:54:42 [2020-04-08T05:54:42Z] Outcoming response to 10.244.2.0:42824 with 500 status code
2020/04/08 05:55:32 [2020-04-08T05:55:32Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.244.2.0:42824:
2020/04/08 05:55:32 [2020-04-08T05:55:32Z] Outcoming response to 10.244.2.0:42824 with 200 status code
2020/04/08 05:55:32 [2020-04-08T05:55:32Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.244.2.0:42824:
2020/04/08 05:55:32 [2020-04-08T05:55:32Z] Outcoming response to 10.244.2.0:42824 with 200 status code
2020/04/08 05:55:32 [2020-04-08T05:55:32Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.244.2.0:42824:
2020/04/08 05:55:32 [2020-04-08T05:55:32Z] Outcoming response to 10.244.2.0:42824 with 200 status code
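
The 500 responses arrive about 32 seconds after each POST /api/v1/login, which matches the timeout=32s on the /version call in the error above. Another hedged check is whether the kubernetes Service's ClusterIP (the 10.96.0.1 in the error) is actually backed by the control-plane addresses:

kubectl get svc kubernetes -n default
kubectl get endpoints kubernetes -n default
# ENDPOINTS should list the apiserver addresses (the 192.168.10.70-72 masters here)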

If anyone can help me, thank you very much.

  • Dashboard seems to be running on a node with ip `192.168.5.74`. Is this IP correct? I am asking because all other addresses are `192.168.10.*` and this one looks like an anomaly. – Matt Apr 08 '20 at 10:47
  • Also, I have deployed latest dashboard (v2.0.0-rc7) on kubernetes v1.18.0 (which seems to be the version you are running based on the logs you posted) and everything is working for me as it should so its highly unlikely that the problem is with either dashboard or apiserver. This is why I think that (most likely) the problem is with your network. – Matt Apr 08 '20 at 11:10
  • The fastest way to check it would be to delete dashboard pod so (hopefully) it gets rescheduled on different node. – Matt Apr 08 '20 at 11:17
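
A minimal sketch of the test Matt suggests in the last comment, using the pod name from the describe output above (the ReplicaSet should recreate the pod, hopefully on a different node):

kubectl delete pod kubernetes-dashboard-79c6f94c98-ngt4g -n kubernetes-dashboard
# watch where the replacement pod lands
kubectl get pods -n kubernetes-dashboard -o wide --watch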
