
Our team was trying to fix some issues with the Kubernetes dashboard because it couldn't get a secret. We are using dashboard version 1.8.3, and the Kubernetes server is version 1.9.

In order to check if it was an issue that could be solved by reinstalling the dashboard, I ran the command

kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.3/src/deploy/recommended/kubernetes-dashboard.yaml

Then the command

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.3/src/deploy/recommended/kubernetes-dashboard.yaml

However, the pod was never recreated, and I'm not sure why the deployment refuses to create it. Here is the output from

kubectl describe deployment kubernetes-dashboard -n kube-system

showing that there is one replica desired but none created.

Name:                   kubernetes-dashboard
Namespace:              kube-system
CreationTimestamp:      <hidden>
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        k8s-app=kubernetes-dashboard
                        kubernetes.io/cluster-service=true
Annotations:            Selector:  k8s-app=kubernetes-dashboard
Replicas:               1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=kubernetes-dashboard
  Service Account:  kubernetes-dashboard
  Containers:
   kubernetes-dashboard:
    Image:      k8s-gcrio.azureedge.net/kubernetes-dashboard-amd64:v1.8.3
    Port:       8443/TCP
    Host Port:  0/TCP
    Args:
      --auto-generate-certificates
      --heapster-host=http://heapster.kube-system:80
    Limits:
      cpu:     500m
      memory:  500Mi
    Requests:
      cpu:        300m
      memory:     150Mi
    Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
  Volumes:
   kubernetes-dashboard-certs:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:   <unset>
OldReplicaSets:  <none>
NewReplicaSet:   <none>
Events:          <none>

How do I create the pod and have the dashboard working again?
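The `NewReplicaSet: <none>` and empty event list in the output above suggest the Deployment controller never created a ReplicaSet. A quick way to confirm that (the label selector is taken from the describe output; adjust it if yours differs):

```shell
# List ReplicaSets matching the dashboard's selector; an empty result
# confirms the Deployment controller never created one
kubectl get rs -n kube-system -l k8s-app=kubernetes-dashboard

# Surface any warnings the controllers may have recorded
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp
```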

Update: I found out that we had created the dashboard in a namespace called "kubernetes-dashboard", so I deleted everything associated with that namespace. However, the deployment is still not creating the dashboard pod.

I also found out that the issue is not limited to the dashboard: no replica set or deployment in the cluster is creating the pods it should. Is there any information I could provide to help diagnose this issue?
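When no Deployment or ReplicaSet in the cluster creates pods, the kube-controller-manager is the usual suspect. A sketch of how to check it, assuming a kubeadm-style cluster where the controller manager runs as a static pod in kube-system (the label and pod name vary by distribution):

```shell
# On kubeadm clusters the controller manager runs as a static pod in
# kube-system; the component label may differ on other distributions
kubectl get pods -n kube-system -l component=kube-controller-manager

# Recent controller-manager logs often show why reconciliation stopped
kubectl logs -n kube-system -l component=kube-controller-manager --tail=50
```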

more whirlpools
  • Pods not being created after creating a `Deployment` is usually a sign of `kube-controller-manager` malfunctioning. Also, with Kubernetes 1.19 release coming soon, you're using a version (almost) 10 versions behind. – BogdanL Aug 13 '20 at 12:58

1 Answer


I advise you to point to the latest release instead. The same goes for your Kubernetes version: it is really out of date.

Try deleting the Kubernetes dashboard manually and then recreating it.

Execute the following commands:

$ kubectl delete deployment kubernetes-dashboard --namespace=kube-system

$ kubectl delete service kubernetes-dashboard --namespace=kube-system

$ kubectl delete role kubernetes-dashboard-minimal --namespace=kube-system

$ kubectl delete rolebinding kubernetes-dashboard-minimal --namespace=kube-system

$ kubectl delete sa kubernetes-dashboard --namespace=kube-system

$ kubectl delete secret kubernetes-dashboard-certs --namespace=kube-system

$ kubectl delete secret kubernetes-dashboard-key-holder --namespace=kube-system

Then recreate the dashboard.
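To recreate it, you can re-apply the same manifest shown in the question (pointing at a newer release tag is advisable, but the v1.8.3 URL is kept here to match the setup above):

```shell
# Recreate the dashboard from the recommended manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.3/src/deploy/recommended/kubernetes-dashboard.yaml
```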

Take a look at: kubernetes-dashboard, cluster-management.

Malgorzata
  • It didn't work. The deployment is still not creating the pod. Is there a way to force a pod to be created? – more whirlpools Aug 13 '20 at 21:12
  • You can force deployments/pods to be deleted by using the `--force` flag, but the creation of pods depends on many different factors; you cannot force their creation. Please check the events and ReplicaSets in all namespaces: `kubectl get event --all-namespaces` and `kubectl get rs --all-namespaces`. – Malgorzata Sep 07 '20 at 13:34