I have upgraded my Kubernetes cluster (3 masters, 6 worker nodes) from version 1.17.5 to 1.18.0, and after the upgrade kubernetes-dashboard stopped working with the following error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
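For reference, a Status response like the one above is what the API server returns when it proxies to a service that has no ready endpoints, so the Service object itself is likely fine and the problem is the missing backing pods. The error is shown here assuming the standard kubectl proxy access path for dashboard v2:

kubectl proxy
# then browse to the standard dashboard URL:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/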
Dashboard status before the upgrade:
kubectl get deployment kubernetes-dashboard --namespace=kubernetes-dashboard
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           35m
kubectl describe service kubernetes-dashboard --namespace=kubernetes-dashboard
Name:              kubernetes-dashboard
Namespace:         kubernetes-dashboard
Labels:            k8s-app=kubernetes-dashboard
Annotations:       <none>
Selector:          k8s-app=kubernetes-dashboard
Type:              ClusterIP
IP:                10.y.y.y
Port:              <unset>  443/TCP
TargetPort:        8443/TCP
Endpoints:         192.168.x.x:8443
Session Affinity:  None
Events:            <none>
kubectl --namespace=kubernetes-dashboard get ep kubernetes-dashboard
NAME                   ENDPOINTS          AGE
kubernetes-dashboard   192.168.x.x:8443   36m
Dashboard status after the upgrade:
kubectl get deployment kubernetes-dashboard --namespace=kubernetes-dashboard
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   0/1     0            0           88m
kubectl describe service kubernetes-dashboard --namespace=kubernetes-dashboard
Name:              kubernetes-dashboard
Namespace:         kubernetes-dashboard
Labels:            k8s-app=kubernetes-dashboard
Annotations:       <none>
Selector:          k8s-app=kubernetes-dashboard
Type:              ClusterIP
IP:                10.y.y.y
Port:              <unset>  443/TCP
TargetPort:        8443/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
kubectl --namespace=kubernetes-dashboard get ep kubernetes-dashboard
NAME                   ENDPOINTS   AGE
kubernetes-dashboard   <none>      88m
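I then checked whether anything backs the service selector at all, with commands along these lines (the ReplicaSet name comes from the deployment status shown further down):

kubectl -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
kubectl -n kubernetes-dashboard get rs
kubectl -n kubernetes-dashboard describe rs kubernetes-dashboard-7b544877d5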
No pods are present in the kubernetes-dashboard namespace, although the dashboard pod was running there before the upgrade. Here is the relevant part of the kubernetes-dashboard deployment YAML:
    manager: kubectl
    operation: Update
    time: "2020-05-14T20:56:04Z"
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "66272"
  selfLink: /apis/apps/v1/namespaces/kubernetes-dashboard/deployments/kubernetes-dashboard
  uid: 11d1b3e7-b333-4cf0-a542-a72a09f3d56a
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - args:
        - --auto-generate-certificates
        - --namespace=kubernetes-dashboard
        image: kubernetesui/dashboard:v2.0.0
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 8443
            scheme: HTTPS
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        name: kubernetes-dashboard
        ports:
        - containerPort: 8443
          protocol: TCP
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsGroup: 2001
          runAsUser: 1001
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /certs
          name: kubernetes-dashboard-certs
        - mountPath: /tmp
          name: tmp-volume
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kubernetes-dashboard
      serviceAccountName: kubernetes-dashboard
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          defaultMode: 420
          secretName: kubernetes-dashboard-certs
      - emptyDir: {}
        name: tmp-volume
status:
  conditions:
  - lastTransitionTime: "2020-05-14T20:56:04Z"
    lastUpdateTime: "2020-05-14T20:56:04Z"
    message: Created new replica set "kubernetes-dashboard-7b544877d5"
    reason: NewReplicaSetCreated
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-05-14T20:56:04Z"
    lastUpdateTime: "2020-05-14T20:56:04Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2020-05-14T20:56:04Z"
    lastUpdateTime: "2020-05-14T20:56:04Z"
    message: 'pods "kubernetes-dashboard-7b544877d5-" is forbidden: unable to validate
      against any pod security policy: []'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 1
  unavailableReplicas: 1
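The ReplicaFailure condition ("unable to validate against any pod security policy: []") suggests the PodSecurityPolicy admission plugin is rejecting pod creation for this ServiceAccount, which would explain why the ReplicaSet creates no pods and the service has no endpoints. To check which PSPs exist and whether the dashboard ServiceAccount may use one:

kubectl get psp
kubectl -n kubernetes-dashboard auth can-i use podsecuritypolicy/restricted \
  --as=system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard

If the ServiceAccount is not authorized, I believe a Role/RoleBinding granting "use" on a PSP would fix it. A minimal sketch of what I am considering, assuming a suitable PSP already exists in the cluster (the PSP name "restricted" and the Role/RoleBinding names below are placeholders of my own):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dashboard-psp                # placeholder name
  namespace: kubernetes-dashboard
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted"]      # placeholder: an existing PSP in this cluster
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dashboard-psp                # placeholder name
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dashboard-psp
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

After applying a binding like this, scaling the deployment to 0 and back to 1 should make the controller retry pod creation. Is this the right direction, or did something else change between 1.17.5 and 1.18.0 that I am missing?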