I have a k0s cluster where I need to set up a persistent volume claim, but it is failing, as is the associated deployment. I get the error below when I run kubectl describe pod mssql-depl-86c86b5f44-ldj49:
Name:           mssql-depl-86c86b5f44-ldj49
Namespace:      default
Priority:       0
Node:
Labels:         app=mssql
                pod-template-hash=86c86b5f44
Annotations:    kubernetes.io/psp: 00-k0s-privileged
Status:         Pending
IP:
IPs:
Controlled By:  ReplicaSet/mssql-depl-86c86b5f44
Containers:
  mssql:
    Image:      mcr.microsoft.com/mssql/server:2019-CU15-ubuntu-20.04
    Port:       1433/TCP
    Host Port:  0/TCP
    Environment:
      MSSQL_PID:    Express
      ACCEPT_EULA:  Y
      SA_PASSWORD:  <set to the key 'SA_PASSWORD' in secret 'mssql'>  Optional: false
    Mounts:
      /var/opt/mssql/data from mssqldb (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v6hff (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  mssqldb:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mssql-claim
    ReadOnly:   false
  kube-api-access-v6lzw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  Warning  FailedScheduling  17m                default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
  Warning  FailedScheduling  10m (x6 over 16m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate
I am unsure why this is occurring, or what causes the cluster to have a "taint" in the first place. I have been able to remove this taint before with the command kubectl taint nodes serverfxc02 node-role.kubernetes.io/master-, but in this particular instance it is not working.
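For reference, I have been checking what taints are actually on the node with something like the following (serverfxc02 is my single node):

kubectl describe node serverfxc02 | grep -A2 Taints
kubectl get node serverfxc02 -o jsonpath='{.spec.taints}'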
I created the deployment as a way to satisfy the WaitForFirstConsumer constraint, but after some time the persistent volume claim still remains in Pending state (a trimmed version of the deployment manifest is included after the listing below):
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS     AGE
mssql-claim   Pending                                      openebs-device   53m
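For context, the deployment manifest looks roughly like this (trimmed to the relevant parts; the names match the describe output above, and the mssql secret already exists):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2019-CU15-ubuntu-20.04
        ports:
        - containerPort: 1433
        env:
        - name: MSSQL_PID
          value: "Express"
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql/data
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: mssql-claim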
What am I missing?
UPDATE
The output of kubectl get storageclass openebs-device -o yaml is:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    cas.openebs.io/config: |
      - name: StorageType
        value: "device"
    meta.helm.sh/release-name: openebs-1644933290
    meta.helm.sh/release-namespace: openebs
    openebs.io/cas-type: local
  creationTimestamp: "2022-02-15T13:54:56Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: openebs-device
  resourceVersion: "587"
  uid: 81aa1c7f-8f00-4621-8d53-26e6552f5129
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
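As I understand it, with volumeBindingMode: WaitForFirstConsumer the claim is expected to sit in Pending until a pod that uses it is actually scheduled, so the taint problem and the PVC problem may be the same thing: the pod cannot schedule, so binding never starts. As long as that is the case, the claim's own events should show something along the lines of "waiting for first consumer to be created before binding", which can be checked with:

kubectl describe pvc mssql-claim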
And the output of kubectl get pvc mssql-claim -o yaml is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mssql-claim","namespace":"default"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"200Mi"}},"storageClassName":"openebs-device"}}
    volume.beta.kubernetes.io/storage-provisioner: openebs.io/local
    volume.kubernetes.io/selected-node: servername
    volume.kubernetes.io/storage-provisioner: openebs.io/local
  creationTimestamp: "2022-02-16T13:52:56Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: mssql-claim
  namespace: default
  resourceVersion: "18799"
  uid: faa049a6-7cd2-40b4-b3f4-02c45a066f2c
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: openebs-device
  volumeMode: Filesystem
status:
  phase: Pending
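In case the provisioner side is at fault, I can also attach the output of the following (the exact provisioner pod name will depend on the Helm release, openebs-1644933290 here):

kubectl get pods -n openebs
kubectl logs -n openebs <localpv-provisioner-pod-name>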