
I have a single-node Kubernetes setup on Ubuntu 20.04, using MicroK8s with Longhorn storage. I install packages using Helm via the Lens IDE. I have configured everything per the respective guides, but whenever I install a chart that requires persistence, e.g. MariaDB or WordPress, the following happens:

  • the PV and PVC get created and reach the Bound state successfully
  • the pod fails to start and throws the error below
MountVolume.SetUp failed for volume "pvc-fdada93c-c4af-4916-942f-abf9897feaf9" : applyFSGroup failed for vol pvc-fdada93c-c4af-4916-942f-abf9897feaf9: lstat /var/snap/microk8s/common/var/lib/kubelet/pods/f69173e1-cd98-4f86-9e52-edf62fa723da/volumes/kubernetes.io~csi/pvc-fdada93c-c4af-4916-942f-abf9897feaf9/mount: no such file or directory
  • when I manually create the directory using the command below, the pod starts successfully
mkdir -p /var/snap/microk8s/common/var/lib/kubelet/pods/f69173e1-cd98-4f86-9e52-edf62fa723da/volumes/kubernetes.io~csi/pvc-fdada93c-c4af-4916-942f-abf9897feaf9/mount
  • the issue repeats after a server reboot
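For anyone hitting the same symptom, the failing mount shows up in the pod's events, and the PVC status can be checked alongside it. A quick sketch (`<pod-name>` is a placeholder for the actual pod, e.g. the MariaDB pod):

```shell
# Inspect the pod's events to confirm the MountVolume.SetUp failure
kubectl describe pod <pod-name> | grep -A 10 Events

# Confirm the PVC is Bound even though the mount fails
kubectl get pvc
```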

Question: How can I get the pods to mount their volumes automatically when I install a package from Helm? I have seen this happen on similar single-node clusters using the same software.

NOTE: nfs-common and open-iscsi are both running
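The services mentioned in the note can be double-checked like this (assuming the standard systemd unit and package names on Ubuntu 20.04):

```shell
# Longhorn requires the iSCSI initiator on every node; verify it is active
systemctl is-active iscsid

# nfs-common provides the NFS client utilities Longhorn needs for RWX volumes
dpkg -s nfs-common | grep Status
```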

wwmwabini

1 Answer


I was able to figure out the issue.

The issue was actually not caused by Longhorn itself. It was caused by CoreDNS.

Due to firewall restrictions, CoreDNS could not resolve internal Kubernetes DNS names, in particular longhorn-backend.

Because the Longhorn UI and CSI driver could not reach longhorn-backend, they could never start. Fixing the CoreDNS issues got the Longhorn services working correctly, and my PVCs and PVs then behaved as expected.

Steps to resolve were as follows:

  1. Check the CoreDNS pod for errors:

    kubectl logs coredns-7f9c69c78c-7dsjg -n kube-system

Any output other than the CoreDNS version banner means you need to resolve the errors shown.
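A quick way to confirm whether in-cluster DNS is actually broken is to resolve the Longhorn backend service from a throwaway pod. A sketch (assuming Longhorn is installed in its default `longhorn-system` namespace; adjust if yours differs):

```shell
# Launch a temporary busybox pod and resolve the Longhorn backend
# service through CoreDNS; the pod is deleted automatically afterwards.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup longhorn-backend.longhorn-system.svc.cluster.local
```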

In my case this meant disabling the firewall and adding 8.8.8.8 as a nameserver in my node's /etc/resolv.conf file.
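As a rough sketch of that step (assuming ufw as the firewall; note that on Ubuntu 20.04 /etc/resolv.conf is usually managed by systemd-resolved, so a hand edit may not survive a reboot):

```shell
# Temporarily disable the host firewall (ufw on Ubuntu)
sudo ufw disable

# Add a public resolver so the node can reach upstream DNS.
# systemd-resolved may rewrite this file; for a persistent change,
# set DNS= in /etc/systemd/resolved.conf instead.
echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf
```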

  2. Once resolved, you can either wait a minute for CoreDNS to pick up internal DNS again, or restart it with the command below:

    kubectl rollout restart deployment/coredns -n kube-system
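After the restart you can confirm CoreDNS came back healthy (the `k8s-app=kube-dns` label is what the default CoreDNS deployment uses; adjust if yours differs):

```shell
# Wait for the restarted deployment to finish rolling out
kubectl rollout status deployment/coredns -n kube-system

# Check that the new CoreDNS pod is Running
kubectl get pods -n kube-system -l k8s-app=kube-dns
```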

Everything worked well after that!
