2

I've installed a fresh Rancher on MicroK8s using Helm 3, and some Helm operations are failing periodically. I am rather clueless as to where to look, and for what. Could you please point me in the right direction?

Waiting for Kubernetes API to be available
helm upgrade --history-max=5 --install=true --namespace=rancher-operator-system --reset-values=true --timeout=5m0s --values=/home/shell/helm/values-rancher-operator-crd-0.1.100.yaml --version=0.1.100 --wait=true rancher-operator-crd /home/shell/helm/rancher-operator-crd-0.1.100.tgz
Release "rancher-operator-crd" does not exist. Installing it now.
W1129 15:37:01.028852      39 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Error: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "clusters.rancher.cattle.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "rancher-operator-crd"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "rancher-operator-system"
Waiting for Kubernetes API to be available
helm upgrade --history-max=5 --install=true --namespace=fleet-system --reset-values=true --timeout=5m0s --version=0.3.100 --wait=true fleet-crd /home/shell/helm/fleet-crd-0.3.100.tgz
Release "fleet-crd" does not exist. Installing it now.
W1129 15:36:48.667489      41 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Error: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "bundles.fleet.cattle.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "fleet-crd"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "fleet-system"
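For reference, the ownership metadata the existing CRD currently carries can be checked directly (a quick inspection, assuming kubectl access to the cluster; clusters.rancher.cattle.io shown as the example):

# print the labels and annotations Helm validates for ownership
kubectl get crd clusters.rancher.cattle.io \
  -o jsonpath='{.metadata.labels}{"\n"}{.metadata.annotations}{"\n"}'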

Maybe the problem is that I installed it with Helm 3? I would like to use the latest technology, so I would prefer to stay on Helm 3. Is that possible? If not, is it possible to use both Helm versions side by side?

user411245
  • 21
  • 1
  • 2
  • It's more likely that people will (be willing to) help when you include what your research on the topic has been so far. What did a Google search reveal about your problem? What else did you try to find a solution to the problem? In addition, you should describe what you did before this error showed up so that others can reproduce it. – wedi Nov 29 '20 at 18:52
  • Can you share more info on how you bootstrapped your cluster? Did you follow [this guide](https://rancher.com/docs/rancher/v2.x/en/installation/install-rancher-on-k8s/) to install Rancher? Can you share more info on what you want to upgrade? (Mention the YAML in your question.) – kool Nov 30 '20 at 14:55

3 Answers

3

I think I found a solution for this (at least the Fleet part, but this approach might help you solve the rancher-operator part).

Basically:
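One way (a sketch reconstructed from the keys named in the error output, assuming kubectl access; repeat for each conflicting CRD) is to add the ownership metadata Helm validates, so that it can adopt the existing objects. Shown here for the fleet-crd release:

# values taken verbatim from the validation errors above
kubectl label crd bundles.fleet.cattle.io app.kubernetes.io/managed-by=Helm
kubectl annotate crd bundles.fleet.cattle.io meta.helm.sh/release-name=fleet-crd
kubectl annotate crd bundles.fleet.cattle.io meta.helm.sh/release-namespace=fleet-system

Then re-run the failing helm upgrade --install; it should adopt the CRDs instead of refusing to import them.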

After doing that, it created the missing "fleet-local" namespace, the Fleet Cluster, and the Fleet Cluster Group.

This was with Rancher version 2.5.7 from the latest Helm repo.

Reference: https://www.reddit.com/r/rancher/comments/md963s/rancher_25_inside_docker_desktop_single_node/

Rafalfaro
  • 211
  • 4
  • 3
2

I am not familiar with launching Kubernetes from Helm for Rancher, but since the errors say the resources already exist with invalid ownership metadata, there may be leftover state on at least one of the host nodes, meaning the install is not 100% fresh.

Since you are trying for a fresh install though, it should not matter if you purge everything related to Kubernetes from every node.

Here is the documentation for cleaning cluster nodes. Normally I would explain in my answer what is in that linked document, but it is fairly long. The core of it is:

  • purge all rancher components (either manually or by running the cleanup script)
    • you may not need that part if it did not get that far, but should be checked
  • remove all (related, or simply all) containers, images, and volumes (see the combined sketch after this list)
    • docker container rm ...
    • docker image rm ...
    • docker volume rm ...
  • remove mounts
    • for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }') /var/lib/kubelet /var/lib/rancher; do umount $mount; done
  • remove directories and files
    •   sudo rm -Rf /etc/ceph \
            /etc/cni \
            /etc/kubernetes \
            /opt/cni \
            /opt/rke \
            /run/secrets/kubernetes.io \
            /run/calico \
            /run/flannel \
            /var/lib/calico \
            /var/lib/etcd \
            /var/lib/cni \
            /var/lib/kubelet \
            /var/lib/rancher/rke/log \
            /var/log/containers \
            /var/log/kube-audit \
            /var/log/pods \
            /var/run/calico
      
  • finally, reboot
    • this will clear the non-persistent network interfaces that were used
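For the container/image/volume step above, a minimal combined sketch (destructive by design: it removes everything Docker knows about on the node, which is acceptable only because the node is being wiped anyway):

# remove all containers (running or stopped), then all images, then all volumes
docker ps -aq | xargs -r docker rm -f
docker images -q | xargs -r docker rmi -f
docker volume ls -q | xargs -r docker volume rm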

I have these extras in my notes for when there is trouble removing old mounted directories or containers.

Use this to help identify the problematic process, then manually perform the unmount operations:

grep "docker" /proc/*/mountinfo | grep "${SOME_CONTAINER_HASH}" | awk '{ print $1; }' | perl -p -e 's:^/proc/(\d+)/.*$:\1:' | sort -n | uniq

then

ps -p "${PROBLEM_PID}"
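Once the offending process is known, you can stop it and retry the unmount (PROBLEM_PID and the mount path are placeholders):

sudo kill "${PROBLEM_PID}"    # try SIGTERM first; escalate to kill -9 only if it is ignored
sudo umount /var/lib/kubelet  # retry the unmount that was previously blocked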
Kevin
  • 2,234
  • 2
  • 21
  • 26
0

As you have not posted any information about the versions you are using, I am not sure whether this suits your case.

I encountered this on my Kubernetes cluster (version 1.23.7) when installing Rancher using Helm. I had specified Rancher version 2.6.4, but according to the support matrix for 2.6.4, it only supports Kubernetes up to 1.22.x. So I upgraded Rancher to 2.6.6, which supports Kubernetes 1.23.x, and the problem was resolved.
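For example, pinning the chart version looks roughly like this (a sketch assuming the chart came from the rancher-latest repo and was installed into the usual cattle-system namespace; adjust the release name and target version to match your setup):

helm repo update
helm upgrade rancher rancher-latest/rancher \
  --namespace cattle-system \
  --version 2.6.6 \
  --reuse-values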

You can find the full list of Rancher versions and their support matrices in this link.

v.ng
  • 533
  • 5
  • 10