
I have installed Rancher 2 and created a Kubernetes cluster on internal VMs (no AWS / GCloud).

The cluster is up and running.

I logged into one of the nodes.

1) Installed kubectl and ran `kubectl cluster-info`. It listed my cluster information correctly.

2) Installed Helm:

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

root@lnmymachine # helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

3) Configured Helm, following Rancher's Helm Init page:

kubectl -n kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller \
  --clusterrole cluster-admin \
  --serviceaccount=kube-system:tiller

helm init --service-account tiller
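
For reference, the two kubectl commands above are equivalent to applying the following manifests (a declarative sketch; resource names taken from the commands above):

```yaml
# ServiceAccount for Tiller in kube-system
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
# Bind that ServiceAccount to the built-in cluster-admin ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

Saved as e.g. `tiller-rbac.yaml`, this would be applied with `kubectl apply -f tiller-rbac.yaml`.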

Then I tried installing Jenkins via Helm:

root@lnmymachine # helm ls
Error: Unauthorized
root@lnmymachine # helm install --name initial stable/jenkins
Error: the server has asked for the client to provide credentials

I browsed similar issues; a few of them were caused by multiple clusters being configured. I have only one cluster, and kubectl reports all of its information correctly.
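
One way to narrow this down is to verify the RBAC objects exist and then check what the tiller ServiceAccount is actually allowed to do via impersonation (a diagnostic sketch, not from the original post):

```shell
# Confirm the ServiceAccount and ClusterRoleBinding exist
kubectl -n kube-system get serviceaccount tiller
kubectl get clusterrolebinding tiller

# Ask the API server what the tiller ServiceAccount may do;
# with a working cluster-admin binding this should answer "yes"
kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:tiller
```

If the last command answers "no", the ClusterRoleBinding is not wired to the ServiceAccount that Tiller is running as.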

Any idea what's happening?

VVP
  • There seems to be a mistake: `--clusterrole cluster-admin \` is missing the "=". Can you check whether the ServiceAccount, ClusterRoleBinding and ClusterRole were created correctly? – Crou Feb 14 '19 at 12:46
  • Brilliant! It worked. I think you should post the comment as an answer. – VVP Feb 15 '19 at 00:24
  • I hope the answer is fine with you @VVP – Crou Feb 18 '19 at 12:36
  • Occasionally while running helm using `sudo` to debug the mentioned error I see this instead: *Error: failed to download [chart] (hint: running `helm repo update` may help).* Not sure why, but perhaps this will help others debug. – vhs Aug 06 '19 at 11:38

2 Answers


It seems there is a mistake in how the ClusterRoleBinding was created:

Instead of `--clusterrole cluster-admin`, you should have `--clusterrole=cluster-admin`.

You can check whether this is the case by verifying that the ServiceAccount and ClusterRoleBinding were created correctly:

kubectl describe -n kube-system sa tiller

kubectl describe clusterrolebinding tiller
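
A quicker check of the binding's wiring is to print just the role and subject it points at (a sketch; assumes the binding is named `tiller` as in the question):

```shell
# Print the bound role and first subject;
# a correct binding shows: cluster-admin kube-system/tiller
kubectl get clusterrolebinding tiller \
  -o jsonpath='{.roleRef.name} {.subjects[0].namespace}/{.subjects[0].name}{"\n"}'
```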

It seems this has already been fixed on the Rancher Helm Init page.

Crou
  • Thanks. I submitted the fix in the Rancher pages and it is accepted. – VVP Feb 18 '19 at 22:30
  • 1
    Had hopes this would address the issue given it's the selected answer and I'm also using K3s (v0.7.0). But I continue to see the same "provide credentials" message described in the OP unfortunately. – vhs Aug 06 '19 at 11:43

I was facing the same issue; the following steps worked for me.

root@node1:~# helm install --name prom-operator stable/prometheus-operator --namespace monitoring
Error: the server has asked for the client to provide credentials

Step 1: Delete the service account

root@node1:~# kubectl delete serviceaccount --namespace kube-system tiller
serviceaccount "tiller" deleted

Step 2: Delete the cluster role binding

root@node1:~# kubectl delete clusterrolebinding tiller-cluster-rule 
clusterrolebinding.rbac.authorization.k8s.io "tiller-cluster-rule" deleted

Step 3: Remove the .helm directory

root@node1:~# rm -rf .helm/

Step 4: Create the service account again

root@node1:~# kubectl create serviceaccount tiller --namespace kube-system
serviceaccount/tiller created

Step 5: Create the cluster role binding again

root@node1:~# kubectl create clusterrolebinding tiller-cluster-rule \
>  --clusterrole=cluster-admin \
>  --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created

Step 6: Run the helm init command

helm init --service-account=tiller

Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)

Step 7: Delete the tiller-deploy-xxx pod

kubectl delete pod -n kube-system -l app=helm,name=tiller

pod "tiller-deploy-5d58456765-xlns2" deleted

Wait till it is recreated.

Step 8: Install the Helm chart again

helm install --name prom-operator stable/prometheus-operator --namespace monitoring
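
After Step 8, Tiller's health can be confirmed before trusting the release (a verification sketch; `app=helm,name=tiller` are the labels `helm init` puts on the Tiller deployment):

```shell
# The Tiller pod should be Running again
kubectl -n kube-system get pods -l app=helm,name=tiller

# helm ls should now succeed instead of returning "Error: Unauthorized"
helm ls

# And the new release should show as DEPLOYED
helm status prom-operator
```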
Vipul Sharda