
I have a problem with mounting the default service account tokens in Kubernetes; it no longer works for me. I wanted to ask here directly before creating an issue on GitHub. My setup is basically an HA bare-metal cluster with a manually deployed etcd (including the CA, certs, and keys). The cluster comes up and the nodes register; I just cannot deploy pods, which always fail with the error:

MountVolume.SetUp failed for volume "default-token-ddj5s" : secrets "default-token-ddj5s" is forbidden: User "system:node:tweak-node-1" cannot get secrets in the namespace "default": no path found to object

where tweak-node-1 is one of my node names/hostnames. I have found some similar issues:

- https://github.com/kubernetes/kubernetes/issues/18239
- https://github.com/kubernetes/kubernetes/issues/25828

but none of them came close to fixing my issue, as the problem was not the same. I only use the default namespace when trying to run pods, and I tried both RBAC and ABAC authorization modes; both gave the same result. This is the template I use for deployment, showing the version and etcd config:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: IP1
  bindPort: 6443
authorizationMode: ABAC
kubernetesVersion: 1.8.5
etcd:
  endpoints:
  - https://IP1:2379
  - https://IP2:2379
  - https://IP3:2379
  caFile: /opt/cfg/etcd/pki/etcd-ca.crt
  certFile: /opt/cfg/etcd/pki/etcd.crt
  keyFile: /opt/cfg/etcd/pki/etcd.key
  dataDir: /var/lib/etcd
  etcdVersion: v3.2.9
networking:
  podSubnet: 10.244.0.0/16
apiServerCertSANs:
- IP1
- IP2
- IP3
- DNS-NAME1
- DNS-NAME2
- DNS-NAME3
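
For reference, this is how I check that the generated API server certificate actually picked up those SANs (the path assumes the default kubeadm layout, adjust it if your certs live elsewhere):

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"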

2 Answers


Your node must use credentials that match its Node API object name, as described in https://kubernetes.io/docs/admin/authorization/node/#overview:

In order to be authorized by the Node authorizer, kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>. This group and user name format match the identity created for each kubelet as part of kubelet TLS bootstrapping.
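
A quick way to check which identity a kubelet is actually presenting is to decode the client certificate referenced from its kubeconfig. This is a sketch assuming a kubeadm-provisioned node with the credential embedded in /etc/kubernetes/kubelet.conf; adjust the path if your kubelet uses a separate certificate file:

# print the subject of the kubelet's embedded client certificate
grep client-certificate-data /etc/kubernetes/kubelet.conf | awk '{print $2}' | base64 -d | openssl x509 -noout -subject

# for the Node authorizer this should look like:
#   subject= /O=system:nodes/CN=system:node:tweak-node-1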

Jordan Liggitt
  • Thank you, but the thing is I use kubeadm to init the cluster, assuming that it autogenerates the certs according to the above yaml (and since the IPs and DNS names are in the cert), I just move the entire config onto the new master nodes, adjusting the IPs in the manifests as described in multiple sources such as https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/ha_master.md and https://docs.google.com/document/d/1rEMFuHo3rBJfFapKBInjCqm2d7xGkXzh0FpFO0cRuqg/edit#. So my question is: to fix this, do I need to manually generate the certs? – abdulrahmantkhalifa Dec 20 '17 at 12:18
  • Or do I need to adjust the kubelet configs? – abdulrahmantkhalifa Dec 20 '17 at 12:21
  • I would recommend joining new nodes using `kubeadm join`, not copying config between nodes – Jordan Liggitt Dec 21 '17 at 07:50
  • Thank you for all the help, really appreciate it. I tried two solutions and both worked: 1 - as you said, using `kubeadm join`; 2 - patching the system:node clusterrolebinding to include the system:nodes group – abdulrahmantkhalifa Dec 21 '17 at 08:45

Update

So, the specific solution: the problem was that I was using version 1.8.x and was copying the certs and keys manually, so each kubelet did not have its own system:node binding or node-specific key, as described in https://kubernetes.io/docs/admin/authorization/node/#overview:

RBAC Node Permissions: In 1.8, the binding will not be created at all.

When using RBAC, the system:node cluster role will continue to be created, for compatibility with deployment methods that bind other users or groups to that role.

I fixed it in either of two ways:

1 - Using kubeadm join instead of copying the /etc/kubernetes files from master1.
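
For example (the token, CA cert hash, and address below are placeholders; the real values come from the kubeadm init output on the first master):

kubeadm join --token <token> IP1:6443 --discovery-token-ca-cert-hash sha256:<hash>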

2 - After deployment, patching the clusterrolebinding for system:node:

kubectl patch clusterrolebinding system:node -p '{"apiVersion": "rbac.authorization.k8s.io/v1beta1","kind": "ClusterRoleBinding","metadata": {"name": "system:node"},"subjects": [{"kind": "Group","name": "system:nodes"}]}'