
I manage servers and configure them with Ansible. After creating a join command with kubeadm, I want to keep that command only in the controller machine's RAM; saving the secret join command locally on the controller machine is problematic for my job's purposes. For various reasons, Ansible Vault is not an option I can work with.
Is there any way to save the join command and pass it to the worker nodes without storing it locally on the controller machine? A short-lived token is fine as long as I can join new nodes to the cluster.

Any secure approach that avoids saving the join command or token to local storage, while still allowing new nodes to join after a long period of time, would work for me.
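Something like the following is what I have in mind (a sketch only; `masters` and `workers` are placeholder inventory group names, not something from my actual setup): register the output of `kubeadm token create --print-join-command` on the master during the play, so the token only ever exists in Ansible's in-memory facts, then reference it from the worker play via `hostvars`.

```yaml
# Hedged sketch: the registered variable lives in Ansible's memory for the
# duration of the run; no_log keeps the token out of logs and console output.
- hosts: masters
  tasks:
    - name: Generate a fresh join command (token TTL is 24h by default)
      command: kubeadm token create --print-join-command
      register: join_cmd
      no_log: true

- hosts: workers
  tasks:
    - name: Join using the in-memory value from the first master
      command: "{{ hostvars[groups['masters'][0]].join_cmd.stdout }}"
      no_log: true
```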

noobmaster69
  • Is there a problem with generating a short-lived token and passing it out to the nodes inside the playbook run? Can you describe the Ansible setup a bit more? – Matt Feb 08 '20 at 05:33
  • kubeadm init creates an initial token with a 24-hour TTL. – DT. Feb 08 '20 at 06:23
  • Consider using HTTPS-based discovery mode if you are building automated provisioning with kubeadm. [Link](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/#file-or-https-based-discovery) – DT. Feb 08 '20 at 06:26
  • For my intended purpose (my job's requirements), I need the initial token not to be destroyed. – noobmaster69 Feb 08 '20 at 07:33
  • If I save the information in a bash variable, would it be held in RAM or in local storage? – noobmaster69 Feb 08 '20 at 09:43
  • @DT `init` and `token create` accept a `--token-ttl` or `--ttl` [option](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-token/#cmd-token-create). – Matt Feb 08 '20 at 21:17
  • @noobmaster69 bash variables that are set at run time live in memory. Could you add your proposed method to the question? It might be easier for people to answer then. – Matt Feb 09 '20 at 02:05
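To illustrate Matt's point with a minimal sketch (the join command below is a made-up placeholder, not a real token): command substitution keeps the value in the shell process's memory only; nothing touches disk unless you explicitly redirect it there.

```shell
# Placeholder join command; in practice this would come from running
# `kubeadm token create --print-join-command` on the master over SSH.
JOIN_CMD=$(printf '%s' 'kubeadm join 10.0.0.1:6443 --token abc123.example')

# The variable exists only in this shell process's memory.
printf '%s\n' "${JOIN_CMD}"
```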

1 Answer


I am creating small clusters with Ansible and ran into this issue as well.

My first solution was exactly what you say you don't want to do: register the join command and save it to a file on the controller (see How to grab last two lines from ansible (register stdout) initialization of kubernetes cluster). I moved to another option for simplicity, not security: it was a pain because I had to change the permissions on the join-command file after it was copied to the Ansible server so that the user I ran the playbooks as could read it. And if I used the setup for a second cluster, the join command would change, so I'd lose the old one and not be able to add nodes to the previous cluster.

My second solution, which I like better, is this:

I created a YAML init file for my worker nodes that includes a long-lived token created on the master (I'm not sure whether a long-lived token would be an issue for you). So when I join a node, Ansible first copies in the init file and then runs kubeadm join with it.

Ansible snippets:

  - name: Create the kubernetes init yaml file for worker node
    template:
      src: kubeadminitworker.yml
      dest: /etc/kubernetes/kubeadminitworker.yaml

  - name: Join the node to cluster
    command: kubeadm join --config /etc/kubernetes/kubeadminitworker.yaml
    register: join_output
  - debug:
      var: join_output.stdout

kubeadminitworker.yml:

apiVersion: kubeadm.k8s.io/v1beta2
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  file:
    kubeConfigPath: /etc/kubernetes/discovery.yml
  timeout: 5m0s
  tlsBootstrapToken: <token string removed for post>
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    cloud-provider: external

Where the token string matches what's on the master.
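The discovery.yml referenced in the join config above is just a minimal kubeconfig that points workers at the API server and carries the cluster CA. Something along these lines (the server address and CA data below are placeholders, not values from my setup):

```yaml
# Hypothetical discovery kubeconfig: only the cluster endpoint and CA
# certificate are needed for file-based discovery.
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://10.0.0.1:6443
    certificate-authority-data: <base64-encoded CA cert>
contexts: []
preferences: {}
users: []
```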

I also used an init file, which included my long-lived token, when creating the master with Ansible.

master init for reference:

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: <token string removed for post>
  ttl: 0s
  usages:
  - signing
  - authentication
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
useHyperKubeImage: false
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "172.16.0.0/16"
etcd:
  local:
    imageRepository: "k8s.gcr.io"
dns:
  type: "CoreDNS"
  imageRepository: "k8s.gcr.io"

I did this a while ago, but I believe I just ran the token create command on an existing cluster, copied the token string into my two init files, and then deleted the token from the existing cluster. So far, so good...
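Roughly, that flow was (the token value below is illustrative, not a real one):

```shell
# On an existing master: mint a non-expiring token (--ttl 0) and copy the
# printed token string into both init files.
kubeadm token create --ttl 0

# Once it's in the init files, remove it from the live cluster so the old
# cluster no longer accepts it.
kubeadm token delete abcdef.0123456789abcdef
```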

  • Just realized this might not be the secure answer you wanted... but maybe it's useful to someone :) I probably should have read the question more thoroughly. – Levi Silvertab Apr 02 '20 at 19:27