I'm trying to use Ansible to deploy a small k3s cluster with just two server nodes at the moment. Deploying the first server node, which I refer to as "master", is easy to do with Ansible. However, setting up the second server node, which I refer to as "node", is giving me trouble, because I need to pull the value of the node-token from the master and use it in the k3s install command on the "node" VM.

I'm using Ansible roles, and this is what my playbook looks like:

- hosts: all

  roles:
    - { role: k3sInstall , when: 'server_type is defined'}
    - { role: k3sUnInstall , when: 'server_type is defined'}

This is my main.yml file from the k3sInstall role directory:

- name: Install k3s Server
  import_tasks: k3s_install_server.yml
  tags:
    - k3s_install

This is my k3s_install_server.yml:

---
- name: Install k3s Cluster
  block:
    - name: Install k3s Master Server
      become: yes
      shell: "{{ k3s_master_install_cmd }}"
      when: server_role == "master"

    - name: Get Node-Token file from master server.
      become: yes
      shell: cat {{ node_token_filepath }}
      when: server_role == "master"
      register: nodetoken
    
    - name: Print Node-Token
      when: server_role == "master"
      debug:
        msg: "{{ nodetoken.stdout }}"
        # msg: "{{ k3s_node_install_cmd }}"

    - name: Set Node-Token fact
      when: server_role == "master"
      set_fact:
        nodeToken: "{{ nodetoken.stdout }}"

    - name: Print Node-Token fact
      when: server_role == "node" or server_role == "master"
      debug:
        msg: "{{ nodeToken }}"
    # - name: Install k3s Node Server
    #   become: yes
    #   shell: "{{ k3s_node_install_cmd }}{{ nodeToken }}"
    #   when: server_role == "node"

I've commented out the Install k3s Node Server task because I'm not able to properly reference the nodeToken variable that I'm setting when server_role == master.

This is the output of the debug:

TASK [k3sInstall : Print Node-Token fact] ***************************************************************************************************************************************************************************************************************************************************************************
ok: [server1] => {
    "msg": "K10cf129cfedafcb083655a1780e4be994621086f780a66d9720e77163d36147051::server:aa2837148e402f675604a56602a5bbf8"
}
ok: [server2] => {
    "msg": ""
}

My host file:

[p6dualstackservers]
server1 ansible_ssh_host=10.63.60.220
server2 ansible_ssh_host=10.63.60.221

And I have the following host_vars files assigned:

server1.yml:

server_role: master

server2.yml:

server_role: node

I've tried assigning the nodeToken variable inside k3sInstall/vars/main.yml, as well as one level above the k3sInstall role in group_vars/all.yml, but that didn't help.
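
For reference, the other variables the role uses are defined in k3sInstall/vars/main.yml. Their exact values aren't important for the problem, but they look roughly like this (the commands and the token path below are illustrative, not my literal contents):

k3s_master_install_cmd: "curl -sfL https://get.k3s.io | sh -s - server"
# the join command ends with --token= so the registered token can be appended to it
k3s_node_install_cmd: "curl -sfL https://get.k3s.io | sh -s - server --server https://10.63.60.220:6443 --token="
# default path where k3s writes the join token on a server node
node_token_filepath: /var/lib/rancher/k3s/server/node-token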

I tried searching for a way to use block-level variables but couldn't find anything.

2 Answers


If you set the variable for the master only, it is not available to the other hosts, e.g.

- hosts: master,node
  tasks:
    - set_fact:
        nodeToken: K10cf129cfedaf
      when: inventory_hostname == 'master'
    - debug:
        var: nodeToken

gives

ok: [master] => 
  nodeToken: K10cf129cfedaf
ok: [node] => 
  nodeToken: VARIABLE IS NOT DEFINED!

If you want to "apply all results and facts to all the hosts in the same batch", use run_once: true, e.g.

- hosts: master,node
  tasks:
    - set_fact:
        nodeToken: K10cf129cfedaf
      when: inventory_hostname == 'master'
      run_once: true
    - debug:
        var: nodeToken

gives

ok: [master] => 
  nodeToken: K10cf129cfedaf
ok: [node] => 
  nodeToken: K10cf129cfedaf

In your case, add run_once: true to the task:

    - name: Set Node-Token fact
      set_fact:
        nodeToken: "{{ nodetoken.stdout }}"
      when: server_role == "master"
      run_once: true

The above code works because the condition when: server_role == "master" is applied before run_once: true. Quoting from the run_once documentation:

"Boolean that will bypass the host loop, forcing the task to attempt to execute on the first host available and afterward apply any results and facts to all active hosts in the same batch."

Safer code would add a standalone set_fact instead of relying on the order in which the when: condition and run_once are applied, e.g.

    - set_fact:
        nodeToken: "{{ nodetoken.stdout }}"
      when: inventory_hostname == 'master'
    - set_fact:
        nodeToken: "{{ hostvars['master'].nodeToken }}"
      run_once: true
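
With your inventory, where the master host is called server1 rather than master, that second task would reference server1 instead (untested, based on your host_vars):

    - set_fact:
        nodeToken: "{{ nodetoken.stdout }}"
      when: server_role == "master"
    - set_fact:
        nodeToken: "{{ hostvars['server1'].nodeToken }}"
      run_once: true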

Using when is probably not the best fit for this use case; you would be better off delegating some tasks to the so-called master server.

To record which server is the master, based on your inventory variable, you can delegate a fact to localhost, for example.

Then, to get the token from the file on the master server, you can delegate that task and fact to that server only.


Given the playbook:

- hosts: all
  gather_facts: no

  tasks:
    - set_fact:
        master_node: "{{ inventory_hostname }}"
      when: server_role == 'master'
      delegate_to: localhost
      delegate_facts: true

    - set_fact:
        token: 12345678
      run_once: true
      delegate_to: "{{ hostvars.localhost.master_node }}"
      delegate_facts: true

    - debug:
        var: hostvars[hostvars.localhost.master_node].token
      when: server_role != 'master'

This yields the expected result:

PLAY [all] ********************************************************************************************************

TASK [set_fact] ***************************************************************************************************
skipping: [node1]
ok: [node2 -> localhost]
skipping: [node3]

TASK [set_fact] ***************************************************************************************************
ok: [node1 -> node2]

TASK [debug] ******************************************************************************************************
skipping: [node2]
ok: [node1] => 
  hostvars[hostvars.localhost.master_node].token: '12345678'
ok: [node3] => 
  hostvars[hostvars.localhost.master_node].token: '12345678'

PLAY RECAP ********************************************************************************************************
node1                      : ok=2    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
node2                      : ok=1    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
node3                      : ok=1    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
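
As an untested sketch tying this back to your role: the same delegation idea, but reading the real token with the slurp module instead of hard-coding 12345678, and reusing the node_token_filepath and k3s_node_install_cmd variables from your question:

- hosts: all
  gather_facts: no

  tasks:
    - set_fact:
        master_node: "{{ inventory_hostname }}"
      when: server_role == 'master'
      delegate_to: localhost
      delegate_facts: true

    # read the token file on the master; run_once applies the registered
    # result to every host in the batch
    - become: yes
      slurp:
        src: "{{ node_token_filepath }}"
      register: node_token_file
      delegate_to: "{{ hostvars.localhost.master_node }}"
      run_once: true

    # append the decoded token to the join command on the node servers
    - become: yes
      shell: "{{ k3s_node_install_cmd }}{{ node_token_file.content | b64decode | trim }}"
      when: server_role == 'node'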