
I have the following code in Ansible.

The elasticsearch role is set up to run as below on the elasticsearch host group, which consists of 3 ES nodes.

- name: elasticsearch
  hosts: elasticsearch
  become: true
  gather_facts: false
  roles:
    - elasticsearch

Inventory (.ini) file:

[elasticsearch]
elasticsearch_1
elasticsearch_2
elasticsearch_3

It has the task below under roles/elasticsearch/tasks/alias.yml, which should run on only one of the ES nodes, not all of them.

- name: create access initial index write true
  uri:
    method: PUT
    url: "http://{{ elasticsearch1_private_ip }}:{{ elasticsearch_port }}/access-000001?pretty"
    body: "{{ lookup('file', es_files + '/access_alias.json') }}"
    body_format: json
    user: "{{ elasticsearch_username }}"
    password: "{{ elasticsearch_password }}"
    status_code: 200

Output of the above task:

TASK [elasticsearch : create access initial index write true] ***************************************************************************************************************
fatal: [elasticsearch_2 -> 10.10.10.1]: FAILED! => {
    "changed": false,
    "content": "{\n  \"error\" : {\n    \"root_cause\" : [\n      {\n        \"type\" : \"resource_already_exists_exception\",\n        \"reason\" : \"index [access-000001/N24rDMyXuQdOQ] already exists\",\n        \"index_uuid\" : \"N24rDMyRSGQmpdOQ\",\n        \"index\" : \"access-000001\"\n      }\n    ],\n    \"type\" : \"resource_already_exists_exception\",\n    \"reason\" : \"index [access-000001/N24rDMy7XuQmpdOQ] already exists\",\n    \"index_uuid\" : \"N24rDMyR7XuQmpdOQ\",\n    \"index\" : \"access-000001\"\n  },\n  \"status\" : 400\n}\n",
    "content_length": "522",
    "content_type": "application/json; charset=UTF-8",
    "json": {
        "error": {
            "index": "access-000001",
            "index_uuid": "N24rDMy7XuQmpdOQ",
            "reason": "index [access-000001/N24rDMyRluQmpdOQ] already exists",
            "root_cause": [
                {
                    "index": "access-000001",
                    "index_uuid": "N24rDMyRuQmpdOQ",
                    "reason": "index [access-000001/N24rDMyRXuQmpdOQ] already exists",
                    "type": "resource_already_exists_exception"
                }
            ],
            "type": "resource_already_exists_exception"
        },
        "status": 400
    },
    "redirected": false,
    "status": 400,
    "url": "http://10.10.10.1:9200/access-000001?pretty",
    "x_elastic_product": "Elasticsearch"
}
MSG:

Status code was 400 and not [200]: HTTP Error 400: Bad Request

fatal: [elasticsearch_3 -> 10.10.10.1]: FAILED! => {
    "changed": false,
    "content": "{\n  \"error\" : {\n    \"root_cause\" : [\n      {\n        \"type\" : \"resource_already_exists_exception\",\n        \"reason\" : \"index [access-000001/N24rDMyGalQmpdOQ] already exists\",\n        \"index_uuid\" : \"N24rDMyuQmpdOQ\",\n        \"index\" : \"access-000001\"\n      }\n    ],\n    \"type\" : \"resource_already_exists_exception\",\n    \"reason\" : \"index [access-000001/N24rDMGaQmpdOQ] already exists\",\n    \"index_uuid\" : \"N24rDMyRSGmpdOQ\",\n    \"index\" : \"access-000001\"\n  },\n  \"status\" : 400\n}\n",
    "content_length": "522",
    "content_type": "application/json; charset=UTF-8",
    "json": {
        "error": {
            "index": "access-000001",
            "index_uuid": "N24rDMy7XuQmpdOQ",
            "reason": "index [access-000001/N24rDMQmpdOQ] already exists",
            "root_cause": [
                {
                    "index": "access-000001",
                    "index_uuid": "N24rDMyGalh7pdOQ",
                    "reason": "index [access-000001/N24rDMyXuQmpdOQ] already exists",
                    "type": "resource_already_exists_exception"
                }
            ],
            "type": "resource_already_exists_exception"
        },
        "status": 400
    },
    "redirected": false,
    "status": 400,
    "url": "http://10.10.10.1:9200/access-000001?pretty",
    "x_elastic_product": "Elasticsearch"
}
MSG:

Status code was 400 and not [200]: HTTP Error 400: Bad Request

ok: [elasticsearch_1 -> 10.10.10.1]

It looks like this happens because the play runs on all three nodes in the elasticsearch group: the task succeeds on the first node, and when it runs on the remaining two nodes it fails with resource_already_exists_exception.

There are other tasks similar to the one above.

- name: create federate initial index write true
  uri:
    method: PUT
    url: "http://{{ elasticsearch1_private_ip }}:{{ elasticsearch_port }}/federate-000001?pretty"
    body: "{{ lookup('file', es_files + '/federate_alias.json') }}"
    body_format: json
    user: "{{ elasticsearch_username }}"
    password: "{{ elasticsearch_password }}"
    status_code: 200

- name: create directory initial index write true
  uri:
    method: PUT
    url: "http://{{ elasticsearch1_private_ip }}:{{ elasticsearch_port }}/directory-000001?pretty"
    body: "{{ lookup('file', es_files + '/directory_alias.json') }}"
    body_format: json
    user: "{{ elasticsearch_username }}"
    password: "{{ elasticsearch_password }}"
    status_code: 200

Below is their run output; they run only on the first node and do not even attempt to run on the remaining two nodes.

TASK [elasticsearch : create federate initial index write true] *************************************************************************************************************

ok: [elasticsearch_1]


TASK [elasticsearch : create directory initial index write true] ************************************************************************************************************

ok: [elasticsearch_1]

So I added delegate_to: elasticsearch_1 to the first task (create access initial index write true) so that it would run on only the first node, but I am still getting the same error output, and because of this the final Ansible recap shows failed=1.

elasticsearch_2            : ok=37   changed=13   unreachable=0    failed=1
elasticsearch_3            : ok=37   changed=13   unreachable=0    failed=1

One more thing: a .retry file is being created that contains the es2 and es3 servers (I'm not sure exactly after which task this file gets created). Maybe this file is created at the first task, and because it is present the second task doesn't even try to run on those two servers?
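For context on the .retry speculation: a .retry file only lists the hosts that failed, so that a later run can be restricted to them with --limit @<playbook>.retry; it has no effect on task execution within the same run. If the files are unwanted, generating them can be turned off in ansible.cfg:

```ini
# ansible.cfg
[defaults]
retry_files_enabled = False
```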

Q. Can anyone point out why the other tasks run on only one node while the first task tries to run on all three nodes and therefore fails?

Q. What needs to be added to run the first task on only one node?

Thanks,

prat
  • To clarify, you're running into this issue when run it with `delegate_to: elasticsearch_1` on a fresh cluster that doesn't already have that resource? Because if you're not running on a fresh cluster, then previous runs of the playbook could have created that resource and so it will fail in subsequent runs even if it only runs once. – Rickkwa Oct 08 '21 at 00:00
  • Hi @Rickkwa, thanks for your reply. I had not added delegate_to initially, i.e. I first ran the playbook without delegate_to, but when I got this issue I tried adding it and it didn't help. Every time I run this playbook, I run a cleanup ELK playbook first, which removes all indices, packages, folders etc. And if something were left over from a previous run, then all of the above tasks should give the same error, but only the first task gives the error. Thanks – prat Oct 08 '21 at 01:45

1 Answer


Try using run_once: true on the task instead. Even with delegate_to, Ansible still runs one instance of the task per inventory host -- just on the delegated host instead of the inventory host.

For example,

---

- hosts: elasticsearch
  tasks:
    - debug:
        msg: "{{ inventory_hostname }}"
      delegate_to: elasticsearch_2

will still run 3 times:

TASK [debug] *****************************************************
ok: [elasticsearch_1 -> elasticsearch_2] => {
    "msg": "elasticsearch_1"
}
ok: [elasticsearch_2 -> elasticsearch_2] => {
    "msg": "elasticsearch_2"
}
ok: [elasticsearch_3 -> elasticsearch_2] => {
    "msg": "elasticsearch_3"
}

But if you instead do run_once: true:

---

- hosts: elasticsearch
  tasks:
    - debug:
        msg: "{{ inventory_hostname }}"
      run_once: true

Then it truly only runs once.

TASK [debug] **********************************************
ok: [elasticsearch_1] => {
    "msg": "elasticsearch_1"
}

You can also use both delegate_to and run_once if you want the one run to be from a specific host.
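Applied to the task from the question, the combination would look something like this (a sketch reusing the task and hostname from the question):

```yaml
- name: create access initial index write true
  uri:
    method: PUT
    url: "http://{{ elasticsearch1_private_ip }}:{{ elasticsearch_port }}/access-000001?pretty"
    body: "{{ lookup('file', es_files + '/access_alias.json') }}"
    body_format: json
    user: "{{ elasticsearch_username }}"
    password: "{{ elasticsearch_password }}"
    status_code: 200
  # run exactly once per play, and issue the request from elasticsearch_1
  run_once: true
  delegate_to: elasticsearch_1
```

Note that since the uri module already targets elasticsearch1_private_ip directly, run_once alone is enough to fix the error; delegate_to only controls which host the HTTP request is issued from.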

Rickkwa