
I am trying to develop a simple playbook (which would later be used in larger ones) to check whether the Windows VMs in my inventory are up and running. I use Ansible Tower (free version) to manage a dynamic VMware inventory containing Windows VMs. These VMs are pre-configured to work with Ansible (WinRM enabled, etc.), so I don't maintain any manually edited hosts files.

- name: Check if VMs are up and running
  hosts: localhost

  tasks:
    - name: Pauses the workflow
      pause: minutes=5

    - name: Wait for port number 5986 to be available
      vars:
        - vmname: ['VM-NO1', 'VM-NO2']
      local_action: wait_for host={{ hostvars[item].ansible_ssh_host }} state=started delay=10 timeout=15 connect_timeout=15
      with_items: "{{ vmname }}"

The pause is there to give the VMs some time to boot; I have tried values ranging from 1 to 5 minutes. The VMs actually come up in less than 3 minutes.

I am facing a strange issue with wait_for. While the VMs are up and running, as can be seen from the vCenter console, Ansible reports this failure:

fatal: [localhost]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute 'ansible_ssh_host'\n\nThe error appears to have been in '/var/lib/awx/projects/vms/waitcheck.yml': line 10, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Wait for port number 5986 to be available\n ^ here\n"}

I have tried adding and removing the port=5986 flag in the wait_for task. Surprisingly, the same playbook runs and reports success when run a second time. How can I resolve this?

Chethan S.
  • How do you expect `hostvars` to contain `VM-NO1` if you are running the play against `localhost`? Besides the `-name:` referenced in the error message is not in the line no. 10 from the playbook you included, so I suppose you are not showing the real thing here. – techraf Aug 10 '16 at 13:14
  • I am showing the relevant part of the failure message and the full playbook. I had two lines of comments in the playbook which I have removed; plus the error clearly refers to ansible_ssh_host. Unless I run it against localhost, the playbook would try to do a setup against all the VMs in the inventory, which fails as the VMs would still be coming up at that point. As I noted in my question, it runs perfectly when I run it a second time, for whatever reason. – Chethan S. Aug 10 '16 at 13:25
  • 1
    I did not ask why you run it against localhost, but how do you expect `hostvars` to contain data from a server it did not collect facts from? – techraf Aug 10 '16 at 13:28
  • Ok! So how do I set things right? I'm a newbie. Your suggestion would help me move over this issue. :) – Chethan S. Aug 10 '16 at 13:33
  • I earlier had vars defined outside (just below hosts, before the tasks). It didn't really make a difference. – Chethan S. Aug 10 '16 at 13:36
  • @knowhy yes, you can, try – techraf Aug 10 '16 at 13:36
  • All you wrote is "*if Windows VMs in the inventory*", and from that you expect everyone to deduce what you have and what you intend to do. Ok, let me play fortune teller... You think you are accessing an inventory file, but you are not. You need to use syntax like here: http://serverfault.com/a/795779/197039 – techraf Aug 10 '16 at 13:40
  • I didn't realize the importance of providing those details which have now been updated. I use Ansible Tower's sync capabilities to add VMs to my inventory. – Chethan S. Aug 10 '16 at 14:09

1 Answer


With Ansible 2.3, run the check from the control machine (delegate it to localhost) and look up each VM's address through `hostvars` using `ansible_host`:

  tasks:
    - name: Pauses the workflow
      pause: minutes=5

    - name: Wait for port number 5986 to be available
      wait_for:
        host: "{{ hostvars[item]['ansible_host'] }}"
        port: 5986
        delay: 10
        state: started
      with_items:
        - VM-NO1
        - VM-NO2
      delegate_to: 127.0.0.1
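
A sketch of an equivalent approach (not from the original answer), assuming the Tower/VMware inventory sync puts the Windows VMs into a group (called `windows_vms` here purely for illustration) and populates `ansible_host`: target that group directly, delegate the port check to the controller, and then verify that WinRM itself answers with `win_ping`:

- name: Check if Windows VMs are reachable
  hosts: windows_vms          # hypothetical group name from the dynamic inventory
  gather_facts: no

  tasks:
    - name: Wait for WinRM port 5986 on each VM, checked from the controller
      wait_for:
        host: "{{ ansible_host }}"
        port: 5986
        delay: 10
        timeout: 300
      delegate_to: localhost

    - name: Confirm WinRM is actually answering
      win_ping:

Because the play targets the Windows hosts themselves, the address lookup no longer depends on `hostvars[item]`, and `win_ping` only succeeds once WinRM authentication works, not just when the port is open.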
xddsg