
I am trying to run ansible-playbook but it hangs at setup. My playbook does a lot of work, calling different roles and modules, and it also gathers facts. It used to work fine earlier, but now I am not sure what went wrong. Any help is appreciated.

  • The host operating system is RHEL 7
  • Passwordless SSH authentication is set up between the systems
  • My inventory file contains only one host

The command I was running:

 ansible-playbook  -i /tmp/tmpBo5Xmj -vvvvv playbook.yml -c ssh

Here is the verbose log:

TASK [setup] *******************************************************************
<172.17.239.193> ESTABLISH SSH CONNECTION FOR USER: ansible
<172.17.239.193> SSH: ansible.cfg set ssh_args: (-o)(UserKnownHostsFile=/dev/null)(-o)(StrictHostKeyChecking=no)
<172.17.239.193> SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no)
<172.17.239.193> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<172.17.239.193> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ansible)
<172.17.239.193> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<172.17.239.193> SSH: PlayContext set ssh_common_args: ()
<172.17.239.193> SSH: PlayContext set ssh_extra_args: ()
<172.17.239.193> SSH: EXEC ssh -C -vvv -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible -o ConnectTimeout=10 172.17.239.193 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1474582282.38-93511913696801 `" && echo ansible-tmp-1474582282.38-93511913696801="` echo $HOME/.ansible/tmp/ansible-tmp-1474582282.38-93511913696801 `" ) && sleep 0'"'"''
<172.17.239.193> PUT /tmp/tmpAKnqv6 TO /home/ansible/.ansible/tmp/ansible-tmp-1474582282.38-93511913696801/setup
<172.17.239.193> SSH: ansible.cfg set ssh_args: (-o)(UserKnownHostsFile=/dev/null)(-o)(StrictHostKeyChecking=no)
<172.17.239.193> SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no)
<172.17.239.193> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<172.17.239.193> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ansible)
<172.17.239.193> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<172.17.239.193> SSH: PlayContext set ssh_common_args: ()
<172.17.239.193> SSH: PlayContext set sftp_extra_args: ()
<172.17.239.193> SSH: EXEC sftp -b - -C -vvv -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible -o ConnectTimeout=10 '[172.17.239.193]'
<172.17.239.193> ESTABLISH SSH CONNECTION FOR USER: ansible
<172.17.239.193> SSH: ansible.cfg set ssh_args: (-o)(UserKnownHostsFile=/dev/null)(-o)(StrictHostKeyChecking=no)
<172.17.239.193> SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no)
<172.17.239.193> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<172.17.239.193> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ansible)
<172.17.239.193> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<172.17.239.193> SSH: PlayContext set ssh_common_args: ()
<172.17.239.193> SSH: PlayContext set ssh_extra_args: ()
<172.17.239.193> SSH: EXEC ssh -C -vvv -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible -o ConnectTimeout=10 -tt 172.17.239.193 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-njtihbebdvbpospbpivnpwbhrqtnfylc; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1474582282.38-93511913696801/setup; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1474582282.38-93511913696801/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''

On the target system I could see the following Python processes running:

[root@odcrac01 ~]# ps -ef | grep python| grep ansible
ansible  12600 12568  0 07:18 pts/0    00:00:00 /bin/sh -c sudo -H -S  -p "[sudo via ansible, key=tdtazugynuyekapktrkwjrwuawfvgkme] password: " -u root /bin/sh -c 'echo BECOME-SUCCESS-tdtazugynuyekapktrkwjrwuawfvgkme; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1474582204.64-194542154309618/setup; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1474582204.64-194542154309618/" > /dev/null 2>&1' && sleep 0
root     12613 12600  0 07:18 pts/0    00:00:00 sudo -H -S -p [sudo via ansible, key=tdtazugynuyekapktrkwjrwuawfvgkme] password:  -u root /bin/sh -c echo BECOME-SUCCESS-tdtazugynuyekapktrkwjrwuawfvgkme; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1474582204.64-194542154309618/setup; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1474582204.64-194542154309618/" > /dev/null 2>&1
root     12614 12613  0 07:18 pts/0    00:00:00 /bin/sh -c echo BECOME-SUCCESS-tdtazugynuyekapktrkwjrwuawfvgkme; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1474582204.64-194542154309618/setup; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1474582204.64-194542154309618/" > /dev/null 2>&1
root     12615 12614  0 07:18 pts/0    00:00:00 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1474582204.64-194542154309618/setup
root     12616 12615  0 07:18 pts/0    00:00:00 /usr/bin/python /tmp/ansible_0loivr/ansible_module_setup.py
ansible  15436 15435  0 07:20 pts/1    00:00:00 /bin/sh -c sudo -H -S -n -u root /bin/sh -c 'echo BECOME-SUCCESS-njtihbebdvbpospbpivnpwbhrqtnfylc; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1474582282.38-93511913696801/setup; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1474582282.38-93511913696801/" > /dev/null 2>&1' && sleep 0
root     15449 15436  0 07:20 pts/1    00:00:00 sudo -H -S -n -u root /bin/sh -c echo BECOME-SUCCESS-njtihbebdvbpospbpivnpwbhrqtnfylc; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1474582282.38-93511913696801/setup; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1474582282.38-93511913696801/" > /dev/null 2>&1
root     15450 15449  0 07:20 pts/1    00:00:00 /bin/sh -c echo BECOME-SUCCESS-njtihbebdvbpospbpivnpwbhrqtnfylc; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1474582282.38-93511913696801/setup; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1474582282.38-93511913696801/" > /dev/null 2>&1
root     15451 15450  0 07:20 pts/1    00:00:00 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1474582282.38-93511913696801/setup
root     15452 15451  0 07:20 pts/1    00:00:00 /usr/bin/python /tmp/ansible_PJZfVt/ansible_module_setup.py

Here is the simple playbook, where I have set become: yes and become_user: root. When I set become: yes it hangs:

 - name: list files in target system
   hosts: clonedb
   user: ansible
   become: yes
   become_user: root
   gather_facts: yes
   tasks:
   - name: list files in target system
     command: ls
     always_run: true
     tags: list

If I comment out become and become_user, it works fine. I have added the ansible user to the sudoers list on the target system, but it still hangs.

On the target system, I gave the "ansible" user sudo permission via this sudoers entry:

ansible         ALL=(ALL)       NOPASSWD: ALL

If I run a sudo command as the ansible user on the target system, it works fine:

[ansible@odcrac01 ~]$ sudo ls ~root
anaconda-ks.cfg  cvuqdisk-1.0.9-1.rpm  install.log  install.log.syslog  remove_disk.sh
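
For completeness, the non-interactive form that Ansible uses (per the -vvvvv log above) can be reproduced by hand; the flags below are copied from the EXEC line in the log, so this is only an approximation of what Ansible actually runs:

    # Run from the control machine; flags taken from the verbose log above.
    # With -n, sudo fails immediately instead of prompting, so a non-zero rc
    # here would point at the sudoers configuration.
    ssh ansible@172.17.239.193 'sudo -H -S -n -u root /bin/sh -c true; echo rc=$?'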

But on another system, it works fine:

(virtualapp) [ansible@OEL72-37-70 lib]$ python odcansible.py
sys path:['/home/ansible/virtualapp/pypi_portal/lib', '/home/ansible/virtualapp/lib64/python27.zip', '/home/ansible/virtualapp/lib64/python2.7', '/home/ansible/virtualapp/lib64/python2.7/plat-linux2', '/home/ansible/virtualapp/lib64/python2.7/lib-tk', '/home/ansible/virtualapp/lib64/python2.7/lib-old', '/home/ansible/virtualapp/lib64/python2.7/lib-dynload', '/usr/lib64/python2.7', '/usr/lib/python2.7', '/home/ansible/virtualapp/lib/python2.7/site-packages']
PLAY [create temporary directory in target system] *****************************

TASK [setup] *******************************************************************
<172.17.58.95> ESTABLISH SSH CONNECTION FOR USER: ansible
<172.17.58.95> SSH: ansible.cfg set ssh_args: (-o)(UserKnownHostsFile=/dev/null)(-o)(StrictHostKeyChecking=no)(-o)(IdentitiesOnly=yes)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<172.17.58.95> SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no)
<172.17.58.95> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ansible)
<172.17.58.95> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<172.17.58.95> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r)
<172.17.58.95> SSH: EXEC sshpass -d14 ssh -C -vvv -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=ansible -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r 172.17.58.95 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1474661320.71-273658467725557 `" && echo ansible-tmp-1474661320.71-273658467725557="` echo $HOME/.ansible/tmp/ansible-tmp-1474661320.71-273658467725557 `" ) && sleep 0'"'"''
<172.17.58.95> PUT /tmp/tmpITvUgQ TO /home/ansible/.ansible/tmp/ansible-tmp-1474661320.71-273658467725557/setup
<172.17.58.95> SSH: disable batch mode for sshpass: (-o)(BatchMode=no)
<172.17.58.95> SSH: ansible.cfg set ssh_args: (-o)(UserKnownHostsFile=/dev/null)(-o)(StrictHostKeyChecking=no)(-o)(IdentitiesOnly=yes)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<172.17.58.95> SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no)
<172.17.58.95> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ansible)
<172.17.58.95> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<172.17.58.95> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r)
<172.17.58.95> SSH: EXEC sshpass -d14 sftp -o BatchMode=no -b - -C -vvv -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=ansible -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r '[172.17.58.95]'
<172.17.58.95> ESTABLISH SSH CONNECTION FOR USER: ansible
<172.17.58.95> SSH: ansible.cfg set ssh_args: (-o)(UserKnownHostsFile=/dev/null)(-o)(StrictHostKeyChecking=no)(-o)(IdentitiesOnly=yes)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<172.17.58.95> SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no)
<172.17.58.95> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ansible)
<172.17.58.95> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<172.17.58.95> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r)
<172.17.58.95> SSH: EXEC sshpass -d14 ssh -C -vvv -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=ansible -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r -tt 172.17.58.95 '/bin/sh -c '"'"'sudo -H -S  -p "[sudo via ansible, key=wcazqfwywctzrpesmznhbpbibluqmkqg] password: " -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-wcazqfwywctzrpesmznhbpbibluqmkqg; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1474661320.71-273658467725557/setup; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1474661320.71-273658467725557/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
ok: [172.17.58.95]
Vivek Basappa
  • It seems strange that `sudo` is running, considering it's still at the setup step. But that makes me think that the problem is actually with your sudoers configuration. – Xiong Chiamiov Sep 22 '16 at 22:55
  • I am also facing this issue; reverting from SSH key to normal username/password helped. Do you have a proper solution for this? – Anand Varkey Philips Aug 12 '18 at 16:52

5 Answers


Can you please clarify your question?

You state your goal is to gather facts about a host - but your playbook does not reflect that.

If your only goal is to gather facts about a host, you can use the setup module directly; you do not need a playbook for that.

ansible clonedb -m setup -u ansible

The above ad-hoc command will gather facts for your "clonedb" host group, authenticating as the user "ansible". If you do not use SSH keys to authenticate to your servers, you will also need to pass the -k option to prompt for an SSH password.
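
For instance, with the inventory file from the question (the filter variant is optional and merely narrows the output):

    # Prompt for the SSH password with -k if key-based auth is not set up.
    ansible clonedb -i /tmp/tmpBo5Xmj -m setup -u ansible -k

    # Restrict the returned facts with the setup module's filter parameter.
    ansible clonedb -i /tmp/tmpBo5Xmj -m setup -u ansible -a 'filter=ansible_distribution*'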

The best way to gather facts would be via a playbook, however. You can further simplify your playbook and just do the following:

---
- hosts: clonedb
  user: ansible

  tasks:
  - name: gather facts
    action: setup

You do not need a privileged account to gather facts about hosts.

The "gather_facts" option is by default set to True. It is not necessary to specify it in your playbook unless you have explicitly set it to "False" in your ansible.cfg.

If you need facts to persist, store them in Redis or a JSON file via fact caching, because once the playbook completes the facts are removed from memory (a configuration sketch follows the link below).

http://docs.ansible.com/ansible/playbooks_variables.html#fact-caching
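
A minimal ansible.cfg sketch for JSON-file fact caching (option names are from the fact-caching docs linked above; the cache path and timeout are placeholders to adjust):

    [defaults]
    gathering = smart
    fact_caching = jsonfile
    fact_caching_connection = /tmp/ansible_facts_cache
    fact_caching_timeout = 86400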

EDIT:

A simplified version of your playbook:

---
- hosts: clonedb
  user: ansible
  become: yes

  tasks:
  - name: list files
    command: ls
    always_run: true
    register: listfiles
  - debug: var=listfiles
Avalon
  • Thanks for your reply. I do more than just gather facts; I just gave an example of one task (ls). I have several roles which I call in that playbook; some of them run as the root, oracle, or grid user, so I need to use become: yes and become_user: root. Somehow that is not working. – Vivek Basappa Sep 22 '16 at 23:02
  • I recommend simplifying your playbook. It isn't necessary to use become_user: root and also become: yes; become: yes is privilege escalation. Your playbook runs on my test VM, so there might be an issue with the sudoers file on the remote machine. – Avalon Sep 22 '16 at 23:13
  • I updated my answer to include a simplified version of your playbook that will display the file listing. – Avalon Sep 22 '16 at 23:17
  • @VivekBasappa - I can't comment on your answer so I'll ask here. Can you please clarify why my playbook did not work? I tested prior to providing it and it worked fine. Again, the default escalation user is root so having both "become: yes" and "become_user: root" is redundant. If you put "become" in your task you will have to invoke that module for every task you have in your playbook that requires escalation, which is why you should not put it in your task. – Avalon Sep 22 '16 at 23:43
  • I am not sure why; I am still trying to find out why this is not working for me (it fails on only this one system). – Vivek Basappa Sep 23 '16 at 00:36

Check the permissions on the home directory of the user you are connecting as on the target machine. On my own target machine I ran chmod -R 775 /home/ansible; substitute your own user's home directory. (See the check below.)
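
Before changing anything, it is worth inspecting the directories Ansible depends on (paths assume the ansible user from the question):

    # sshd's StrictModes refuses key auth if the home directory or ~/.ssh is
    # group/world writable, and Ansible needs ~/.ansible/tmp to be writable.
    ls -ld /home/ansible /home/ansible/.ssh /home/ansible/.ansible

Note that a blanket chmod -R 775 makes the home directory group-writable, which can itself break SSH public-key authentication, so verify the result against sshd's StrictModes requirements.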

korotindev

I had the same issue and I fixed it by logging in to the remote box and deleting the directory created by Ansible:

(remote_box)$ rm -Rf ~/.ansible

It was due to the fact that I had interrupted a previous ansible session.
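
If the interrupted run also left a stale SSH control socket behind on the control machine, clearing it may help as well; the path below is the default ControlPath visible in the verbose log above, and may differ in your setup:

    # On the control machine, not the remote box:
    rm -f ~/.ansible/cp/ansible-ssh-*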

raratiru

Check that the problem host has no broken NFS mounts.

I had the same issue on particular hosts; the problem was found by running the copied setup module in place:

$ python ./setup.py 
(after a long wait)
NFS server 192.168.1.22 not responding still trying

You can check mounts with the mount command:

mount
...
/mnt/share on 192.168.1.22:/share remote/read/write/setuid/nodevices/rstchow/xattr/zone=dom87zone4/sharezone=2/dev=9640001 on Wed Feb 20 13:23:43 2019
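
Since fact gathering stats every mounted filesystem, a single unresponsive NFS server can block it indefinitely. A sketch for checking Linux hosts without hanging your shell (the 5-second timeout is arbitrary):

    # List NFS mount points from /proc/mounts and stat each with a timeout,
    # so a hung server reports a message instead of blocking the loop.
    awk '$3 ~ /^nfs/ {print $2}' /proc/mounts | while read -r mp; do
        timeout 5 stat -t "$mp" > /dev/null || echo "$mp is not responding"
    done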
Sasha Golikov

After doing some debugging and web searching, I found this issue: https://github.com/ansible/ansible/issues/12025

This is on Ansible 2.0: if I set become: yes at the play level, before the tasks, it hangs; but if I set it inside the task, it works.

The playbook below will not work:

- hosts: clonedb
  user: ansible
  become: yes

  tasks:
  - name: list files
    command: ls
    always_run: true
    register: listfiles
  - debug: var=listfiles

but this one works:

- hosts: clonedb
  user: ansible

  tasks:
  - name: list files
    command: ls
    always_run: true
    become: yes
    become_user: root
    register: listfiles
  - debug: var=listfiles

With become set at the task level, the initial fact-gathering (setup) step runs as the unprivileged ansible user, which is presumably why it no longer hangs.
Vivek Basappa