41

I'm having trouble running my Ansible playbook on an AWS instance. Here is my version:

$ ansible --version
ansible 2.0.0.2

I created an inventory file as:

[my_ec2_instance]
default ansible_host=MY_EC2_ADDRESS ansible_user='ubuntu' ansible_ssh_private_key_file='/home/MY_USER/MY_KEYS/MY_KEY.pem'

Testing connection to my server:

$ ansible -i provisioner/inventory my_ec2_instance -m ping
default | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

Now, when running my playbook against this inventory, I get the error "Timeout (12s) waiting for privilege escalation prompt", as follows:

$ ansible-playbook -i provisioner/inventory -l my_ec2_instance provisioner/playbook.yml

PLAY [Ubuntu14/Python3/Postgres/Nginx/Gunicorn/Django stack] *****

TASK [setup] *******************************************************************
fatal: [default]: FAILED! => {"failed": true, "msg": "ERROR! Timeout (12s) waiting for privilege escalation prompt: "}

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
default                    : ok=0    changed=0    unreachable=0    failed=1

If I run the same playbook using .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory as the inventory parameter, it works perfectly on my Vagrant instance (which, I believe, proves there is nothing wrong with the playbook/roles themselves).

Also, if I run it with -vvvv, copy the exec ssh line, and run it manually, it indeed connects to AWS without problems.

Do I need to add any other parameter to my inventory file to connect to an EC2 instance? What am I missing?

vmenezes
  • People having similar problems have reported different solutions, because there really are many possible causes. I'll tell you mine: the .profile or .bash_profile at the destination contained a bash command, as a crude way of changing the user's shell from ksh to bash. My advice is to test with the default versions of such profile scripts on the target machine. – Corral Apr 24 '20 at 18:15
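For illustration, the kind of line Corral describes usually sits at the end of ~/.profile or ~/.bash_profile on the target and looks something like this (an assumed example, not taken from the comment):

# crude shell switch: replaces the login shell with bash
exec bash

Anything in those files that replaces or blocks the login shell can interfere with the shell Ansible opens for privilege escalation, so testing with the distribution's default profile scripts is a sensible first check.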

13 Answers

37
$ vim /etc/ansible/ansible.cfg

Increase the SSH timeout (the default is 10 seconds):

[defaults]
timeout = 60
mikijov
  • This worked for me, as my LDAP is slow and increasing the timeout helped. However, I'd recommend editing a local ansible.cfg instead of the global /etc/ansible/ansible.cfg. – Howard Lee Mar 20 '20 at 17:43
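As the comment suggests, the same setting can also live in a project-local ansible.cfg, since Ansible reads ./ansible.cfg from the working directory before falling back to /etc/ansible/ansible.cfg. A minimal sketch, with the 60-second value purely illustrative:

# ./ansible.cfg, next to the playbook
[defaults]
# connection timeout in seconds; the global default is 10
timeout = 60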
25

There is a GitHub issue about this error that affects various versions of Ansible 2.x: https://github.com/ansible/ansible/issues/13278#issuecomment-216307695

My solution was simply to add timeout=30 to /etc/ansible/ansible.cfg.

This is not a "task" or "role" timeout and was enough to solve the error (I do have some roles/tasks that take much longer than that).

the
vmenezes
8

In my case, the root cause was an incorrect localhost entry in /etc/hosts, which caused a 20-second delay for every sudo command.

127.0.0.1 wronghostname

Changing it to the correct hostname fixed it. No more delay for sudo/privileged commands.
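For illustration only, a corrected /etc/hosts would typically start like this (my-hostname stands in for whatever name the machine actually reports; not taken from the answer):

127.0.0.1   localhost
127.0.1.1   my-hostname

The point is that the name the machine reports for itself must resolve locally; otherwise sudo can stall on name resolution before it ever shows a prompt.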

Donn Lee
6

In my case it was because my playbook had

become_method: su
become_flags: "-"

which triggers a password prompt on the host.

Adding --ask-become-pass to the ansible-playbook … command and entering the password solved the issue.
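A sketch of that combination, assuming a play roughly like the one described; the host pattern, the example task and the inventory path are placeholders, not taken from the answer:

- hosts: webservers
  become: yes
  become_method: su
  become_flags: "-"
  tasks:
    - name: Confirm the escalation worked
      command: whoami

Run it with the become password prompt enabled:

$ ansible-playbook -i inventory playbook.yml --ask-become-pass

With --ask-become-pass (or -K), Ansible asks for the become password once at startup and supplies it to su, instead of waiting on a prompt it can never answer.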

GG.
4

I ran the command as follows and it works:

ansible-playbook -c paramiko httpd.yml

As the issue is related to the OpenSSL implementation, using paramiko sidesteps it.
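If you would rather not pass -c on every run, the same connection plugin can be selected in ansible.cfg instead; a minimal sketch for the Ansible versions discussed here (paramiko must be installed on the control node):

[defaults]
# use the paramiko connection plugin instead of the OpenSSH binary
transport = paramiko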

Zulu
sambit
  • This is not an answer to the well-developed question. – macetw Jun 04 '19 at 16:36
  • Actually, it is: https://github.com/ansible/ansible/issues/14426 – james.garriss Nov 05 '19 at 19:32
  • While james.garriss' link confirms this is useful information, I'd agree with macetw. Given that an external link is necessary for the context, the answer is poorly written. It should be self-evident from the answer text why this is a solution to the question. – duct_tape_coder May 14 '21 at 15:52
4

I had the same issue. I was able to solve it by adding become_exe: sudo su -:

- hosts: "{{ host | default('webservers')}}"
  become: yes
  become_user: someuser
  become_method: su
  become_exe: sudo su -


AbhiOps
  • just adding "become_exe: sudo su" -- worked for me on raspberry pi....odd I think -- I already had the other become* lines... – jouell Nov 04 '21 at 02:09
3

Ansible's default ssh_args setting, as documented at https://docs.ansible.com/ansible/latest/reference_appendices/config.html#ansible-ssh-args, is

-C -o ControlMaster=auto -o ControlPersist=60s

and changing ControlMaster to yes (or no) resolved the issue for me (somehow):

ansible.cfg:

[ssh_connection]
ssh_args = -C -o ControlMaster=yes -o ControlPersist=60s
resmo
2

The thread is old but the varied solutions keep coming.

In my case, the issue was that the Ansible script had modified the sudoers file in the Vagrant VM to add an entry for the vagrant group (%vagrant) after the existing entry for the vagrant user.

That was enough to cause the Ansible run to time out waiting for privilege escalation.

The solution was to force the sudoers entry for the vagrant group to be above the entry for the vagrant user.
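sudoers applies the last matching entry, so whichever rule comes later wins. A sketch of the ordering described above, with illustrative rules rather than the actual file contents:

# group rule first: may require a password
%vagrant ALL=(ALL) ALL
# user rule last: overrides the group rule for the vagrant user, so no password prompt
vagrant ALL=(ALL) NOPASSWD:ALL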

Glenn
1

Sometimes the setup phase takes more time on EC2 instances; you need to change the timeout value in ansible.cfg to something like timeout=40. This sets the timeout to 40 seconds.

Mr Kashyap
1

I fixed this error on my system; the cause was that I had forgotten I'd altered the Ansible config file:

sudo vim /etc/ansible/ansible.cfg 

Try commenting out the privilege escalation parameters that could be trying to sudo to root.

like so:

[privilege_escalation]
#become=True
#become_method=su
#become_user=root
#become_ask_pass=False
#become_exe="sudo su -"

The account I was trying to SSH in as did not have permission to become root.

DDrake
1

I am building secure VM images for AWS, QEMU and VBox on an isolated network with limited DNS support. Increasing the SSH timeout to 40 seconds had limited effect in my situation.

I am using Packer v1.5.5, Ansible v2.9.2 and OpenSSH v7.4p1

My solution was to change the UseDNS parameter in /etc/ssh/sshd_config to no.

I added the following lines to my RHEL/CentOS kickstart configuration, with great results.

%post
# Disable DNS lookups by sshd to address Ansible timeouts
perl -pi -e 's/^#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config
%end
Glenn Bell
1

I personally found another solution to this problem. I didn't have the root account credentials, so I couldn't use su root.

  • I first added the new hostname to /etc/hosts with the same IP address, 127.0.1.1.
  • Then I changed the hostname with the built-in hostname module.
  • Then I deleted the old hostname entry from /etc/hosts.

This gives a solution without the delay, and your playbook runs much faster.

Final result:

---
- name: Change hostname
  hosts: ubuntu
  tasks:
    - name: Add etc host new entry line
      become: true
      ansible.builtin.lineinfile:
        path: /etc/hosts
        insertafter: '^127\.0\.0\.1'
        regexp: '^127\.0\.1\.1'
        line: 127.0.1.1 {{ inventory_hostname }}

    - name: Change the hostname
      become: true
      ansible.builtin.hostname:
        name: "{{ inventory_hostname }}"

    - name: Delete old hostname
      become: true
      ansible.builtin.lineinfile:
        path: /etc/hosts
        regexp: '127\.0\.1\.1 (?!({{ inventory_hostname }}))'
        state: "absent"


Thisora
0

Check whether the problem is an old version of sudo on the destination server. Some old sudo versions do not have the -n option Ansible uses.
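A quick way to check on the destination server (standard sudo commands, nothing specific to this answer):

$ sudo -V | head -n 1    # print the sudo version
$ sudo -n true           # fails with a usage error if -n is not supported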

Jose