I have the following Vagrantfile:
Vagrant.configure(2) do |config|
  config.ssh.insert_key = true

  config.vm.define "dev" do |app|
    app.vm.provider "docker" do |d|
      d.image = "allansimon/allan-docker-dev-python"
      d.has_ssh = true
    end
    app.ssh.username = "vagrant"

    app.vm.provision "file", source: "~/.ssh/id_rsa", destination: ".ssh/id_rsa"
    app.vm.provision "permits-root-to-clone", type: "shell" do |s|
      s.inline = "cp /home/vagrant/.ssh/id_rsa /root/.ssh/id_rsa"
    end
    # if I put a shell provisioner here that clones the exact same repo as my
    # galaxy roles do, it works
app.vm.provision "ansible_local" do |ansible|
ansible.galaxy_role_file = "build_scripts/ansible/requirements.yml"
ansible.playbook = "build_scripts/ansible/bootstrap.yml"
end
end
end
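Since the clone runs non-interactively as root, the host key prompt is another suspect: a first ssh connection to gitlab.mydomain.com would normally ask to confirm the host key, which breaks a scripted git clone. A sketch of an extra provisioner that would rule that out (the ssh-keyscan step and the chmod are assumptions on my side, not something the Vagrantfile above currently does):

app.vm.provision "trust-gitlab-host", type: "shell" do |s|
  # tighten the copied key's permissions and pre-seed GitLab's host key
  # so root's non-interactive clone is never prompted
  s.inline = "chmod 600 /root/.ssh/id_rsa && " \
             "ssh-keyscan gitlab.mydomain.com >> /root/.ssh/known_hosts"
end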
The requirements.yml references some private Ansible roles that are git-cloned like this:
- src: git@gitlab.mydomain.com:ansible-roles/myrole.git
  scm: git
  version: 'master'
  name: myrole
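As a sanity check on my desktop, the key does grant read access to these repos (GIT_SSH_COMMAND needs git >= 2.3; this is just a verification command, not part of the build):

GIT_SSH_COMMAND='ssh -i ~/.ssh/id_rsa' \
  git ls-remote git@gitlab.mydomain.com:ansible-roles/myrole.git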
I'm injecting my desktop private key inside the Vagrant guest:

- it works in the shell provisioner
- it works if I vagrant ssh into the machine afterwards

but it does not work with the ansible_local provisioner, which fails with this error:
==> dev: Running provisioner: ansible_local...
dev: Running ansible-galaxy...
[WARNING]: - supervisord was NOT installed successfully: - command git clone
git@gitlab.mydomain.com:ansible-roles/myrole.git myrole failed in
directory /tmp/tmpQNgCTo (rc=128)
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
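Presumably the same failure can be reproduced by hand inside the guest, something like this (paths are my assumption: /vagrant is Vagrant's default synced-folder mount, and I believe the galaxy step runs as root here):

vagrant ssh
sudo -i
# replay the clone that ansible-galaxy attempts, with git tracing enabled
GIT_TRACE=1 git clone git@gitlab.mydomain.com:ansible-roles/myrole.git /tmp/myrole
# or replay the whole galaxy step
ansible-galaxy install -r /vagrant/build_scripts/ansible/requirements.yml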
Is there a way to force Ansible in Vagrant to use a specific private key?
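For example, would something along these lines be the way to do it? (untested sketch: galaxy_command is the documented Vagrant Ansible option for overriding the ansible-galaxy invocation, but the GIT_SSH_COMMAND prefix is my assumption about how to reach git through it)

app.vm.provision "ansible_local" do |ansible|
  ansible.galaxy_role_file = "build_scripts/ansible/requirements.yml"
  ansible.playbook = "build_scripts/ansible/bootstrap.yml"
  # untested: point git (as invoked by ansible-galaxy) at the injected key;
  # the rest is vagrant's default galaxy command pattern
  ansible.galaxy_command =
    "GIT_SSH_COMMAND='ssh -i /root/.ssh/id_rsa' " \
    "ansible-galaxy install --role-file=%{role_file} --roles-path=%{roles_path} --force"
end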