121

I need to copy a file between two remote nodes:

  • node A is a managed node where the file exists
  • node B is a managed node where the file should be copied

Please note that my control node, from which I run all my Ansible tasks, is neither of the above-mentioned nodes.

I have tried the following:

Using the scp command in Ansible's shell module

- hosts: machine2
  user: user2
  tasks:
    - name: Copy file from machine1 to machine2 
      shell: scp user1@machine1:/path-of-file/file1 /home/user2/file1

This approach just hangs and never finishes.
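
(Aside: the hang is typically scp waiting for an interactive password or host-key prompt that Ansible can never answer. A hedged sketch, reusing the paths from the question, that makes scp fail fast instead of hanging; `BatchMode` is a standard OpenSSH option:)

```yaml
- hosts: machine2
  user: user2
  tasks:
    - name: Copy file from machine1 to machine2 (fail fast instead of hanging)
      # BatchMode=yes forbids interactive prompts, so a missing key or
      # unknown host key produces an error instead of an endless hang.
      shell: scp -o BatchMode=yes user1@machine1:/path-of-file/file1 /home/user2/file1
```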

Using the fetch and copy modules

- hosts: machine1
  user: user1
  tasks:
    - name: copy file from machine1 to local
      fetch: 
        src: /path-of-file/file1 
        dest: /path-of-file/file1
    
- hosts: machine2
  user: user2
  tasks:
    - name: copy file from local to machine2
      copy: 
        src: /path-of-file/file1 
        dest: /path-of-file/file1

This approach throws me an error as follows:

error while accessing the file /Users//.ansible/cp/ansible-ssh-machine2-22-, error was: [Errno 102] Operation not supported on socket: u'/Users//.ansible/cp/ansible-ssh-machine2-22-'

How can I achieve this?

β.εηοιτ.βε
  • 33,893
  • 13
  • 69
  • 83
user3228188
  • 1,641
  • 3
  • 14
  • 19
  • 1. This is a handy feature to save network accesses when the control machine might be farther away; 2. Should be fixed now per https://github.com/ansible/ansible/pull/16756 (jctanner merged commit 0d94d39 into ansible:devel on Sep 23, 2016) – AnneTheAgile Jan 02 '18 at 17:54

9 Answers

129

As @ant31 already pointed out, you can use the synchronize module for this. By default, the module transfers files between the control machine and the current remote host (`inventory_hostname`), but that can be changed using a task's delegate_to directive (it's important to note that delegate_to belongs to the task, not to the module).

You can place the task on either ServerA or ServerB, but you have to adjust the direction of the transfer accordingly (using the mode parameter of synchronize).

Placing the task on ServerB

- hosts: ServerB
  tasks:
    - name: Transfer file from ServerA to ServerB
      synchronize:
        src: /path/on/server_a
        dest: /path/on/server_b
      delegate_to: ServerA

This uses the default mode: push, so the file gets transferred from the delegate (ServerA) to the current remote (ServerB).

This might look strange, since the task has been placed on ServerB (via hosts: ServerB). However, one has to keep in mind that the task is actually executed on the delegated host, which, in this case, is ServerA. So pushing (from ServerA to ServerB) is indeed the correct direction. Also remember that we cannot simply choose not to delegate at all, since that would mean that the transfer happens between the control node and ServerB.

Placing the task on ServerA

- hosts: ServerA
  tasks:
    - name: Transfer file from ServerA to ServerB
      synchronize:
        src: /path/on/server_a
        dest: /path/on/server_b
        mode: pull
      delegate_to: ServerB

This uses mode: pull to invert the transfer direction. Again, keep in mind that the task is actually executed on ServerB, so pulling is the right choice.

Florian Brucker
  • 9,621
  • 3
  • 48
  • 81
  • 11
    This is such a good answer it should be part of the [Ansible documentation](http://docs.ansible.com/ansible/synchronize_module.html). None of the examples there explain this in such a clear fashion. Thanks! – ssc Mar 20 '16 at 16:36
  • 2
    I have tried this in numerous ways, however, fails me on `Warning: Identity file /Users/myuser/.ssh/id_servers not accessible`. – orotemo Mar 29 '16 at 13:27
  • @orotemo: Without further information I can only guess, but that looks like a problem in your SSH setup. Please check whether you have configured SSH or Ansible to use the identity file given in the error message and whether that file exists and has the right permissions. – Florian Brucker Mar 29 '16 at 13:36
  • @FlorianBrucker: running `command: hostname` succeeds and reports under the inventory_hostname the remote hostname. all the above uses delegate_to of course. Thanks to anyone who can assist here! – orotemo Mar 29 '16 at 13:40
  • @FlorianBrucker oh and yes to both your questions :) (thought I wrote that) – orotemo Mar 29 '16 at 14:25
  • Took me hours to decipher this on the documentation and still couldn't understand a thing. Took me less than 5 minutes reading your answer. Thank you! – JohnnyQ Jun 11 '16 at 05:07
  • `pull` and `push` seem the wrong way round? e.g. if you're copying from A to B, and the task is running on B, surely it's **pull**, not push? (A query not a criticism, as there are comments on GitHub suggesting this module is quite "hacky") – William Turrell Oct 20 '16 at 12:13
  • 2
    @WilliamTurrell I've updated my answer to explain the transfer direction in more detail. The module is indeed a bit confusing. – Florian Brucker Oct 20 '16 at 13:33
  • 1
    Thanks. For anyone else having @orotemo's problem, the probable cause is that you don't have public-key access between servers A and B, or, as I found, you've set it up to work in only one direction – the wrong one. In the absence of a keypair in your .ssh directory on server A, ansible attempts to use the home directory of your local machine (which won't exist if it's, say, a Mac, and may have a different account name). – William Turrell Oct 20 '16 at 18:20
  • How do I specify that I want ask_pass for server A in the push case, i do not have certificate auth. since it is OSX machines... – David Karlsson Jul 04 '17 at 07:50
  • It failed to me, it looks like as in ServerA it was trying to use the ssh-key to login into ServerB. But that ssh-key only exists in the ansible-controller machine and not in ServerA.... Overall, this feels unreliable.. – zipizap Jul 25 '18 at 12:01
111

To copy files remote-to-remote, you can use the synchronize module with the `delegate_to: source-server` keyword:

- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote (from serverA to serverB)
      synchronize:
        src: /copy/from_serverA
        dest: /copy/to_serverB
      delegate_to: serverA

This playbook can be run from your machineC.
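
For this to work from machineC, both servers only need to be reachable from the control node's inventory; a minimal sketch (hostnames and addresses are placeholders):

```ini
# inventory (hypothetical)
serverA ansible_host=192.0.2.10 ansible_user=user1
serverB ansible_host=192.0.2.20 ansible_user=user2
```

Note that the actual rsync runs between the two remotes, so serverA also needs SSH access to serverB (several comments below hit authentication errors precisely because that leg was missing).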

ant31
  • 4,471
  • 4
  • 15
  • 20
  • good answer! Unfortunately I did not get it working in an Vagrant environment with multi VMs. Seems Vagrant does something special there. – therealmarv Mar 05 '15 at 06:56
  • It uses rsync, do you have it installed on vm? – ant31 Mar 05 '15 at 06:57
  • yes it is installed. Here is a log of the attempt to copy it between to Vagrant machines. I'm guessing Vagrant is causing issues with multi machines (key handling?!): https://gist.github.com/therealmarv/24cbb1098d9148c4eb02 – therealmarv Mar 05 '15 at 07:14
  • Looks like you can't connect to your delegate host. Try to execute a debug task directly on it( 192.168.217.121) or even just ssh. – ant31 Mar 05 '15 at 07:56
  • 2
    Beginning with Vagrant 1.7.x it uses different private keys depending on machine. See issue https://github.com/mitchellh/vagrant/issues/4967 Insert following line into the Vagrantfile `config.ssh.insert_key = false` to force Vagrant to use the ONE insecure_key for accessing all machines. But now I even do not get an error message (it waits forever). Also bug https://github.com/ansible/ansible/issues/7250 says it is not possible to copy from remote to remote. – therealmarv Mar 05 '15 at 11:30
  • 9
    This actually copies the files from serverB to serverA. If you want to copy them from serverA to serverB, use `mode=push` (or `delegate_to: serverB`, but not both). – Marius Gedminas Aug 18 '15 at 08:15
  • 2
    @MariusGedminas you are correct, `mode=push` should be used, but in this situation `delegate_to: serverB` cannot be used, because that would make `serverB` the source and destination. – Strahinja Kustudic Apr 19 '16 at 22:10
  • To be honest, this short explanation made me understand the `synchronize` module. It was confusing in the official doc which server was the src or the destination. Thanks! – JohnnyQ Jun 11 '16 at 05:02
  • @MariusGedminas by the way how do you specify `serverB` in this command? Since this was only delegated to `serverA` and no info about `serverB`? – JohnnyQ Jun 11 '16 at 05:03
  • @JohnnyQ `serverB` is in the play's hosts – Timur Aug 10 '16 at 12:02
  • ansible is very buggy on this and it doesn't work over ssh even with version 2.3 –  Aug 18 '17 at 08:08
  • Thanks for the `serverA` and `serverB` notation. The official documentation wasn't clear enough as @JohnnyQ mentioned. I was able to use your example to do a 1:1 synchronization. I.e.: 1 synchronize to n hosts in a serial manner (without making multiple plays) rather than the default behavior of doing the synchronize against all hosts. For more details: https://stackoverflow.com/questions/46352179/ansible-run-task-against-hosts-in-a-with-together-fashion/46353525#46353525 – Darrel Holt Sep 21 '17 at 21:53
  • It looks like the synchronize module was moved and needs to be installed from ansible galaxy. https://docs.ansible.com/ansible/latest/collections/ansible/posix/synchronize_module.html – brainbuz Oct 27 '20 at 22:47
4

If you need to sync files between two remote nodes via Ansible, you can use this:

- name: synchronize between nodes
  environment:
    RSYNC_PASSWORD: "{{ input_user_password_if_needed }}"
  synchronize:
    src: rsync://user@remote_server:/module/
    dest: /destination/directory/
    # rsync_opts is only needed if you want to filter files
    rsync_opts:
       - "--include=what_needed"
       - "--exclude=**/**"
    mode: pull
  delegate_to: "{{ inventory_hostname }}"

Note that on remote_server you need to start rsync in daemon mode. A simple rsyncd.conf example:

pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
log file = /var/log/rsync.log
port = port

[module]
path = /path/to/needed/directory/
uid = nobody
gid = nobody
read only = yes
list = yes
auth users = user
secrets file = /path/to/secret/file
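
For completeness: the daemon is started with `rsync --daemon --config=/path/to/rsyncd.conf`, and the `secrets file` referenced above is a plain `user:password` list that must not be world-readable. A minimal sketch (the user name matches the `auth users` line above; the password is a placeholder):

```
# /path/to/secret/file -- chmod 600, owned by the daemon user
user:rsync_password_here
```

The `RSYNC_PASSWORD` environment variable in the play must then match this entry.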
CrusaderX
  • 131
  • 1
  • 2
  • 7
3

I was able to solve this by using local_action to scp the file from machineA to machineC and then copying the file to machineB.

user3228188
3

If you want to use rsync with a custom user and a custom ssh key, you need to pass the key in the rsync options.

---
 - name: rsync
   hosts: serverA,serverB,serverC,serverD,serverE,serverF
   gather_facts: no
   vars:
     ansible_user: oracle
     ansible_ssh_private_key_file: ./mykey
     src_file: "/path/to/file.txt"
   tasks:
     - name: Copy Remote-To-Remote from serverA to server{B..F}
       synchronize:
           src:  "{{ src_file }}"
           dest: "{{ src_file }}"
           rsync_opts:
              - "-e ssh -i /remote/path/to/mykey"
       delegate_to: serverA
Sasha Golikov
  • 638
  • 5
  • 11
3

You can use delegate_to with scp too:

- name: Copy file to another server
  become: true
  shell: "scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null admin@{{ inventory_hostname }}:/tmp/file.yml /tmp/file.yml"
  delegate_to: other.example.com

Because of the delegation, the command is run on the other server, which scps the file to itself.

Kris
  • 19,188
  • 9
  • 91
  • 111
1

A simple way is to use the copy module to transfer the file from one server to another.

Here is the playbook:

---
- hosts: machine1  # the machine the file will be transferred from
  tasks:
  - name: transfer data from machine1 to machine2
    copy:
      src: /path/of/machine1
      dest: /path/of/machine2
    delegate_to: machine2  # the machine that receives the file
jpp
  • 159,742
  • 34
  • 281
  • 339
  • This floated up during a session today, but neither of us could replicate this using ansible 2.6.4. Putting this task into a playbook with creating a file on machine1 first and listing the directory afterwards failed with "Could not find or access '/tmp/source-49731914' on the Ansible Controller." Creating an empty file on the host machine solved it, but did a copy host>machine2. Maybe there was a buggy behavior in some version? – Stephan B Nov 08 '18 at 14:54
0

As of 2021 you need to install the collection first:

ansible-galaxy collection install ansible.posix

And then use:

- name: Synchronize two directories on one remote host.
  ansible.posix.synchronize:
    src: /first/absolute/path
    dest: /second/absolute/path
  delegate_to: "{{ inventory_hostname }}"
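
The same collection module still covers the remote-to-remote case from the question; a sketch, assuming the serverA/serverB names used in the other answers:

```yaml
- hosts: serverB
  tasks:
    - name: Copy from serverA to serverB with the collection module
      ansible.posix.synchronize:
        src: /path/on/server_a
        dest: /path/on/server_b
      delegate_to: serverA
```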

Read more:

https://docs.ansible.com/ansible/latest/collections/ansible/posix/synchronize_module.html

Checked on:

ansible --version                          
ansible 2.10.5
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/daniel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  executable location = /sbin/ansible
  python version = 3.9.1 (default, Dec 13 2020, 11:55:53) [GCC 10.2.0]
Daniel
  • 7,684
  • 7
  • 52
  • 76
0

When transferring system secrets from machine1 to machine2 you may not have direct access between them, so solutions involving delegate_to will fail. This happens, for example, when you keep your private key on your Ansible control node and only your public key in ~/.ssh/authorized_keys on the Ansible user accounts of machine1 and machine2. Instead, you can pipe a file or directory from one machine to the other via ssh through the control node, using password-free sudo for remote privilege escalation:

- name: "Copy /etc/secrets directory from machine1 to machine2"
  delegate_to: localhost
  shell: |
    ssh machine1 sudo tar -C / -cpf - etc/secrets | ssh machine2 sudo tar -C / -xpf -
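
The tar pipeline itself can be sanity-checked locally, without the two ssh hops; a minimal sketch using temporary directories:

```shell
#!/bin/sh
# Demonstrate the tar pipe locally: copy a directory tree from $src to $dst
# while preserving permissions. Shape matches the remote version above,
# with the ssh wrappers simply removed.
set -e
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/etc/secrets"
printf 'munge-key\n' > "$src/etc/secrets/key"
chmod 600 "$src/etc/secrets/key"
# Equivalent of: ssh A sudo tar -C / -cpf - etc/secrets | ssh B sudo tar -C / -xpf -
tar -C "$src" -cpf - etc/secrets | tar -C "$dst" -xpf -
cat "$dst/etc/secrets/key"
```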

For example, to set up an ad hoc computing cluster using the MUNGE daemon for authentication you can use the following to copy the credentials from the head node to the workers.

setup_worker.yml

- name: "Install worker packages"
  apt:
    name: "{{ packages }}"
  vars:
    packages:
    - munge
    # ...and other worker packages...
  become: true
- name: "Copy MUNGE credentials from head to {{ host }}"
  delegate_to: localhost
  shell:
    ssh head.domain.name sudo tar -C / -cpf - etc/munge | ssh {{ ansible_facts["nodename"] }} sudo tar -C / -xpf -
  become: false
  when: ansible_facts["nodename"] != "head.domain.name"
- name: "Restart MUNGE"
  shell: |
    systemctl restart munge
  become: true

This is run as `ansible-playbook -e host=machine1 setup_worker.yml`. Since the ansible user is unprivileged, the remote system tasks need `become: true`. The copy task does not need privilege escalation, because the controller is only used to set up the pipeline; the escalation happens via the sudo inside each ssh command. Note that the variable `host` contains the host pattern given on the command line, not the worker being initialized; the task instead uses `ansible_facts["nodename"]`, which will be the fully-qualified domain name of the current worker (assuming the worker is properly configured). The `when` clause prevents the head node from trying to copy the directory onto itself.

Neapolitan
  • 2,101
  • 9
  • 21