
In my MPI program, each process has (or works with) a block of data:

char *datablock;

The blocks are of similar but not identical size.

What is the best way (which functions to use, and how) to distribute these blocks from every process to every other process? In the end I want each process to have (maybe) an array of all blocks:

char **blockarray;

so that

blockarray[i] // for i in [0 ... number_of_processes-1]

is the block that originally belonged to the i-th process. But it is not the order that matters: "i" does not have to be the process rank, and the order can differ on each process (if that is faster)! I just want the fastest way to get every block onto every process.

Oliver

1 Answer


You should use `MPI_Allgatherv`. Have a look at the documentation here: http://mpi.deino.net/mpi_functions/MPI_Allgatherv.html

With this function you can distribute data from each process to all other processes.

You don't need your `char *datablock;`, you just keep the `char **blockarray;`. Each time you need to synchronize the data, you call `MPI_Allgatherv`. Pseudo code:

id = process_id
recvcounts = [length(blockarray[0]), length(blockarray[1]), ...]
displs = [0, recvcounts[0], recvcounts[0] + recvcounts[1], ...]

MPI_Allgatherv(blockarray[id], recvcounts[id], sendtype,
               blockarray[0], recvcounts, displs, recvtype, comm);
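
To make this concrete, here is a minimal, self-contained C sketch of this approach (names such as mylen and flat are mine for illustration, not from the question): each process first publishes its block length with MPI_Allgather, the displacements are computed from those lengths, and the blocks are then gathered into one contiguous buffer that blockarray[] points into.

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Local block; stands in for your datablock (size chosen arbitrarily). */
    int mylen = 100 + rank;
    char *datablock = malloc(mylen);

    /* 1. Every process learns the length of every block. */
    int *recvcounts = malloc(nprocs * sizeof(int));
    MPI_Allgather(&mylen, 1, MPI_INT, recvcounts, 1, MPI_INT, MPI_COMM_WORLD);

    /* 2. Displacements: block i starts right after blocks 0 .. i-1. */
    int *displs = malloc(nprocs * sizeof(int));
    int total = 0;
    for (int i = 0; i < nprocs; i++) {
        displs[i] = total;
        total += recvcounts[i];
    }

    /* 3. Gather all blocks back to back into one flat buffer. */
    char *flat = malloc(total);
    MPI_Allgatherv(datablock, mylen, MPI_CHAR,
                   flat, recvcounts, displs, MPI_CHAR, MPI_COMM_WORLD);

    /* 4. blockarray[i] now points at the block that came from rank i. */
    char **blockarray = malloc(nprocs * sizeof(char *));
    for (int i = 0; i < nprocs; i++)
        blockarray[i] = flat + displs[i];

    free(blockarray); free(flat); free(displs); free(recvcounts); free(datablock);
    MPI_Finalize();
    return 0;
}

Gathering the lengths first costs one extra MPI_Allgather, but it guarantees that every rank computes identical recvcounts and displs arrays.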
jonie83
  • Thanks, this looks like what I have been searching for. But one question: do I need to malloc the other blocks before using Allgatherv to "fill" those "empty" blocks? Normally, on the i-th process only `blockarray[i]` would be an allocated data block; every other entry would be pointing to NULL in my case. – Oliver Sep 19 '14 at 10:15
  • Yes, you need to allocate the memory before you use `MPI_Allgatherv`, otherwise you will get a segmentation fault. – jonie83 Sep 19 '14 at 11:06
  • Though hinted at by the way `displs` is initialised, it is not apparent that `blockarray[]` should hold pointers to consecutively laid-out regions of memory with no gaps between them, e.g. pointers inside one big flat block. Also, using the same memory area for both sending and receiving (`blockarray[id]`) is explicitly forbidden by the MPI standard; there is a special in-place mode (`MPI_IN_PLACE`) for such cases, and it is described in the manual page you have linked to. – Hristo Iliev Sep 19 '14 at 12:53
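
A short sketch of the `MPI_IN_PLACE` variant mentioned in the last comment, reusing the rank, datablock, flat, recvcounts and displs variables from the earlier sketch (an assumption about that layout, not a complete program): each process copies its own block into its slot of the flat buffer and then passes MPI_IN_PLACE as the send buffer, so the send and receive regions no longer overlap.

#include <string.h>   /* for memcpy */

/* Own block goes into its slot of the flat buffer first ... */
memcpy(flat + displs[rank], datablock, recvcounts[rank]);

/* ... then MPI_IN_PLACE tells MPI_Allgatherv to take that slot as the
   send data; the send count and send type are ignored in this mode. */
MPI_Allgatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
               flat, recvcounts, displs, MPI_CHAR, MPI_COMM_WORLD);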