I am aware that similar questions have been addressed previously; see below for why they don't apply to my case. I have a piece of code that looks as follows:

int current_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &current_rank);

if (current_rank == ROOT_RANK) {
    ...
    #pragma omp for
    for (long i = 0; i < num; i++) {
        auto &choumbie = catalogue->get()[i];
        /* array choumbie is modified */
    }
    ...
}

and then I would like to synchronize the array 'choumbie' over all processes. I tried to implement it following this example and the documentation. So, right after the if (current_rank == ROOT_RANK) block, I did:

  int gsize;
  double *rbuf; // address of receive buffer
  auto &choumbie = catalogue->get();

  MPI_Comm_size(comm, &gsize);
  int send_count = num;
  rbuf = (double*)malloc(gsize * send_count * sizeof(double));
  MPI_Allgather(choumbie, send_count, MPI_DOUBLE, rbuf, send_count, MPI_DOUBLE, comm);

I don't think the array 'choumbie' that I want to synchronize is passed correctly this way, but I also didn't find any other helpful example. It looks like the first argument has to be the memory address of the array to be sent, but that doesn't seem consistent with the example I linked above.

P.S.: num is the same for each rank.
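
For completeness, here is a minimal, self-contained sketch of what I am trying to do, assuming choumbie decays to a contiguous buffer of num doubles on every rank (the names and values below are placeholders, not my actual code):

    #include <mpi.h>
    #include <vector>
    #include <cstdio>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, gsize;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &gsize);

        const int num = 4;                  // same on every rank
        std::vector<double> choumbie(num);  // stand-in for catalogue->get()
        for (int i = 0; i < num; i++)
            choumbie[i] = rank * 100.0 + i; // each rank fills its own values

        // Each rank sends num doubles; rbuf receives num doubles per rank.
        std::vector<double> rbuf(gsize * num);
        MPI_Allgather(choumbie.data(), num, MPI_DOUBLE,
                      rbuf.data(), num, MPI_DOUBLE, MPI_COMM_WORLD);

        if (rank == 0)
            for (int i = 0; i < gsize * num; i++)
                std::printf("rbuf[%d] = %g\n", i, rbuf[i]);

        MPI_Finalize();
        return 0;
    }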

This question was not helpful in my case, because I would like to use MPI_Allgather (and I am working in C++, not Fortran). Also, this one was not helpful, because I would like to avoid using MPI_Barrier.

Suyama87
  • Is "num" the same on each rank? "rbuf" is an integer array but you are communicating using the type MPI_DOUBLE. You might need to allocate "rbuf" with the global sum of "num" using an allreduce function. – wvn Jun 25 '20 at 16:59
  • Yes, num is the same. I have updated the post accordingly. Can you please provide a code example of what you mean? Wrt rbuf, that was a mistype, it's a double, also fixed. – Suyama87 Jun 25 '20 at 17:02
  • I missed something in your original code, you will not need a call to Allreduce. Code to follow. – wvn Jun 25 '20 at 17:11
  • What do you mean by synchronise the array? If you want the copy at `ROOT_RANK` to be distributed to all the other ranks, then simply use `MPI_Bcast` (see the sketch after these comments). With `MPI_Allgather` you are gathering local copies of `choumbie` and concatenating them into one large array in `rbuf`. Or is `choumbie` block-distributed? – Hristo Iliev Jun 25 '20 at 18:59
  • I would like to gather the information from all ranks into `choumbie`. So irrespective of how the processes have (or have not) modified the array, I would like the final `choumbie` to reflect the modifications from all ranks. But for anything you propose, please provide code examples, because the problem is that I don't understand the syntax in my context in the first place. – Suyama87 Jun 25 '20 at 20:16
  • Sorry, I really don't get what you need. It is either that you have chunks of a big array spread among the nodes and you want to collect those chunks after modifying them - easy, done with `MPI_Allgather`. Or you have multiple copies of the same array and want to merge the local modifications - not that easy at all, unless the modifications are in contiguous non-overlapping regions. Judging by the accepted answer, yours is the former case. – Hristo Iliev Jun 26 '20 at 14:24
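
For reference, the MPI_Bcast alternative mentioned in the comments would look roughly like this; it is a sketch under the assumption that only ROOT_RANK modifies choumbie and its copy should simply overwrite every other rank's copy:

    // Root's num doubles overwrite choumbie on all other ranks.
    MPI_Bcast(choumbie, num, MPI_DOUBLE, ROOT_RANK, MPI_COMM_WORLD);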

1 Answer

So long as num is the same on each rank, this is close. It depends on what catalogue->get() gives you; I am going to assume that it is an integer array. You should simply need:

MPI_Allgather(choumbie, send_count, MPI_INT, rbuf, send_count, MPI_INT, comm);
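
For context, a sketch of the supporting allocation under that integer assumption, reusing the variable names from the question:

    int gsize;
    MPI_Comm_size(comm, &gsize);
    // Make room for send_count ints from each of the gsize ranks.
    int *rbuf = (int*)malloc(gsize * send_count * sizeof(int));
    // ... MPI_Allgather call as above ...
    free(rbuf);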
wvn