I am aware that similar questions have been addressed previously; see below for why they don't apply to my case. I have a piece of code that looks as follows:
int current_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &current_rank);
if (current_rank == ROOT_RANK) {
    ...
    #pragma omp for
    for (long i = 0; i < num; i++) {
        auto &choumbie = catalogue->get()[i];
        /* array choumbie is modified */
    }
    ...
}
and then I would like to synchronize the array 'choumbie' across all processes. I tried to implement it following this example and the documentation. So, right after if (current_rank == ROOT_RANK), I did:
int gsize;
double *rbuf; // receive buffer (doubles, to match sizeof(double) below)
auto &choumbie = catalogue->get();
MPI_Comm_size(comm, &gsize);
int send_count = num;
rbuf = (double*)malloc(gsize * send_count * sizeof(double));
MPI_Allgather(choumbie, send_count, MPI_DOUBLE, rbuf, send_count, MPI_DOUBLE, comm);
I don't think the array 'choumbie' that I want to synchronize is being passed correctly this way, but I also didn't find any other helpful example. It looks like the first argument has to be the memory address of the array to be sent, but that doesn't seem consistent with the example I linked above.
P.S.: num is the same for each rank.
This question was not helpful in my case because I would like to use MPI_Allgather (and C++, not Fortran). This one was also not helpful because I would like to avoid using MPI_Barrier.