
Consider the following fragment of OpenMP code, which transfers private data between two threads using an intermediate shared variable:

#pragma omp parallel shared(x) private(a,b)
{
    ...
   a = somefunction(b);
   if (omp_get_thread_num() == 0) {
      x = a;
   }
}
#pragma omp parallel shared(x) private(a,b)
{
  if (omp_get_thread_num() == 1) {
    a = x;
  }
  b = anotherfunction(a);
  ...
}

What I would need (in pseudocode) is to transfer private data from one process to another using a single-sided message-passing library. Any ideas?

Hristo Iliev
Manolete
  • your code samples aren't right, it's two *consecutive* parallel regions. To share data, use memory (with locks/mutexes), not MPI – Anycorn May 07 '11 at 17:34
  • They are not consecutive anyway, but the idea is to achieve the same functionality with single-sided communication, not with locks. – Manolete May 07 '11 at 17:47
  • Your code is only correct because of the implicit memory barrier associated with the join at the end of a parallel region. If you fuse the parallel regions, your code needs a fence/flush to be well-defined. And since join also implies a barrier, there is no point to this code anyways. You'll get the variable sharing and the synchronization for free already. See the Parallel Research Kernels synch_p2p OpenMP implementation for a proper message passing example in OpenMP. – Jeff Hammond Sep 07 '15 at 15:22

1 Answer


This is possible, but there's a lot more "scaffolding" involved -- after all, you are communicating data between potentially completely different computers.

The coordination for this sort of thing is done with windows of data that are accessible from other processes, and with lock/unlock operations that coordinate access to that data. The locks aren't really locks in the sense of mutexes; they are more like synchronization points that bracket access to the window.
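In outline, and in the pseudocode the question asks for, the pattern looks like this (a minimal sketch; names like local_buf are placeholders, and the full working C example is below):

    on every rank:
        create a window (MPI_Win_create), exposing memory on the receiver

    on the sender:
        lock the target's window      # begin an access epoch
        put local_buf into the window # one-sided; the target does not participate
        unlock the target's window    # end the epoch; data is now visible there

    on the receiver:
        poll (or otherwise synchronize) until the flag/data has arrived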

I don't have time right now to explain this in the detail I'd like, but below is an example of using MPI-2 to do something like shared-memory flagging on a system that doesn't have shared memory:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "mpi.h"

int main(int argc, char** argv)
{
    int rank, size, *a, geta;
    int x;
    int ierr;
    MPI_Win win;
    const int RCVR=0;
    const int SENDER=1;

    ierr = MPI_Init(&argc, &argv);
    ierr |= MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    ierr |= MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (ierr) {
        fprintf(stderr,"Error initializing MPI library; failing.\n");
        exit(-1);
    }

    if (rank == RCVR) {
        MPI_Alloc_mem(sizeof(int), MPI_INFO_NULL, &a);
        *a = 0;
    } else {
        a = NULL;
    }

    MPI_Win_create(a, 1, sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == SENDER) {
        /* Lock the receiver's window */
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, RCVR, 0, win);

        x = 5;

        /* put 1 int (from &x) at displacement 0 in rank RCVR's window "win" */
        MPI_Put(&x, 1, MPI_INT, RCVR, 0, 1, MPI_INT, win);

        /* Unlock the receiver's window; the put is complete after this */
        MPI_Win_unlock(RCVR, win);
        printf("%d: My job here is done.\n", rank);
    }

    if (rank == RCVR) {
        for (;;) {
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, RCVR, 0, win);
            MPI_Get(&geta, 1, MPI_INT, RCVR, 0, 1, MPI_INT, win);
            MPI_Win_unlock(RCVR, win);

            if (geta == 0) {
                printf("%d: a still zero; sleeping.\n",rank);
                sleep(2);
            } else
                break;
        }
        printf("%d: a now %d!\n", rank, geta);
        printf("a = %d\n", *a);
    }

    MPI_Win_free(&win);
    if (rank == RCVR) MPI_Free_mem(a);
    MPI_Finalize();

    return 0;
}
Jonathan Dursi