
If I have an array A[100][100][100], how do I create a window for remote memory access to the six boundary subarrays (ghost cells), especially A[0][:][:] and A[99][:][:]? In MPI-1, I create a vector type to send/recv the ghost cells. In MPI-2 and MPI-3, do I need to expose the entire array, or only the ghost cells? Of course, the latter would be much better if possible.

ubc_ben

1 Answer


MPI RMA windows are contiguous areas of memory, and for performance reasons implementations may require that they be allocated specifically with MPI_ALLOC_MEM. The boundary cells on four of the six faces of a 3-D array are not contiguous in memory, and some implementations may also require that windows start on a page or other alignment boundary. Therefore you have to register a window that spans the whole array.

While it is technically possible to expose two separate windows for A[0][:][:] and A[99][:][:] without exposing any other part of the array (those two faces are contiguous slabs), this is simply not possible for A[:][0][:], A[:][99][:], and so on, because those faces are not contiguous in memory.

I would suggest that you allocate A with MPI_ALLOC_MEM (MPI_Alloc_mem in C/C++). You can then use the appropriate vector types with MPI_GET and MPI_PUT to easily access the remote halo cells, as well as the local cells that are to be copied over.

Hristo Iliev
  • Thank you for your reply. It's terrible that we cannot load/store the rest of the array locally while we put/get the ghost cells remotely, if we expose the whole array in the RMA window. I still prefer using Isend/Irecv. – ubc_ben Nov 08 '13 at 04:30
  • The semantics of MPI-3 RMA are much more relaxed than those of MPI-2 RMA; there are ways to do what you want. – Jeff Hammond Feb 22 '15 at 02:40