Questions tagged [mpi-rma]

This tag should be used for questions concerning the one-sided communication mode of MPI, also known as MPI RMA.

MPI RMA (Remote Memory Access) is the one-sided communication model of MPI: ranks use get/put/accumulate operations to directly access designated parts of the memory of other ranks (so-called windows), instead of exchanging data with them via classical send/receive message passing.

26 questions
9
votes
2 answers

Asynchronous Finite Difference Scheme using MPI_Put

A paper by Donzis & Aditya suggests that it is possible to use a finite difference scheme that might have a delay in the stencil. What does this mean? An FD scheme might be used to solve the heat equation and reads (or some simplification of…
Thomas
  • 1,199
  • 1
  • 14
  • 29
6
votes
4 answers

Creating a counter that stays synchronized across MPI processes

I have quite a bit of experience using the basic comm and group MPI2 methods, and do quite a bit of embarrassingly parallel simulation work using MPI. Up until now, I have structured my code to have a dispatch node, and a bunch of worker nodes. …
MarkD
  • 4,864
  • 5
  • 36
  • 67
5
votes
1 answer

Segmentation Fault in Fortran program using RMA functions of MPI-2

The following short Fortran90 program crashes as long as it contains the MPI_GET call. Rank 1 tries to read a value from rank 0 and hangs in MPI_WIN_UNLOCK. Rank 0 crashes in MPI_BARRIER with a segmentation fault. I repeatedly checked the syntax…
ebo
  • 8,985
  • 3
  • 31
  • 37
3
votes
2 answers

MPI: How to use MPI_Win_allocate_shared properly

I would like to use a shared memory between processes. I tried MPI_Win_allocate_shared but it gives me a strange error when I execute the program: Assertion failed in file ./src/mpid/ch3/include/mpid_rma_shm.h at line 592: local_target_rank >=…
Reda94
  • 331
  • 4
  • 12
3
votes
1 answer

How to replicate the function of MPI_Accumulate in MPI-2+

I am learning the MPI one-sided communication introduced in MPI-2/MPI-3, and came across this online course page about MPI_Accumulate: MPI_Accumulate allows the caller to combine the data moved to the target process with data already present, such…
thor
  • 21,418
  • 31
  • 87
  • 173
2
votes
0 answers

Issues when using MPI_Win_create() and MPI_Get() functions

In MPI (MPICH) I am trying to use windows. I have a 3D grid topology and an additional communicator i_comm. MPI_Comm cartcomm; int periods[3]={1,1,1}, reorder=0, coords[3]; int dims[3]={mesh, mesh, mesh}; //mesh is size of each dimension …
Ana Khorguani
  • 896
  • 4
  • 18
2
votes
1 answer

Consistency of MPI_Fetch_and_op

I am trying to understand the MPI function `MPI_Fetch_and_op()` through a small example and ran into a strange behaviour I would like to understand. In the example the process with rank 0 is waiting till the processes 1..4 have each incremented the…
nando
  • 41
  • 7
2
votes
1 answer

MPI: Ensure an exclusive access to a shared memory (RMA)

I would like to know the best way to ensure exclusive access to a shared resource (such as a memory window) among n processes in MPI. I've tried MPI_Win_lock & MPI_Win_fence but they don't seem to work as expected, i.e.: I can see that…
Reda94
  • 331
  • 4
  • 12
2
votes
0 answers

MPI: MPI_Get not working

The following code creates a window in process 0 (master) and the other processes put some values in it and I'm trying to get the window of the master from other processes each time to print it but I'm getting totally confusing results. Here's the…
Reda94
  • 331
  • 4
  • 12
1
vote
1 answer

Is MPI_ACCUMULATE with MPI_REPLACE always a better option than MPI_PUT

I was going through the accumulate and atomic MPI RMA calls introduced in MPI-3. After reading, I found out that there is an MPI_REPLACE operator which can be used in MPI_Accumulate to perform functionality similar to that of MPI_Put. And…
1
vote
0 answers

MPI_WIN_ALLOCATE_SHARED and synchronization

I am trying to write an MPI shared-memory example, but every time I get some weird values. It's a 1D stencil, just computing the sum of the elements at positions i-1, i and i+1. I'm running this program on 2 nodes of 32 MPI processes each, with the domain size nx=64; the…
D. Lecas
  • 101
  • 2
1
vote
0 answers

MPI_Win_allocate() does not return

When I run the following code with mpirun -n 2 ./out it works with no problem but with mpirun -n 3 ./out MPI_Win_allocate() does not return. I checked this out by printing to the screen before and after MPI_Win_allocate(). Also, if I comment out…
Shibli
  • 5,879
  • 13
  • 62
  • 126
1
vote
0 answers

What's a practical limit on how much memory can be attached with MPI_Win_attach?

I noticed these bits from the MPI 3.1 standard: Advice to users. Attaching memory to a window may require the use of scarce resources; thus, attaching large regions of memory is not recommended in portable programs. Advice to implementors. A…
jjramsey
  • 1,131
  • 7
  • 17
1
vote
0 answers

Can master thread call MPI_Win_lock_all() once for all threads?

I am writing a hybrid MPI/OpenMP code with MPI-3 remote memory access feature. I am assuming MPI_THREAD_SERIALIZED is available. Can I do this? MPI_Win_lock_all(0, win); #pragma omp parallel { ... #pragma omp…
Abhishek
  • 87
  • 5
1
vote
1 answer

How to check if MPI one-sided communication has finished?

I am using the MPI_Raccumulate function, which is a one-sided communication from source to destination with a pre-defined aggregation function. I want to check whether all the MPI_Raccumulate calls have finished (the sender sent the data and the receiver received the…
syko
  • 3,477
  • 5
  • 28
  • 51