
I have a shared memory region used by multiple processes; these processes are created using MPI.

Now I need a mechanism to control the access of this shared memory.

I know that named semaphores and flock can be used to do this, but I wanted to know whether MPI provides any special locking mechanism for shared-memory usage.

I am working on C under Linux.

nav_jan

2 Answers


MPI actually does provide support for shared memory now (as of version 3.0). You might try looking at the One-sided communication chapter (http://www.mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf) starting with MPI_WIN_ALLOCATE_SHARED (11.2.3). To use this, you'll have to make sure you have an implementation that supports it. I know that the most recent versions of both MPICH and Open MPI work.
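A minimal sketch of what this can look like, assuming all ranks run on a single node (for brevity this uses `MPI_COMM_WORLD` directly; in general you would first carve out a node-local communicator with `MPI_Comm_split_type` and `MPI_COMM_TYPE_SHARED`):

```c
/* Sketch: one int in an MPI-3 shared-memory window, serialized with a lock.
   Build/run with an MPI-3 implementation, e.g.:
     mpicc shared.c -o shared && mpirun -n 4 ./shared */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, *shared;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Rank 0 allocates the int; the other ranks allocate 0 bytes and
       then look up rank 0's base address. */
    MPI_Aint size = (rank == 0) ? sizeof(int) : 0;
    MPI_Win_allocate_shared(size, sizeof(int), MPI_INFO_NULL,
                            MPI_COMM_WORLD, &shared, &win);
    if (rank != 0) {
        MPI_Aint qsize;
        int disp;
        MPI_Win_shared_query(win, 0, &qsize, &disp, &shared);
    } else {
        *shared = 0;
    }
    MPI_Barrier(MPI_COMM_WORLD);

    /* An exclusive lock on rank 0's window serializes access to the int.
       Inside the epoch we can use a plain load/store, no MPI_Put needed.
       (This relies on the unified memory model; a very conservative
       version would add MPI_Win_sync calls.) */
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
    *shared += 1;
    MPI_Win_unlock(0, win);

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0) {
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        printf("final value: %d\n", *shared);  /* number of ranks */
        MPI_Win_unlock(0, win);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```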

Wesley Bland
  • Can you share an example of how it works? I read the PDF you shared, but it is not very clear how to use the MPI_Win* functions. I also saw http://mpi.deino.net/mpi_functions/MPI_Win_lock.html but it is not very helpful either. I just need to share an `int` across processes such that any process can read/write that integer. Also, are there any access-control mechanisms for this type of shared memory that MPI provides? Thanks! – nav_jan Jun 14 '13 at 17:17
  • To share data across multiple processes with RMA, you'll need to create an MPI window using MPI_Win_allocate or MPI_Win_allocate_shared. Then you can use MPI_Put and MPI_Get to access the data. You'll need to add the synchronization functions MPI_Win_lock and MPI_Win_unlock around your data accesses to ensure proper synchronization. I don't think I can post too many links here, but if you Google for "MPI RMA Tutorial", you should be able to find some slides that discuss how to use it. – Wesley Bland Jun 14 '13 at 17:46
  • One thing to note is that the shared memory version of RMA is new to MPI-3 so if you're looking at tutorials and documentation, check the date to see if it's talking about distributed memory RMA (pre-version 2.0) or distributed and shared memory RMA (version 3.0). Version 3.0 of MPI was just published in September 2012 so it isn't ubiquitous yet. All of the old tutorials still apply, but you might need to add a few things to use shared memory. – Wesley Bland Jun 14 '13 at 17:49
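Putting the pattern from those comments together, here is a hedged sketch of the lock/get/put approach for a single shared int. The window layout and the choice of rank 0 as the counter's home are my own illustration, not from the answer; since it uses `MPI_Win_allocate` rather than the shared-memory variant, it also works across nodes:

```c
/* Sketch: a one-int counter on rank 0, accessed via MPI_Get/MPI_Put
   inside an exclusive-lock epoch (MPI-3 passive-target RMA). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, *base, val;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank exposes one int; rank 0's copy serves as the counter. */
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    *base = 0;
    MPI_Barrier(MPI_COMM_WORLD);

    /* Lock rank 0's window exclusively, read-modify-write, unlock.
       The exclusive lock is what keeps the increments from racing. */
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
    MPI_Get(&val, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
    MPI_Win_flush(0, win);   /* complete the get before using val */
    val += 1;
    MPI_Put(&val, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
    MPI_Win_unlock(0, win);

    /* Note: for a plain atomic increment, MPI-3 also offers
       MPI_Fetch_and_op with MPI_SUM, which avoids the get/put pair. */

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0) printf("counter = %d\n", *base);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```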

No, MPI itself doesn't provide any support for shared memory. In fact, MPI is not designed around shared memory: a program written with MPI is supposed to scale to a large number of processors, and at that scale the processors do not share memory.

However, it often happens that small groups of processors within that large set do have shared memory. To make use of that shared memory, OpenMP is typically used.

OpenMP is very simple. I strongly suggest you learn it.
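For completeness, a minimal sketch of the OpenMP approach this answer recommends: a critical section protecting a shared int between threads of one process (compile with `-fopenmp`; the details here are illustrative, not from the answer):

```c
/* Sketch: serializing access to a shared int with OpenMP.
   Build with e.g.: gcc -fopenmp omp_counter.c -o omp_counter */
#include <omp.h>
#include <stdio.h>

int main(void) {
    int counter = 0;

    #pragma omp parallel num_threads(4)
    {
        /* The critical section lets only one thread at a time
           execute the increment, so no updates are lost. */
        #pragma omp critical
        counter += 1;
    }

    printf("counter = %d\n", counter);  /* one increment per thread */
    return 0;
}
```

Note that this only coordinates threads within a single process; it does not replace a cross-process mechanism such as the MPI-3 windows discussed in the other answer.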

Shahbaz
  • Thanks for your answer! I did some more research and came across http://mpi.deino.net/mpi_functions/MPI_Win_lock.html ; I am still working on it (trying to understand the MPI_Win* functions). Do you think this solves the problem I have here? – nav_jan Jun 14 '13 at 07:25
  • I generally wouldn't recommend counting on shared memory with MPI, since then you can't actually run the program on Linux clusters, supercomputers, etc., which MPI is meant for. Nevertheless, it seems like those functions are relevant to what you need. – Shahbaz Jun 14 '13 at 07:37
  • +1. Thanks for your advice. Certainly I need to do more study before I can start on implementation; will try to update my findings here. – nav_jan Jun 14 '13 at 07:45