I have a client-server system in C++ using MPI (with ports). It is running well and doing what I intend it to do.

I recently read about remote memory access (RMA) in MPI using MPI_Win memory windows. I'm wondering whether it is possible to build a system similar to client-server using RMA. (Let's say the synchronization between clients accessing the same chunk of memory is handled somehow.)

I would like to create a window on the server and make the clients access memory through this window.

Does someone already have experience with this model? Any comments are welcome.

AdityaG
  • I wrote [this super simple example](http://stackoverflow.com/a/32646142/5239503) a while ago, which uses one-sided MPI communications. Although it uses an intra-communicator, it might give you a flavour of what can be done with these. – Gilles Oct 15 '15 at 11:27

1 Answer

Creation of RMA windows is a collective operation that involves the process group of an intracommunicator. To make it work with an intercommunicator, you must first merge the two process groups via MPI_INTERCOMM_MERGE and then use the resultant intracommunicator for RMA operations. Note that doing so removes part of the insulation benefits that intercommunicators provide.
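To make the suggested approach concrete, here is a minimal server-side sketch under the assumptions of the question (a port-based connection, error handling omitted); the buffer name and sizes are illustrative, not part of the original answer:

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    char port[MPI_MAX_PORT_NAME];
    MPI_Open_port(MPI_INFO_NULL, port);  // port name is passed to clients out of band

    MPI_Comm intercomm;
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &intercomm);

    // Window creation is collective on an *intra*communicator, so merge
    // the two groups first. high = 0 orders the server group first.
    MPI_Comm merged;
    MPI_Intercomm_merge(intercomm, /*high=*/0, &merged);

    // Expose a buffer on the server. Clients call MPI_Win_create too
    // (it is collective) but with size 0, contributing no memory.
    std::vector<double> buf(1024, 0.0);
    MPI_Win win;
    MPI_Win_create(buf.data(), buf.size() * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, merged, &win);

    // ... clients can now target this window with MPI_Put / MPI_Get ...

    MPI_Win_free(&win);
    MPI_Comm_free(&merged);
    MPI_Comm_disconnect(&intercomm);
    MPI_Close_port(port);
    MPI_Finalize();
}
```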

Hristo Iliev
    In the documentation of MPICH and Open MPI, there is no mention that the communicator must be an intracommunicator — it's just another MPI_Comm. @Hristo – AdityaG Oct 15 '15 at 16:12
    So is written in [the MPI standard](http://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf), Section 11.2: _"MPI provides the following window initialization functions: `MPI_WIN_CREATE`, `MPI_WIN_ALLOCATE`, `MPI_WIN_ALLOCATE_SHARED`, and `MPI_WIN_CREATE_DYNAMIC`, **which are collective on an intracommunicator.**"_ – Hristo Iliev Oct 15 '15 at 17:22
    I'm heavily involved in MPI Forum pertaining to RMA and @HristoIliev is exactly right. – Jeff Hammond Oct 15 '15 at 17:25
  • @HristoIliev ... yes ... I experimented yesterday with an intercommunicator and it did fail, so you are correct. I will try what you suggested. – AdityaG Oct 16 '15 at 07:54
  • @HristoIliev .. Your suggestion worked. I could communicate between server and client with MPI_Win, thank you for the help. But I tried to look up what "insulation benefits" I will lose in the merge. Can you point me in a direction to find these benefits of intercommunicators? – AdityaG Oct 16 '15 at 09:32
  • @AdityaG, what I mean is that once both the server and the connected client are in the same intracommunicator, you must make sure that both are aware and play nice in that respect, e.g. when it comes to collective operations that now span both programs, in order to prevent possible deadlocks. – Hristo Iliev Oct 16 '15 at 10:41
  • @HristoIliev I thought about this. To avoid such a condition I am thinking of making a copy of the initial communicators in server and client and using those for all internal communication (only the window will be created using the merged communicator). But my confusion is about which ranks (from the newly created intracomm) to provide to MPI_Get and MPI_Put in order to access data from the client processes. – AdityaG Oct 16 '15 at 12:36
  • Use `MPI_Comm_remote_group` to obtain the remote process group of the intercommunicator. Then use `MPI_Group_translate_ranks` to translate those ranks into ranks in the joined intracommunicator. – Hristo Iliev Oct 16 '15 at 14:25
  • FWIW I agree with you about insulation benefits. But the issue with collectives is an important implementation concern. We got a request for this feature already but there wasn't a clear use case. Frankly, if you want asynchronous communication in this distributed model, you're better off with send and (m)probe-(m)recv. The overhead is worth it if your ultimate goal is to remain loosely coupled. – Jeff Hammond Oct 18 '15 at 19:49
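The rank translation suggested in the comments above can be sketched as follows; the helper name is hypothetical, and `intercomm`/`merged` are assumed to come from a prior `MPI_Intercomm_merge` as in the answer:

```cpp
#include <mpi.h>
#include <vector>

// Returns, for each process on the remote side of 'intercomm', its rank
// in the merged intracommunicator. Those ranks are the ones to pass as
// target_rank to MPI_Get / MPI_Put on a window created over 'merged'.
std::vector<int> remote_ranks_in_merged(MPI_Comm intercomm, MPI_Comm merged) {
    MPI_Group remote_grp, merged_grp;
    MPI_Comm_remote_group(intercomm, &remote_grp);  // group of the other side
    MPI_Comm_group(merged, &merged_grp);

    int n;
    MPI_Group_size(remote_grp, &n);
    std::vector<int> in(n), out(n);
    for (int i = 0; i < n; ++i) in[i] = i;  // ranks 0..n-1 within the remote group

    // Translate remote-group ranks into ranks in the merged intracommunicator.
    MPI_Group_translate_ranks(remote_grp, n, in.data(), merged_grp, out.data());

    MPI_Group_free(&remote_grp);
    MPI_Group_free(&merged_grp);
    return out;
}
```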