
If I want to list the id of a variable on each communicator, how could I do that? Below is an attempt to demonstrate the idea:

from mpi4py import MPI
comm = MPI.COMM_WORLD

obj = "I am an example. My ID is unique to each communicator."
mpi_id = 'rank %i has id %s'%(comm.rank, str(id(obj)))
comm.send(mpi_id, tag=11, dest=comm.rank)

mpi_id_list = []
for i in range(comm.size):
    mpi_id_list.append( comm.recv(source=i, tag=11))

print(mpi_id_list)
kilojoules
    The question is not clear to me. What is the purpose of the data structure you are trying to create? How do you define the contents of this *list*? Should the list be the same on each rank? Different? Only on master? Also your example code posts more `recv`s than `send`, so it cannot work. – Zulan Apr 08 '16 at 07:31
  • Yes, I am not sure how to implement a working version. The python `id` function reveals the identifier associated with an object. Objects will always have the same id on one communicator, but the id number is different on each communicator. I would like a list of the id associated with an object for each communicator. The code works up to the assignment of `mpi_id`, which is the form I want each list entry to have. – kilojoules Apr 08 '16 at 16:13
  • I think you are mistaking _MPI ranks / processes_ for _communicators_. The latter are the logical contexts, in which MPI communications happen, while the former are the actual communicating entities. – Hristo Iliev Apr 08 '16 at 18:49
  • @HristoIliev I am interested in your point. I want to have a deep understanding of mpi communicators in the context of python, which is why I posted this question. My understanding is that each communicator has an assigned rank. – kilojoules Apr 08 '16 at 19:47
  • Your understanding is not correct. Communicators in MPI are global contexts. Each _communicator_ has an associated _group_ and each entity within that group (basically a process) gets assigned a numeric ID - its _rank_. Communicators do not have ranks - they have one group of ranks each. – Hristo Iliev Apr 08 '16 at 22:15
  • So why does each communicator assign a unique id to python objects? – kilojoules Apr 08 '16 at 22:17
  • `id()` returns the ID of an object. Those are local to each MPI process and are guaranteed to be unique among different objects in the same process (`id()` actually returns the memory address of the object). There is no guarantee for uniqueness among objects residing in different processes. That it is so in your case is a pure coincidence and it depends heavily on the memory layout and various address randomisation strategies being enabled. – Hristo Iliev Apr 09 '16 at 08:28
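Hristo Iliev's point about `id()` can be seen in plain Python, with no MPI involved. The following is a minimal sketch (the string and the slice point are arbitrary choices for illustration): within one process the same object always reports the same id, while an equal but newly allocated object reports a different one.

```python
# id() labels an object only within the current interpreter process:
# in CPython it is the object's memory address, and it is unique only
# among objects that are alive at the same time in that process.
obj = "I am an example. My ID is unique to each communicator."

print(id(obj) == id(obj))   # True: same object, same id

# An equal but distinct object has a different id.
copy = obj[:8] + obj[8:]    # runtime concatenation allocates a new string
print(copy == obj)          # True: equal contents
print(copy is obj)          # False: different object, hence different id
```

Since each MPI process runs its own interpreter, there is no relationship at all between the ids reported by different ranks.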

1 Answer


In MPI, each comm.send(..., dest=x) must be matched by a comm.recv(...) executed by the process of rank x. Here, every process can send its message to rank 0, and rank 0 must receive all of them. Collecting one value from every process at a single root is the communication pattern that MPI provides as the collective operation gather (a reduction additionally combines the values with an operator such as a sum).

The following code can be executed on 4 processes by typing mpirun -np 4 python main.py:

from mpi4py import MPI
comm = MPI.COMM_WORLD

obj = "I am an example. My ID is unique to each communicator."
mpi_id = 'rank %i has id %s'%(comm.rank, str(id(obj)))
comm.send(mpi_id, tag=11, dest=0)

mpi_id_list = []
if comm.rank == 0:
    # rank 0 receives one message from every process, including itself
    for i in range(comm.size):
        mpi_id_list.append(comm.recv(source=i, tag=11))
    print(mpi_id_list)

# broadcasting the list
mpi_id_list = comm.bcast(mpi_id_list, root=0)

# now the list is the same on all processes
print("rank " + str(comm.rank) + " has list " + str(mpi_id_list))

Notice that this example makes use of the collective operation comm.bcast() to broadcast the resulting list to all processes. See https://mpi4py.scipy.org/docs/usrman/tutorial.html for mpi4py examples of the different collective operations. For instance, you may be tempted by the comm.allreduce() operation:

id_list = comm.allreduce([mpi_id])
print(id_list)
francis