
I'm trying to use MPI to rewrite the parallelization routine for a simulation package I'm using. I've been having trouble implementing a specific feature, so I'll illustrate my problem with a simpler example that shows what I'm trying to do.

Basically, I'm trying to have a counter that is shared by all MPI processes. Every time a process increments the counter, it would broadcast the new value to everyone else, so that each process has an up-to-date copy when it wants to increment the counter in turn. I understand this would be easy to do with OpenMP and shared memory, but I'm wondering if there's a way to make it work with MPI across more than a single node (for example, in a 500-core simulation on a supercomputer).

I've tried pretty much every combination of MPI_Bcast, MPI_Send, and MPI_Recv that I could think of, but I think there's something I'm not understanding properly.

Marc
  • Possible duplicate: http://stackoverflow.com/questions/4948788/creating-a-counter-that-stays-synchronized-across-mpi-processes – suszterpatt Mar 19 '11 at 17:52

2 Answers


You won't be able to do this with the MPI-1 APIs you mention. However, MPI-2 provides "remote memory access" (one-sided) operations, which allow you to do exactly this sort of thing. I answered a very similar question here, based on the MPI-2 book and its online examples: Creating a counter that stays synchronized across MPI processes. There, only the counter increment is implemented; it doesn't do the broadcast. But do you really need such an operation? Wouldn't it be enough for the other tasks to simply read the value of the counter whenever they need it?
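To give a flavour of how it works, here is a minimal sketch of that MPI-2 approach, loosely following the "nxtval" example from the MPI-2 book: rank 0 exposes one slot per process in an RMA window, and the counter's value is the sum of the slots. The function names and layout below are my own, so treat this as an illustration rather than the book's code:

```c
#include <mpi.h>
#include <stdlib.h>

/* Create the counter window: rank 0 hosts one int slot per process,
 * all initially zero.  The counter's value is the sum of the slots. */
MPI_Win counter_create(MPI_Comm comm)
{
    int rank, nprocs, i, *base = NULL;
    MPI_Win win;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);
    if (rank == 0) {
        MPI_Alloc_mem(nprocs * sizeof(int), MPI_INFO_NULL, &base);
        for (i = 0; i < nprocs; i++)
            base[i] = 0;
        MPI_Win_create(base, nprocs * sizeof(int), sizeof(int),
                       MPI_INFO_NULL, comm, &win);
    } else {
        MPI_Win_create(NULL, 0, sizeof(int), MPI_INFO_NULL, comm, &win);
    }
    return win;
}

static int my_contrib = 0;  /* this rank's increments so far */

/* Atomically fetch the counter's current value and add one to it. */
int counter_fetch_and_inc(MPI_Win win, int rank, int nprocs)
{
    int *vals = malloc(nprocs * sizeof(int));
    int one = 1, i, sum;

    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
    /* Read every slot except our own; our own slot is the target of
     * the MPI_Accumulate, and the two must not overlap in one epoch. */
    if (rank > 0)
        MPI_Get(vals, rank, MPI_INT, 0,
                0, rank, MPI_INT, win);
    if (rank < nprocs - 1)
        MPI_Get(vals + rank + 1, nprocs - rank - 1, MPI_INT, 0,
                rank + 1, nprocs - rank - 1, MPI_INT, win);
    MPI_Accumulate(&one, 1, MPI_INT, 0, rank, 1, MPI_INT, MPI_SUM, win);
    MPI_Win_unlock(0, win);

    sum = my_contrib++;          /* our contribution, before this call */
    for (i = 0; i < nprocs; i++)
        if (i != rank)
            sum += vals[i];
    free(vals);
    return sum;                  /* counter value before this increment */
}
```

Because each rank only ever accumulates into its own slot, the MPI_Get and MPI_Accumulate inside a single lock/unlock epoch never touch overlapping memory, which is what makes the fetch-and-increment well defined under the MPI-2 RMA rules.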

Jonathan Dursi
  • Thank you very much. It would indeed be enough for the other tasks to check the value of the counter. What I really have is a "number of iterations left", and each job would grab chunks of, say, 10,000 iterations from that counter until none are left. All they need is a way of getting an up-to-date count of how many iterations remain, and that count has to stay correct even if, for example, one of the jobs crashes (which is why I can't just divide the iterations up evenly at the start and let each job do its thing). I'll look into the links, thanks again! – Marc Mar 19 '11 at 18:42

Can't you invert the scheme? Create a dedicated 'counter server': one process that does nothing but hand the current counter value to any process that asks for it.

This may not fit all scenarios, of course.
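For illustration, here is a minimal sketch of such a server using only MPI-1 point-to-point calls. Everything specific in it is made up for the example: the tag names, the chunk size, and the total amount of work.

```c
#include <mpi.h>

/* Sketch of a dedicated "counter server": rank 0 hands out chunks of
 * iterations on request; a reply of 0 tells the worker to stop.
 * Tags, chunk size, and total work are illustrative values. */
#define TAG_REQUEST 1
#define TAG_REPLY   2
#define CHUNK       10000L     /* iterations handed out per request */

int main(int argc, char **argv)
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (rank == 0) {                    /* the counter server */
        long remaining = 1000000L;      /* total iterations (assumed) */
        int workers_done = 0;
        while (workers_done < nprocs - 1) {
            MPI_Status st;
            int dummy;
            MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE,
                     TAG_REQUEST, MPI_COMM_WORLD, &st);
            long grant = remaining < CHUNK ? remaining : CHUNK;
            remaining -= grant;
            if (grant == 0)
                workers_done++;         /* this worker is finished */
            MPI_Send(&grant, 1, MPI_LONG, st.MPI_SOURCE,
                     TAG_REPLY, MPI_COMM_WORLD);
        }
    } else {                            /* the workers */
        for (;;) {
            int dummy = 0;
            long grant;
            MPI_Send(&dummy, 1, MPI_INT, 0, TAG_REQUEST, MPI_COMM_WORLD);
            MPI_Recv(&grant, 1, MPI_LONG, 0, TAG_REPLY,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            if (grant == 0)
                break;                  /* no iterations left */
            /* ... perform `grant` iterations of real work here ... */
        }
    }
    MPI_Finalize();
    return 0;
}
```

Since only the server touches the counter itself, a crashed worker simply stops asking for chunks and the remaining workers keep draining the counter, which matches the fault-tolerance requirement described in the comments above.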

9000
  • I thought this way could work, but the number of cores available on the cluster I'm running on is pretty limited, so I would feel bad using a whole core for nothing but keeping track of a counter. – Marc Mar 19 '11 at 18:44
  • Unless serving a counter takes a lot of resources, you can safely run it on a node alongside that node's normal load. As far as I understand, MPI runs on nodes with a regular OS, so you could just run a custom process for it. Serving a single number, even over a constantly reconnecting TCP socket, is efficient enough and shouldn't require much CPU. – 9000 Mar 19 '11 at 19:03