
I want to write a small program in MPI (using the Java implementation). A double variable x is declared, and each process repeatedly modifies it (say, with a random modification). When a process i finds a new value of x that is smaller than the old one, it should broadcast that value to the other processes so they can update their copies of x.

I have looked at the Bcast function in MPI, but in all the examples it is called by every process, whether the variable has been modified or not.
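
For reference, this is roughly what those examples look like: every rank makes the matching Bcast call with the same root, even if its own copy of x did not change (a minimal sketch, assuming the mpiJava-style API used by e.g. MPJ Express; the class name and values are just placeholders):

```java
import mpi.MPI;

public class BcastSketch {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);

        double[] x = new double[1];
        if (MPI.COMM_WORLD.Rank() == 0) {
            x[0] = 42.0;   // placeholder: only the root holds the "new" value
        }

        // Every rank must make this call, whether or not its own x changed;
        // Bcast is collective over the whole communicator.
        MPI.COMM_WORLD.Bcast(x, 0, 1, MPI.DOUBLE, 0);

        System.out.println("rank " + MPI.COMM_WORLD.Rank() + ": x = " + x[0]);
        MPI.Finalize();
    }
}
```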

Study Learn
That's right, that's how broadcasts work - all the processes in a communicator take part. Until the processes exchange messages, one process doesn't know that the value of a variable on another process has changed. You might want to look at other collectives such as `mpi_gather`, `mpi_scatter` or `mpi_allgather`. There are others too. – High Performance Mark Sep 23 '14 at 17:43

1 Answer


This is one of those scenarios that are quite easy to implement in a multithreaded environment (e.g. OpenMP or Java threads) and very hard, if not impossible, to implement efficiently in MPI. The usual approach is to refactor your algorithm so that the best value can be communicated every N steps (with N possibly equal to 1, though that could be very inefficient due to the communication overhead) and then use Intracomm.Allreduce with the reduction operation set to MPI.MIN. Each process provides its own minimum value and the reduction returns the global minimum to all of them. If you would also like to know the rank of the process that holds the global minimum value, use MPI.MINLOC instead.
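
A minimal sketch of that pattern, assuming the mpiJava-style API (e.g. MPJ Express); the iteration count, the value of N, and the random "modification" are placeholders:

```java
import java.util.Random;
import mpi.MPI;

public class MinSearch {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();

        double[] localMin  = { Double.MAX_VALUE };
        double[] globalMin = { Double.MAX_VALUE };
        Random rng = new Random(rank);   // placeholder "random modification"

        final int N = 10;                // exchange the best value every N steps
        for (int step = 1; step <= 1000; step++) {
            double candidate = rng.nextDouble();
            if (candidate < localMin[0]) {
                localMin[0] = candidate;
            }

            if (step % N == 0) {
                // Every process contributes its local minimum; the reduction
                // with MPI.MIN returns the global minimum to all processes.
                MPI.COMM_WORLD.Allreduce(localMin, 0, globalMin, 0, 1,
                                         MPI.DOUBLE, MPI.MIN);
                localMin[0] = globalMin[0];
            }
        }

        if (rank == 0) {
            System.out.println("global minimum = " + globalMin[0]);
        }
        MPI.Finalize();
    }
}
```

Calling Allreduce only every N-th step keeps the collective out of the hot loop while still propagating the best value to every process.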

If you are trying to implement parallel genetic optimisation, there are some C++ libraries that might give you some inspiration.

Hristo Iliev
I see, this is always a problem when talking about multi-machine/distributed systems. Yeah, your idea is good, but as you said it will be inefficient because of the communication overhead when N=1 ... and if N>1 it will not be optimized enough .. Thanks for these C++ libraries for parallel genetic optimization ... My domain does not focus on that, but it is always interesting to read new things – Study Learn Sep 23 '14 at 19:15