
With Intel MPI I can pin the MPI processes started by mpirun to certain cores on a node. For example, with 24 cores and Intel MPI:

mpirun -np 12 -genv I_MPI_PIN_PROCESSOR_LIST=0-11 ./some.exe &
mpirun -np 12 -genv I_MPI_PIN_PROCESSOR_LIST=12-23 ./other.exe &

With Open MPI there is the option --bind-to with one of these arguments: none, hwthread, core, l1cache, l2cache, l3cache, socket, numa, board.

I noticed that --bind-to socket binds process 0 to socket 0, process 1 to socket 1, and so on. This round-robin placement is bad for my case: for best communication among the some.exe processes, all of them should sit on one socket, and the other.exe processes on the other socket.

Is there an equivalent pinning option in Open MPI?

Lukas
  • try `mpirun --cpu-set 0,1,2,3,4,5,6,7,8,9,10,11 ./some.exe` – Gilles Gouaillardet May 14 '20 at 08:35
  • does not work, as `--report-bindings` then tells me `MCW rank 0 is not bound (or bound to all available processors)`, `MCW rank 1 is not bound (or bound to all available processors)`, and so on – Lukas Oct 20 '22 at 17:03
  • the binding report could be wrong here ... try `mpirun --tag-output ... grep Cpus_allowed_list /proc/self/status` to confirm how tasks are pinned. – Gilles Gouaillardet Oct 20 '22 at 23:54
  • you can also give a try to the `--map-by core` option. Use `--map-by core --bind-to socket` instead if you only want to pin on sockets instead of cores. – Gilles Gouaillardet Oct 20 '22 at 23:56
  • `--map-by l2cache` is the closest I found. It assigns all ranks to the same l2cache/socket. But if I run another mpirun command, it assigns the same processors again -> inefficient. Anyway, I found it to work well enough without process pinning. – Lukas Oct 21 '22 at 22:08
  • 1
    Two `mpirun` "instances" are independent and have no knowledge of each other, so yes, both jobs will end up time sharing. Consider using a resource manager such as Slurm in order to prevent this. – Gilles Gouaillardet Oct 22 '22 at 01:47

0 Answers