I'm working on an HPC cluster that has 144 cores.
I have 24 nodes and every node has 6 CPUs, like:
node 0: 0,1,2,3,4,5
node 1: 6,7,8,9,10,11
...
I'm using MPICH2, and I run my C executable like this: mpiexec -n 25 ./a.out
Inside a.out, rank 0 makes rank 1 a master (rank 1 is otherwise free), and the master rank 1 launches X parallel executions (X varies; sometimes 3, 6, or 7). Ranks 2,3,4,5,6,7 each run numactl -l --physcpubind=%d x.out. This works, but I get the error sched_setaffinity: Invalid argument.
ps -aF shows the correct binding in the PSR column for x.out.
Is sched_setaffinity: Invalid argument causing a problem now, or could it cause one in the future?
Thank you.

Krunal
Are you using a batch system? Usually these things involve the batch system... – Zulan May 18 '16 at 15:43
1 Answer
What you are looking for is 'process affinity'.
The affinity paradigm you choose tells the implementation how to map processes onto the hardware; you can map a process to a socket, a core, or a hardware thread.
MPICH has a '-bind-to' switch that enables this. For example:
mpiexec -bind-to core:144 -n ...
should bind your processes to 144 exclusive cores.
Try
mpiexec -bind-to -help
for more information.
Here is the user guide.

Sidharth N. Kashyap
I am trying to use `-bind-to` in a multiple program configuration, aka `mpiexec -bind-to core:1 -n 1 foo : -bind-to core:2 -n 1 bar` without success (duplicate setting: bind-to), what is the proper way to set process affinity per executable? – Samuel Dec 15 '21 at 10:44