Questions tagged [mvapich2]

MVAPICH2 is a high-performance MPI 2.2 implementation for OpenFabrics and other high-speed network interconnects. It is based on MPICH2 and MVICH and is provided as open source under the BSD license.

33 questions
1
vote
0 answers

Running the cpi example of MVAPICH2 with mpirun_rsh failed

I am a new user of MVAPICH2 and ran into trouble getting started. First, I think I installed it successfully with: ./configure --disable-fortran --enable-cuda; make -j 4; make install. There were no errors. But…
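
For context, the cpi test that ships with MVAPICH2 is a small pi-integration program. Below is a minimal sketch in the same spirit (not the exact cpi.c from the distribution) that could be built with mpicc and launched with mpirun_rsh or mpiexec:

    /* Minimal pi-integration test in the spirit of the bundled cpi example
     * (a sketch, not the exact cpi.c shipped with MVAPICH2). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, i;
        const int n = 1000000;            /* number of integration intervals */
        double h, sum = 0.0, mypi, pi;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank integrates 4/(1+x^2) over a strided subset of intervals. */
        h = 1.0 / (double)n;
        for (i = rank; i < n; i += size) {
            double x = h * ((double)i + 0.5);
            sum += 4.0 / (1.0 + x * x);
        }
        mypi = h * sum;

        /* Combine the partial sums on rank 0. */
        MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("pi is approximately %.16f\n", pi);

        MPI_Finalize();
        return 0;
    }

A typical launch would look something like "mpirun_rsh -np 4 host1 host1 host2 host2 ./cpi" (the host names are placeholders).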
1
vote
1 answer

MPIR Prefix in MPICH/MVAPICH

The following link describes the function-name prefix conventions in MPICH/MVAPICH (e.g., the MPID and MPIU prefixes): Function Name Prefix Convention in MPICH/MVAPICH. I am just wondering what the MPIR prefix represents (it is not explained in the link above). At…
Iman
  • 188
  • 9
1
vote
0 answers

MVAPICH2 R3 rendezvous protocol magic

I am wondering why the R3 protocol shows great performance when using many different buffers that exhaust the registration cache. Does it not need to pin and unpin the buffers provided for sending, or how does it hide this overhead? Is it…
1
vote
1 answer

How to find the InfiniBand installation path

I want to compile MVAPICH2 myself, but I am not sure where to find the psm.h file; it can't be found in the default places. Does anyone know which command I can use to locate the InfiniBand installation?
Daniel
  • 2,576
  • 7
  • 37
  • 51
0
votes
1 answer

mpirun on CPUs with specified IDs

Does anyone know how to execute mpirun on specified CPUs? While "mpirun -np 4" specifies the number of CPUs used, what I want to do here is specify CPU IDs. The OS is CentOS 5.6 and MVAPICH2 is used on a single node with 6x2 cores. Thank you…
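
MVAPICH2's user guide documents environment variables for explicit CPU binding (MV2_CPU_MAPPING in recent versions). As an alternative illustration, a rank can also pin itself to an explicit Linux CPU ID from inside the program; the rank-to-CPU mapping below (rank r to CPU 2r) is purely hypothetical:

    /* Sketch: bind the calling MPI rank to an explicit Linux CPU ID. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank;
        cpu_set_t set;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Hypothetical mapping: rank r runs on CPU ID 2*r. */
        CPU_ZERO(&set);
        CPU_SET(2 * rank, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0)
            perror("sched_setaffinity");

        printf("rank %d bound to CPU %d\n", rank, 2 * rank);

        MPI_Finalize();
        return 0;
    }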
0
votes
0 answers

MVAPICH2 + process spawning

I use MVAPICH2 version 2.3 to run parallel programs. On this site (http://mpi.deino.net/mpi_functions/MPI_Comm_spawn_multiple.html) I found an example of spawning processes via MPI_Comm_spawn_multiple(). After successfully compiling and executing…
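
For reference, a minimal parent-side sketch of MPI_Comm_spawn_multiple (the child executable names worker_a and worker_b are placeholders, not taken from the linked example):

    /* Sketch: spawn two different child executables from the parent job. */
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm children;
        char *commands[2]  = { "worker_a", "worker_b" };  /* placeholder names */
        int maxprocs[2]    = { 2, 2 };
        MPI_Info infos[2]  = { MPI_INFO_NULL, MPI_INFO_NULL };
        int errcodes[4];

        MPI_Init(&argc, &argv);

        /* Collective over MPI_COMM_WORLD; the spawn arguments are significant
         * only at root 0. Returns an intercommunicator to the children. */
        MPI_Comm_spawn_multiple(2, commands, MPI_ARGVS_NULL, maxprocs, infos,
                                0, MPI_COMM_WORLD, &children, errcodes);

        MPI_Comm_free(&children);
        MPI_Finalize();
        return 0;
    }

Each spawned child calls MPI_Init itself and can obtain the intercommunicator back to the parent via MPI_Comm_get_parent.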
0
votes
2 answers

Is it possible to migrate a process from a core on one node to a core on another node in MPI?

If I want to remap the process-to-core mapping of an MPI program, can I migrate processes after they are spawned? For example, Node 1 has P0, P3, P6 and Node 2 has P1, P4, P7. Can I migrate P1 to Node 1? Research papers on topology-aware MPI suggest remapping. That hints…
0
votes
1 answer

mpicc: where to find the equivalent of OpenMPI's "showme" when using MVAPICH2's mpicc

I want to find MVAPICH2's equivalent of OpenMPI's --showme flags. In particular, I'm trying to compile a library that I did not develop, which has the following in its local.mk file: # If using OpenMPI, and mpicc is in your path, then no modification…
0
votes
1 answer

Intel MPI benchmark fails when # bytes > 128: IMB-EXT

I just installed Linux and Intel MPI on two machines: (1) a quite old (~8 years) SuperMicro server with 24 cores (Intel Xeon X7542 x 4) and 32 GB memory, OS: CentOS 7.5; (2) a new HP ProLiant DL380 server with 32 cores (Intel Xeon Gold 6130…
Jae
  • 1
  • 2
0
votes
1 answer

MVAPICH 2.3 configure for multiple devices

Since MVAPICH deprecated the Nemesis interface as of version 2.3, is there any way to configure a single build with InfiniBand support that falls back to TCP when InfiniBand fails? Or do I have to keep two builds for the different network setups in my grid?
Houmles
  • 197
  • 11
0
votes
1 answer

Disabling the registration cache in MPICH 3.2

When using MVAPICH2 I export this variable: MV2_USE_LAZY_MEM_UNREGISTER=0. In the user guide this variable is described as: "Setting this parameter enables mvapich2 to use memory registration cache." If I needed to use this feature in MPICH, which…
Bub Espinja
  • 4,029
  • 2
  • 29
  • 46
0
votes
1 answer

CUDA-aware MPI for two GPUs within one K80

I am trying to optimize the performance of an MPI+CUDA benchmark called LAMMPS (https://github.com/lammps/lammps). Right now I am running with two MPI processes and two GPUs. My system has two sockets, and each socket connects to two K80s. Since each K80…
silence_lamb
  • 377
  • 1
  • 3
  • 12
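
A common pattern for this kind of setup is to select a GPU per process from the node-local rank and then pass device pointers directly to MPI. The sketch below assumes a CUDA-aware MVAPICH2 build run with MV2_USE_CUDA=1; MV2_COMM_WORLD_LOCAL_RANK is the local-rank variable MVAPICH2's launchers usually export, but verify the name against your version's user guide:

    /* Sketch: one GPU per rank, plus a device-to-device send via
     * CUDA-aware MPI (assumes MV2_USE_CUDA=1 and a CUDA-enabled build). */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        /* Pick a GPU from the node-local rank before MPI_Init. */
        const char *local = getenv("MV2_COMM_WORLD_LOCAL_RANK");
        int dev_count = 0;
        cudaGetDeviceCount(&dev_count);
        if (local && dev_count > 0)
            cudaSetDevice(atoi(local) % dev_count);

        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* With CUDA-aware MPI, device pointers go straight into MPI calls;
         * the data itself would normally be produced by a kernel first. */
        double *dbuf;
        cudaMalloc((void **)&dbuf, 1024 * sizeof(double));

        if (size >= 2) {
            if (rank == 0)
                MPI_Send(dbuf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            else if (rank == 1)
                MPI_Recv(dbuf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        }

        cudaFree(dbuf);
        MPI_Finalize();
        return 0;
    }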
0
votes
0 answers

Unclear MPI message

I'm running a parallel application and it runs properly until it suddenly aborts with the following message from a couple of ranks: [n18:mpi_rank_91][handle_cqe] Send desc error in msg to 103, wc_opcode=0 [n18:mpi_rank_91][handle_cqe] Msg from 103:…
Jacob
  • 59
  • 5
0
votes
1 answer

MVAPICH on multiple GPUs causes a segmentation fault

I'm using MVAPICH2 2.1 on a Debian 7 machine. It has multiple Tesla K40m cards. The code is as follows. #include #include #include #include #include int main(int argc, char** argv) { …
Hot.PxL
  • 1,902
  • 1
  • 17
  • 30
0
votes
1 answer

MVAPICH2 - supported network types

Can MVAPICH2 be installed on a normal Ethernet network, rather than InfiniBand or another HPC networking technology?
Maddy
  • 2,114
  • 7
  • 30
  • 50