Questions tagged [openmpi]

Open MPI is an open source implementation of the Message Passing Interface, a library for distributed memory parallel programming.

The Open MPI Project is an open-source implementation of the Message Passing Interface, a standardized and portable message-passing system designed to leverage the computational power of massively parallel, distributed memory computers.

Message passing is one of the most widely used distributed memory programming models, and MPI is the most widely used message-passing API. It offers two types of communication between processes: point-to-point and collective. MPI can run on both distributed and shared memory architectures.

An application using MPI usually consists of multiple simultaneously running processes, normally on different CPUs, which communicate with each other. This type of application is typically programmed using the SPMD (single program, multiple data) model; nevertheless, most MPI implementations also support the MPMD model.

More information about the MPI standard may be found on the official MPI Forum website, in the official documentation, and in the Open MPI documentation.

1341 questions
7 votes, 1 answer

Install OpenMPI on Ubuntu 17.10

I am using the following command for OpenMPI installation on Ubuntu 17.10: sudo apt-get install openmpi-bin openmpi-common openssh-client openssh-server libopenmpi1.3 libopenmpi-dbg libopenmpi-dev. However, I get the following error: E: Unable to…
LM10 • 1,089 • 4 • 10 • 16
7 votes, 1 answer

Initializing MPI cluster with snowfall R

I've been trying to run Rmpi and snowfall on my university's clusters but for some reason no matter how many compute nodes I get allocated, my snowfall initialization keeps running on only one node. Here's how I'm initializing…
6 votes, 1 answer

Sending an int array via MPI_Send

I'm new to MPI and I would like to send an int array via MPI_Send to another process. // Code example int main(int argc, char ** argv) { int * array; int tag=1; int size; int rank; MPI_Status status; MPI_Init (&argc,&argv); …
John M. • 245 • 1 • 3 • 13
6 votes, 1 answer

What is the proper way to handle MPI communicators in Fortran?

I read that it is recommended to use the MPI module rather than include the mpif.h file. However, I get the following error: Error: There is no specific subroutine for the generic ‘mpi_comm_split’ when I run this program program hello_world use…
Tarek • 1,060 • 4 • 17 • 38
6 votes, 2 answers

How to enable the multithreading flag in Open MPI on Linux?

I tried using the MPI_THREAD_MULTIPLE option in Open MPI. For that to work I found that I need to enable the multiple-thread option in the Open MPI configuration, but I don't know how to do that. Can someone please help me with this? Thank you in advance. I checked…
Murali krishna • 823 • 1 • 8 • 23
6 votes, 1 answer

Undefined references when mixing Intel C++ and Fortran in OpenMPI

I have a compilation problem using Open MPI 1.8.4 and Intel Compiler v15.2. This is a large code that uses Fortran and C++. The code was previously compiled using Open MPI 1.6, and the issue was not there. Here is the content of the makefile:…
stas_s • 71 • 1 • 4
6 votes, 1 answer

How to replace --cpus-per-proc with --map-by in OpenMPI

I need to update some old codes to work with the most recent version of OpenMPI, but I'm very confused by the new --map-by system. In particular, I'm not sure how to replace --cpus-per-proc N. Several websites have suggested using --map-by…
Kat S. • 63 • 4
6 votes, 2 answers

MPI Send and Recv Hangs with Buffer Size Larger Than 64kb

I am trying to send data from process 0 to process 1. This program succeeds when the buffer size is less than 64kb, but hangs if the buffer gets much larger. The following code should reproduce this issue (should hang), but should succeed if n is…
Ruvu • 101 • 8
6 votes, 1 answer

Bizarre deadlock in MPI_Allgather

After much Googling, I have no idea what's causing this issue. Here it is: I have a simple call to MPI_Allgather in my code which I have double, triple, and quadruple-checked to be correct (send/receive buffers are properly sized; the send/receive…
Jacob • 83 • 7
6 votes, 1 answer

Is there a limit for the message size in mpi using boost::mpi?

I'm currently writing a simulation using boost::mpi on top of Open MPI, and everything works great. However, once I scale up the system and therefore have to send larger std::vectors, I get errors. I've reduced the issue to the following…
tik • 63 • 5
6 votes, 1 answer

How to build openmpi with homebrew and gcc-4.9?

By default brew install openmpi uses clang to create its wrapper. I need to specify gcc-4.9(Homebrew installed) for the wrapper. I have tried $export CC=gcc-4.9 $brew install openmpi $brew install --cc=gcc-4.9 openmpi $brew install --with-gcc49…
ilciavo • 3,069 • 7 • 26 • 40
6 votes, 0 answers

C++ program with Open MPI doesn't work without internet connection

There is a problem with MPI - program works when there is an internet connection on my PC, but doesn't work without it. I got this error: It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many…
6 votes, 1 answer

MPI_ERR_TRUNCATE: On Broadcast

I have an int I intend to broadcast from root (rank==(FIELD=0)). int winner; if (rank == FIELD) { winner = something; } MPI_Barrier(MPI_COMM_WORLD); MPI_Bcast(&winner, 1, MPI_INT, FIELD, MPI_COMM_WORLD); MPI_Barrier(MPI_COMM_WORLD); if (rank…
Jiew Meng • 84,767 • 185 • 495 • 805
6 votes, 5 answers

assign two MPI processes per core

How do I assign 2 MPI processes per core? For example, if I do mpirun -np 4 ./application then it should use 2 physical cores to run 4 MPI processes (2 processes per core). I am using Open MPI 1.6. I did mpirun -np 4 -nc 2 ./application but wasn't…
codereviewanskquestions • 13,460 • 29 • 98 • 167
6 votes, 1 answer

MPI Internals: Communication implementation between processes

I am trying to figure out how the actual process communication happens inside MPI communicators. I have 8 nodes, each with 12 cores (96 instances running). Each process has a unique rank assigned, and processes are able to communicate between each…
davidlt • 1,007 • 2 • 11 • 17