Questions tagged [openmpi]

Open MPI is an open source implementation of the Message Passing Interface, a library for distributed memory parallel programming.

The Open MPI Project is an open-source implementation of the Message Passing Interface, a standardized and portable message-passing system designed to leverage the computational power of massively parallel, distributed memory computers.

Message passing is one of the most widely used distributed memory programming models, and MPI is the most widely used message-passing API. It offers two types of communication between processes: point-to-point and collective. MPI can run on both distributed memory and shared memory architectures.

An application using MPI usually consists of multiple processes running simultaneously, normally on different CPUs, which communicate with each other. This type of application is typically programmed using the SPMD (single program, multiple data) model, although most MPI implementations also support the MPMD model.
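
As a concrete illustration of the point-to-point communication and SPMD style described above, here is a minimal sketch using the C API from C++; the file, compiler, and launcher names (hello.cpp, mpicxx, mpirun) assume a typical Open MPI installation:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // this process's id
        MPI_Comm_size(MPI_COMM_WORLD, &size);   // total number of processes

        if (rank == 0) {
            // Point-to-point: rank 0 sends one integer to every other rank.
            for (int dest = 1; dest < size; ++dest) {
                int payload = 42 + dest;
                MPI_Send(&payload, 1, MPI_INT, dest, /*tag=*/0, MPI_COMM_WORLD);
            }
        } else {
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("rank %d of %d received %d\n", rank, size, payload);
        }

        MPI_Finalize();
        return 0;
    }

Compile and run with something like mpicxx hello.cpp -o hello && mpirun -np 4 ./hello; every process runs the same binary and branches on its rank.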

More information about the MPI standard may be found on the official MPI Forum website, in the official documentation, and in the Open MPI documentation.

1341 questions
5 votes, 3 answers

Cost of OpenMPI in C++

I have the following C++ program, which uses no communication; the same identical work is done on all cores. I know that this doesn't use parallel processing at all: unsigned n =…
datguyray
  • 131
  • 7
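
The program in the excerpt is truncated, but the setup it describes (identical, communication-free work on every rank) can be sketched as follows; work() and the loop bound are placeholders, not the asker's code:

    #include <mpi.h>
    #include <cstdio>

    // Placeholder for the identical, communication-free work every rank performs.
    static unsigned long work(unsigned n) {
        unsigned long sum = 0;
        for (unsigned i = 0; i < n; ++i) sum += i;   // same loop on every rank
        return sum;
    }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        unsigned long result = work(100000000u);
        double t1 = MPI_Wtime();

        // Every rank computes the same result: the run is replicated, not parallelized.
        std::printf("rank %d: result=%lu, time=%.3f s\n", rank, result, t1 - t0);

        MPI_Finalize();
        return 0;
    }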
5 votes, 1 answer

How to modify MPI blocking send and receive to non-blocking

I am trying to understand the difference between blocking and non-blocking message passing mechanisms in parallel processing using MPI. Suppose we have the following blocking code: #include #include #include "mpi.h" int main…
Mike H.
  • 51
  • 4
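
The code in the excerpt is truncated, but the general conversion looks like the sketch below: replace MPI_Send/MPI_Recv with MPI_Isend/MPI_Irecv, then complete the requests with MPI_Waitall before touching the buffers:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int send_val = rank, recv_val = -1;
        MPI_Request reqs[2];

        // Non-blocking version of a rank 0 <-> rank 1 exchange:
        // MPI_Isend/MPI_Irecv return immediately; MPI_Waitall completes them.
        if (rank == 0) {
            MPI_Isend(&send_val, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &reqs[0]);
            MPI_Irecv(&recv_val, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &reqs[1]);
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
            std::printf("rank 0 got %d\n", recv_val);
        } else if (rank == 1) {
            MPI_Isend(&send_val, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &reqs[0]);
            MPI_Irecv(&recv_val, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &reqs[1]);
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
            std::printf("rank 1 got %d\n", recv_val);
        }

        MPI_Finalize();
        return 0;
    }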
5 votes, 3 answers

OpenMP & MPI explanation

A few minutes ago I stumbled upon some text which reminded me of something that has been on my mind for a while, but I had nowhere to ask. So, in the hope that this may be the place, where people have hands-on experience with both, I was wondering if…
Friedrich
  • 51
  • 1
  • 2
5 votes, 1 answer

Error when running OpenMPI based library

I have installed the Open MPI library from the standard apt-get package available in Ubuntu. I run Python code which calls MPI libraries, and I get the following error. Any idea what the source of the error is? Is it an Open MPI configuration error? How to fix…
godot101
  • 305
  • 1
  • 4
  • 12
5 votes, 0 answers

Executing MPI on a heterogeneous cluster (1 count of MPI_INT = consistent?)

I am trying to execute an MPI program across a heterogeneous cluster: one node running Ubuntu 12.04 (64-bit) and the other CentOS 6.4 (64-bit). I compile a simple MPI program on CentOS, scp it over to Ubuntu, and test that it works with 1 or many MPI…
ricky116
  • 744
  • 8
  • 21
5 votes, 2 answers

Avoid the "Accept Incoming Network Connections" dialog in mpirun on Mac OS X

I am an MPI beginner. I am trying to run the simplest MPI "hello world" code on my MacBook running Mac OS X Mountain Lion. It has only one processor, but it has 4 cores. The C++ code goes like this: #include #include "mpi.h" using namespace…
Guddu
  • 2,325
  • 2
  • 18
  • 23
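
One workaround often suggested for single-machine runs, offered here as an assumption rather than something confirmed in the thread, is to keep Open MPI's TCP traffic on the loopback interface so the firewall never has to ask about external connections:

    # Hypothetical invocation: restrict Open MPI's TCP transport and
    # out-of-band channel to the loopback interface (lo0 on Mac OS X).
    mpirun --mca btl_tcp_if_include lo0 --mca oob_tcp_if_include lo0 -np 4 ./hello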
5 votes, 1 answer

mpirun --cpu-set vs. --rankfile (Open MPI 1.4.5)

I want to accurately pin my MPI processes to a list of (physical) cores. I refer to the following points of the mpirun --help output: -cpu-set|--cpu-set Comma-separated list of ranges specifying logical …
el_tenedor
  • 644
  • 1
  • 8
  • 19
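
Since the question is about pinning ranks to explicit cores, here is an illustrative rankfile; the host names are hypothetical and the slot syntax follows the socket:core form shown in the Open MPI 1.4 mpirun man page:

    # myrankfile: rank N=<host> slot=<socket>:<core>
    rank 0=node01 slot=0:0
    rank 1=node01 slot=0:1
    rank 2=node02 slot=0:0
    rank 3=node02 slot=0:1

    # launch with the rankfile instead of --cpu-set
    mpirun -np 4 --rankfile myrankfile ./app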
5 votes, 1 answer

An "atomic" call to cout in MPI

I am interested in whether there is a command or technique within Open MPI to make a call that writes to stdout (or, for that matter, any stream) atomic. What I have noticed is that during the execution of MPI programs, calls to write to cout (or…
Madeleine P. Vincent
  • 3,361
  • 5
  • 25
  • 30
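
MPI itself offers no atomic stdout, so the usual workarounds are to assemble each message into a single write and to serialize or gather output. A sketch of the serialize-by-rank workaround follows; note that barriers still do not strictly guarantee ordering, because mpirun forwards each rank's stdout asynchronously:

    #include <mpi.h>
    #include <iostream>
    #include <sstream>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Build the whole message locally first, then write it in one call;
        // interleaving between ranks is still possible, but each line stays
        // intact far more often than with many small writes.
        std::ostringstream msg;
        msg << "rank " << rank << " of " << size << " reporting\n";

        // Crude serialization: ranks take turns, separated by barriers.
        for (int turn = 0; turn < size; ++turn) {
            if (turn == rank) {
                std::cout << msg.str() << std::flush;
            }
            MPI_Barrier(MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }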
5 votes, 2 answers

Parallel Demonstration Program

An assignment that I've just now completed requires me to create a set of scripts that can configure random Ubuntu machines as nodes in an MPI computing cluster. This has all been done and the nodes can communicate with one another properly.…
Lilienthal
  • 4,327
  • 13
  • 52
  • 88
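
For a demonstration program that actually exercises the cluster, a classic choice is estimating pi by numerical integration with MPI_Reduce; a self-contained sketch:

    #include <mpi.h>
    #include <cstdio>

    // Estimate pi by integrating 4/(1+x^2) over [0,1]; each rank handles a
    // strided subset of the intervals and rank 0 collects the partial sums.
    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long n = 10000000L;           // number of intervals
        const double h = 1.0 / n;
        double local = 0.0;
        for (long i = rank; i < n; i += size) {
            double x = h * (i + 0.5);
            local += 4.0 / (1.0 + x * x);
        }
        local *= h;

        double pi = 0.0;
        MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) std::printf("pi ~= %.12f with %d processes\n", pi, size);

        MPI_Finalize();
        return 0;
    }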
5 votes, 1 answer

How to use MPI_Irecv?

From OpenMPI docs: C++ syntax Request Comm::Irecv(void* buf, int count, const Datatype& datatype, int source, int tag) const So I imagine I do something like: MPI::Request req; req = MPI_Irecv(&ballChallenges[i], 2, MPI_INT, i, TAG_AT_BALL,…
Jiew Meng
  • 84,767
  • 185
  • 495
  • 805
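
The snippet in the question mixes the C++ binding (MPI::Request, Comm::Irecv) with the C function MPI_Irecv. Sticking to the C API, a minimal sketch looks like this; TAG_AT_BALL and ballChallenges are borrowed from the question, everything else is illustrative:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int TAG_AT_BALL = 7;          // tag name borrowed from the question
        int ballChallenges[2] = {0, 0};

        if (rank == 0) {
            // Post the non-blocking receive with the C API: the request comes
            // back through the last argument, not the return value.
            MPI_Request req;
            MPI_Irecv(ballChallenges, 2, MPI_INT, 1, TAG_AT_BALL,
                      MPI_COMM_WORLD, &req);

            // ... useful work could overlap the communication here ...

            MPI_Status status;
            MPI_Wait(&req, &status);        // complete the receive
            std::printf("got %d %d\n", ballChallenges[0], ballChallenges[1]);
        } else if (rank == 1) {
            int data[2] = {3, 4};
            MPI_Send(data, 2, MPI_INT, 0, TAG_AT_BALL, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }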
5 votes, 3 answers

How to build boost with mpi support on homebrew?

According to this post (https://github.com/mxcl/homebrew/pull/2953), the flag "--with-mpi" should enable boost_mpi build support for the related homebrew formula, so I am trying to install boost via homebrew like this: brew install boost…
Chris
  • 3,245
  • 4
  • 29
  • 53
4 votes, 2 answers

MPI-size and number of OpenMP-Threads

I am trying to write a hybrid OpenMP/MPI program, and therefore want to understand the relationship between the number of OpenMP threads and MPI processes. To that end, I created a small test program: #include #include #include…
arc_lupus
  • 3,942
  • 5
  • 45
  • 81
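
A hybrid test program along the lines the question describes might look like the sketch below (names and counts are illustrative); the total worker count is the number of MPI processes times the OpenMP threads per process:

    #include <mpi.h>
    #include <omp.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        // Ask for threading support; FUNNELED is enough when only the
        // master thread makes MPI calls.
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        #pragma omp parallel
        {
            // Threads multiply *within* each MPI process.
            #pragma omp critical
            std::printf("MPI rank %d/%d, OpenMP thread %d/%d\n",
                        rank, size, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

Built with something like mpicxx -fopenmp hybrid.cpp and run with OMP_NUM_THREADS=4 mpirun -np 2 ./a.out, this prints one line per (rank, thread) pair.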
4 votes, 0 answers

Permanently allocating MPI communicator for C/C++

An MPI communicator is created in Fortran and passed to C, which in turn returns a pointer (c_ptr) to the C communicator. This is done to avoid constructing the C communicator for every C function call. But when I try to reuse the C communicator in ReuseComm, I…
Shibli
  • 5,879
  • 13
  • 62
  • 126
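
A sketch of the usual pattern for this, with hypothetical function names: convert the Fortran handle once with MPI_Comm_f2c and cache the resulting C communicator on the C side, rather than passing raw pointers back and forth:

    #include <mpi.h>

    // Cached C communicator, converted once from the Fortran handle.
    static MPI_Comm cached_comm = MPI_COMM_NULL;

    // Called once from Fortran with the Fortran communicator handle.
    extern "C" void set_comm(MPI_Fint *f_comm) {
        cached_comm = MPI_Comm_f2c(*f_comm);
    }

    // Every later C/C++ call just reuses the cached handle.
    extern "C" void reuse_comm(int *value) {
        MPI_Bcast(value, 1, MPI_INT, 0, cached_comm);
    }

The cached MPI_Comm stays valid as long as the underlying communicator is not freed on the Fortran side.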
4 votes, 2 answers

How could I run Open MPI under Slurm

I am unable to run Open MPI under Slurm through a Slurm script. In general, I am able to obtain the hostname and run Open MPI on my machine: $ mpirun hostname myHost $ cd NPB3.3-SER/ && make ua CLASS=B && mpirun -n 1 bin/ua.B.x inputua.data #…
alper
  • 2,919
  • 9
  • 53
  • 102
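
A minimal batch-script sketch for the scenario above; the #SBATCH values are placeholders, and it assumes Open MPI was built with Slurm support so mpirun picks up the allocation automatically (bin/ua.B.x and inputua.data are the names from the question):

    #!/bin/bash
    #SBATCH --job-name=ua-test
    #SBATCH --nodes=1
    #SBATCH --ntasks=4
    #SBATCH --time=00:10:00

    # No -n or hostfile needed: Open MPI reads the Slurm allocation.
    mpirun bin/ua.B.x inputua.data

Submit with sbatch job.sh; depending on how Slurm and Open MPI were built, srun --mpi=pmi2 is a common alternative launcher.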
4 votes, 1 answer

How to disable C++ wrappers of MPI?

My project is mostly C and Fortran, but I had to use MPI from inside a C++ file. I don't want to use the C++ wrappers or link against libmpi_cxx.so; I use only the plain C interface. But just including mpi.h in my C++ file is enough for the linker to…
lvella
  • 12,754
  • 11
  • 54
  • 106
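
For reference, Open MPI's mpi.h honors a preprocessor guard that skips the C++ bindings entirely (MPICH has an analogous one), which is the usual way to avoid pulling in libmpi_cxx.so:

    // Define before including mpi.h (or pass -DOMPI_SKIP_MPICXX on the
    // compile line) to get only the plain C interface in C++ files.
    #define OMPI_SKIP_MPICXX 1      /* Open MPI */
    #define MPICH_SKIP_MPICXX 1     /* MPICH    */
    #include <mpi.h>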