Questions tagged [openmpi]

Open MPI is an open-source implementation of the Message Passing Interface (MPI), a standardized API for distributed-memory parallel programming.

The Open MPI Project is an open-source implementation of the Message Passing Interface, a standardized and portable message-passing system designed to leverage the computational power of massively parallel, distributed-memory computers.

Message passing is one of the most widely used distributed-memory programming models, and MPI is the most widely used message-passing API. It offers two types of communication between processes: point-to-point and collective. MPI can run on both distributed-memory and shared-memory architectures.
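
As a minimal sketch of the two styles in C (the rank numbers, tag, and payload below are arbitrary): rank 0 passes a value to rank 1 with point-to-point MPI_Send/MPI_Recv, and then a collective MPI_Bcast distributes it to every process in the communicator.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Point-to-point: rank 0 sends one int to rank 1 (tag 42 is arbitrary). */
        if (rank == 0) {
            value = 17;
            if (size > 1)
                MPI_Send(&value, 1, MPI_INT, 1, 42, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 42, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        /* Collective: rank 0 broadcasts the value to every rank at once. */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
        printf("rank %d of %d has value %d\n", rank, size, value);

        MPI_Finalize();
        return 0;
    }

Compiled with the mpicc wrapper and launched with, for example, mpirun -np 4 ./a.out.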

An MPI application usually consists of multiple processes running simultaneously, typically on different CPUs, which communicate with each other. Such applications are normally written in the SPMD (single program, multiple data) style, although most MPI implementations also support the MPMD (multiple program, multiple data) model, in which different executables are launched as ranks of the same job.
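
A sketch of the SPMD style, where every process runs the same executable and derives its share of the work from its rank (the problem size and decomposition below are arbitrary); under MPMD, Open MPI's mpirun can instead combine different executables into one job, e.g. mpirun -np 1 ./coordinator : -np 7 ./worker.

    #include <mpi.h>
    #include <stdio.h>

    /* SPMD sketch: every rank runs this same program and works on its own
     * slice of the data; the slice is derived from the rank. */
    int main(int argc, char *argv[]) {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank sums its own block of 0..n-1; the last rank also takes
         * any remainder when n is not divisible by the number of ranks. */
        int n = 100, chunk = n / size;
        int lo = rank * chunk;
        int hi = (rank == size - 1) ? n : lo + chunk;
        long local = 0, total = 0;
        for (int i = lo; i < hi; i++) local += i;

        /* Collective reduction: rank 0 receives the sum of all local results. */
        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("sum 0..%d = %ld\n", n - 1, total);

        MPI_Finalize();
        return 0;
    }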

More information about the MPI standard may be found on the official MPI Forum website, in the official documentation, and in the Open MPI documentation.

1341 questions
13 votes, 4 answers

Is it possible to send data from a Fortran program to Python using MPI?

I am working on a tool to model wave energy converters, where I need to couple two software packages to each other. One program is written in Fortran, the other one in C++. I need to send information from the Fortran program to the C++ program at…
13 votes, 2 answers

Open MPI - mpirun exits with error on simple program

I have recently installed OpenMPI on my computer and when I try to run a simple Hello World program, it exits with the following error: ------------------------------------------------------- Primary job terminated normally, but 1 process returned a…
fenusa0 • 146
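
Open MPI prints the message quoted in the question above when some rank exits with a non-zero status or aborts, so the report is usually about the program rather than the launcher. A generic sketch that provokes it deliberately (the exit code 3 is arbitrary):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[]) {
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("hello from rank %d\n", rank);

        /* Any rank exiting with a non-zero status (or calling MPI_Abort)
         * makes mpirun report that a process returned a non-zero exit code. */
        if (rank == 1) {
            MPI_Finalize();
            exit(3);   /* arbitrary non-zero status, for demonstration */
        }

        MPI_Finalize();
        return 0;
    }
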
13 votes, 3 answers

Having Open MPI related issues while making CUDA 5.0 samples (Mac OS X ML)

When I'm trying to make the CUDA 5.0 samples, an error appears: Makefile:79: * MPI not found, not building simpleMPI.. Stop. I've tried to download and build the latest version of Open MPI referring to Open MPI "FAQ / Platforms / OS X / 6. How do I…
Geradlus_RU • 1,466
12 votes, 4 answers

Why Do All My Open MPI Processes Have Rank 0?

I'm writing a parallel program using Open MPI. I'm running Snow Leopard 10.6.4, and I installed Open MPI through the homebrew package manager. When I run my program using mpirun -np 8 ./test, every process reports that it has rank 0, and believes…
aperiodic • 121
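
A frequent cause of the symptom described above is that the mpirun found on the PATH belongs to a different MPI installation than the library the program was linked against, so each process starts as its own singleton with rank 0 and size 1. A small check, assuming a library recent enough to provide MPI_Get_library_version (MPI-3):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size, len;
        char version[MPI_MAX_LIBRARY_VERSION_STRING];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_library_version(version, &len);

        /* If size is 1 on every process even under "mpirun -np 8", the
         * launcher and the library almost certainly come from different
         * installations. */
        printf("rank %d / size %d, library: %s\n", rank, size, version);

        MPI_Finalize();
        return 0;
    }
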
12 votes, 5 answers

Error when starting Open MPI in MPI_Init via Python

I am trying to access a shared library with OpenMPI via python, but for some reason I get the following error message: [Geo00433:01196] mca: base: component_find: unable to open /usr/li/openmpi/lib/openmpi/mca_paffinity_hwloc: perhaps a missing…
Jannis • 173
12 votes, 2 answers

Is there an easy way to use clang with Open MPI?

OpenMPI strongly recommends using their wrapper compilers. Behind the scenes, their wrapper compiler mpiCC calls gcc (by default?) and adds the necessary flags for MPI code to compile. However, other compilers give more descriptive error messages…
Ammar • 573
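
Two wrapper features of Open MPI are relevant to the question above: the wrapper can print the flags it would add, and the underlying compiler can be overridden through environment variables. Typical commands (the source file names are placeholders):

    # See what the wrapper would actually invoke
    mpicc  --showme
    mpicxx --showme:compile
    mpicxx --showme:link

    # Point the wrappers at clang/clang++ instead of the default compiler
    OMPI_CC=clang  mpicc  -c foo.c
    OMPI_CXX=clang++ mpicxx -o app app.cpp
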
12 votes, 3 answers

Running OpenMPI program without mpirun

I'm using gcc and OpenMPI. Usually I run MPI programs using the mpirun wrapper -- for example, mpirun -np 4 myprogram to start 4 processes. However, I was wondering if it's possible to easily generate a binary which will do that automatically…
Jay • 9,585
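
Related to the question above: Open MPI supports singleton startup, so running the binary directly yields an MPI_COMM_WORLD of size 1, and the standard MPI_Comm_spawn call can then start further processes from inside the program. A rough sketch, assuming singleton spawn works in the installed version (the count of 3 extra copies is arbitrary):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Comm parent, children;
        int world_size;

        MPI_Init(&argc, &argv);
        MPI_Comm_get_parent(&parent);
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);

        if (parent != MPI_COMM_NULL) {
            /* This copy was started by the MPI_Comm_spawn call below. */
            printf("spawned worker, world size %d\n", world_size);
        } else if (world_size == 1) {
            /* Started directly (./app), i.e. as an Open MPI singleton:
             * spawn 3 more copies of the same binary. */
            MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 3, MPI_INFO_NULL, 0,
                           MPI_COMM_SELF, &children, MPI_ERRCODES_IGNORE);
            printf("spawned 3 additional copies of %s\n", argv[0]);
        } else {
            /* Started under mpirun in the usual way. */
            printf("rank in a normal mpirun job, world size %d\n", world_size);
        }

        MPI_Finalize();
        return 0;
    }
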
11 votes, 1 answer

fault tolerance in MPICH/OpenMPI

I have two questions: Q1. Is there a more efficient way to handle the error situation in MPI, other than checkpoint/rollback? I see that if a node "dies", the program halts abruptly. Is there any way to go ahead with the execution after a node…
Param • 197
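
Some background on the question above: MPI_COMM_WORLD defaults to the MPI_ERRORS_ARE_FATAL handler, which is why the job halts abruptly. Installing MPI_ERRORS_RETURN lets the caller see error codes, although classic MPICH/Open MPI still make no promise that the survivors can keep communicating after a node dies; continuing through real failures needs extensions such as ULFM or checkpoint/restart. A sketch of the error-handler part (the deliberately invalid destination rank exists only to trigger a returned error):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size, rc, len, data = 42;
        char msg[MPI_MAX_ERROR_STRING];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Return error codes to the caller instead of aborting the whole job. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        /* Deliberately send to a rank that does not exist, just to show that
         * the call now reports the failure instead of killing every process. */
        rc = MPI_Send(&data, 1, MPI_INT, size, 0, MPI_COMM_WORLD);
        if (rc != MPI_SUCCESS) {
            MPI_Error_string(rc, msg, &len);
            fprintf(stderr, "rank %d: send failed: %s\n", rank, msg);
        }

        MPI_Finalize();
        return 0;
    }
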
11 votes, 2 answers

Probe seems to consume the CPU

I've got an MPI program consisting of one master process that hands off commands to a bunch of slave processes. Upon receiving a command, a slave just calls system() to do it. While the slaves are waiting for a command, they are consuming 100% of…
Ben Kovitz • 4,920
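
Background on the busy-waiting described above: Open MPI's blocking receive and probe typically poll, so an idle slave spins at 100% CPU. A common workaround, assuming some extra latency is acceptable, is to poll with MPI_Iprobe and sleep between polls; a sketch of the waiting loop (the 10 ms interval is arbitrary):

    #include <mpi.h>
    #include <unistd.h>   /* usleep */

    /* Wait for a command from the master (rank 0) without spinning at 100% CPU:
     * poll with MPI_Iprobe and sleep between polls. */
    void wait_for_command(int *cmd)
    {
        int flag = 0;
        MPI_Status status;

        while (!flag) {
            MPI_Iprobe(0, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
            if (!flag)
                usleep(10000);   /* 10 ms: trades a little latency for an idle CPU */
        }
        MPI_Recv(cmd, 1, MPI_INT, 0, status.MPI_TAG, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }
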
11 votes, 1 answer

What does it mean to configure MPI for shared memory?

I have a bit of a research-related question. Currently I have finished the implementation of a structure skeleton framework based on MPI (specifically using openmpi 6.3). The framework is supposed to be used on a single machine. Now, I am comparing it with…
LeTex • 1,452
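
In the setting of the question above, "configuring MPI for shared memory" usually just means making sure that messages between ranks on the same machine go through Open MPI's shared-memory transport instead of TCP loopback. In the 1.x series that transport is the sm BTL, and it can be selected explicitly at run time (the process count and program name are placeholders):

    # restrict the byte-transfer layers to self + shared memory on one machine
    mpirun --mca btl self,sm -np 8 ./my_skeleton_app
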
10 votes, 1 answer

GPU allocation in Slurm: --gres vs --gpus-per-task, and mpirun vs srun

There are two ways to allocate GPUs in Slurm: either the general --gres=gpu:N parameter, or the specific parameters like --gpus-per-task=N. There are also two ways to launch MPI tasks in a batch script: either using srun, or using the usual mpirun…
Jakub Klinkovský • 1,248
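
A hedged sketch of the two request styles from the question above (node and task counts and the program name are placeholders; exact binding behaviour depends on the Slurm version and site configuration). --gres=gpu:N asks for GPUs per node, --gpus-per-task ties GPUs to tasks, and srun starts the MPI ranks under Slurm's own process manager, while mpirun relies on Open MPI's Slurm integration:

    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=4
    #SBATCH --gpus-per-task=1        # or: #SBATCH --gres=gpu:4   (4 GPUs per node)

    # Launch one MPI rank per task; srun inherits the allocation above.
    srun ./mpi_gpu_app
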
10 votes, 4 answers

OpenMPI MPI_Barrier problems

I'm having some synchronization issues using the Open MPI implementation of MPI_Barrier: int rank; int nprocs; int rc = MPI_Init(&argc, &argv); if(rc != MPI_SUCCESS) { fprintf(stderr, "Unable to set up MPI"); MPI_Abort(MPI_COMM_WORLD,…
hola • 105
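
The code excerpt in the question above is cut off; for reference, a complete minimal pattern of the same shape (a generic sketch, not the asker's program). Note that MPI_Barrier only orders the processes' entry into and exit from the barrier; it does not order their printf output on the terminal.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, nprocs;
        int rc = MPI_Init(&argc, &argv);
        if (rc != MPI_SUCCESS) {
            fprintf(stderr, "Unable to set up MPI\n");
            MPI_Abort(MPI_COMM_WORLD, rc);
        }
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        printf("rank %d of %d: before the barrier\n", rank, nprocs);

        /* No process leaves the barrier until every process in the
         * communicator has entered it. */
        MPI_Barrier(MPI_COMM_WORLD);

        printf("rank %d of %d: after the barrier\n", rank, nprocs);
        MPI_Finalize();
        return 0;
    }
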
10 votes, 1 answer

using MPI with docker containers

I have created a docker image based on Ubuntu 16.04 and with all the dependencies needed to run MPI. It is public on docker-hub at: https://hub.docker.com/r/orwel84/ubuntu-16-mpi/ I use this image to create an MPI container. I can also compile a…
revolutionary • 3,314
10 votes, 1 answer

Difference between mpif90 and mpifort

What is the difference between these two compilers, mpif90 and mpifort? Both seem to be for Fortran 90 code. Both got installed when I installed Open MPI on Linux. Is the usage (compiler options) different?
boxofchalk1 • 493
10 votes, 1 answer

InfiniBand: transfer rate depends on MPI_Test* frequency

I'm writing a multi-threaded OpenMPI application, using MPI_Isend and MPI_Irecv from several threads to exchange hundreds of messages per second between ranks over InfiniBand RDMA. Transfers are in the order of 400 - 800KByte, generating about 9…
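
Background relevant to the question above: in most Open MPI configurations, nonblocking transfers progress mainly while the application is inside an MPI call, so the frequency of MPI_Test*/MPI_Wait* calls directly affects how quickly large rendezvous/RDMA transfers complete. A sketch of a loop that interleaves computation with polling (do_some_work is a placeholder for application code):

    #include <mpi.h>

    /* Poll a set of outstanding nonblocking requests between slices of
     * computation so the MPI library gets regular chances to progress the
     * transfers. do_some_work() is a placeholder for application code. */
    void compute_while_polling(MPI_Request *reqs, int nreqs,
                               void (*do_some_work)(void))
    {
        int done = 0;
        while (!done) {
            do_some_work();   /* a small slice of application work */
            /* MPI_Testall both checks for completion and lets Open MPI
             * drive the pending transfers forward. */
            MPI_Testall(nreqs, reqs, &done, MPI_STATUSES_IGNORE);
        }
    }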