Questions tagged [mpi]

MPI is the Message Passing Interface, a library specification for distributed-memory parallel programming and the de facto standard method for using distributed-memory clusters for high-performance technical computing. Questions about using MPI for parallel programming go under this tag; questions on, e.g., installation problems with MPI implementations are best tagged with the appropriate implementation-specific tag, e.g. MPICH or OpenMPI.

The official documents for MPI can be found at the web pages of the MPI Forum; a useful overview is given on the Wikipedia page for MPI. The current version of the MPI standard is 3.0; the Forum is currently working on version 3.1, which will have smaller updates and errata fixes, and version 4.0, which will have significant additions and enhancements.

Open source MPI libraries that implement the current standard include MPICH and Open MPI.

Versions for most common platforms can be downloaded from those projects' websites; platform-specific implementations are also available from various vendors.

A number of excellent tutorials for learning the basics of MPI programming can be found online, typically at the websites of supercomputing centres.

Definitive Book Guide

  1. An Introduction to Parallel Programming - Peter Pacheco
  2. Parallel Programming in C with MPI and OpenMP - Michael J. Quinn
  3. MPI: The Complete Reference (Volume 2) - William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing L. Lusk, Bill Nitzberg, William Saphir, Marc Snir
  4. Using MPI: Portable Parallel Programming with the Message-Passing Interface - William Gropp, Ewing Lusk, Anthony Skjellum
6963 questions
27 votes • 4 answers

OpenACC vs OpenMP & MPI differences?

I was wondering what the major differences between OpenACC and OpenMP are. What about MPI, CUDA and OpenCL? I understand the differences between OpenMP and MPI, especially the part about shared and distributed memory. Do any of them allow for a… (a minimal sketch contrasting the two models follows this entry)
Sid5427 • 721 • 3 • 11 • 19
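The distinction the question draws comes down to where the data lives: OpenMP threads share one address space on a node, while MPI ranks each own their memory and communicate explicitly, and the two are often combined. Below is a minimal hybrid sketch, not an authoritative benchmark: OpenMP threads work inside each MPI rank; the array size N and the series being summed are arbitrary choices for illustration.

```c
/* Hybrid MPI + OpenMP sketch: MPI splits the iteration range across
   ranks (distributed memory); OpenMP threads share 'local' within a
   rank (shared memory). Build with e.g. mpicc -fopenmp (flag varies). */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* MPI part: each rank owns a contiguous slice of the iterations. */
    long chunk = N / size, lo = rank * chunk;
    long hi = (rank == size - 1) ? N : lo + chunk;

    double local = 0.0;
    /* OpenMP part: threads within this rank share the reduction. */
    #pragma omp parallel for reduction(+:local)
    for (long i = lo; i < hi; i++)
        local += 1.0 / (double)(i + 1);

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %f\n", global);
    MPI_Finalize();
    return 0;
}
```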
26 votes • 7 answers

What is the best tutorial for learning MPI for C++?

I plan to use MPI for my C++ code. I have installed MPICH2 on my computers, but I do not know much about MPI and hope to find some materials to read. I hope you experts can recommend some good materials to me. Any advice will be appreciated.
Jackie • 1,071 • 2 • 12 • 17
26 votes • 3 answers

Difference between MPI_Allgather and MPI_Alltoall functions?

What is the main difference between the MPI_Allgather and MPI_Alltoall functions in MPI? I mean, can someone give me examples where MPI_Allgather will be helpful and MPI_Alltoall will not, and vice versa? I am not able to understand the main… (a minimal sketch follows this entry)
Kranthi Kumar • 1,184 • 3 • 13 • 26
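Since this question comes up often, a minimal sketch of the two calls side by side may help; the values each rank contributes are made up purely for illustration. With MPI_Allgather every rank contributes one buffer and every rank ends up with the same concatenation of all contributions; with MPI_Alltoall rank i sends a different item to every rank j (and receives a different item from each), which is why it appears in distributed transposes and data redistribution where Allgather does not fit.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int mine = rank * 10;                 /* one value per rank */
    int *gathered = malloc(size * sizeof(int));
    /* Every rank receives the same [0, 10, 20, ...]. */
    MPI_Allgather(&mine, 1, MPI_INT, gathered, 1, MPI_INT, MPI_COMM_WORLD);

    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int j = 0; j < size; j++)
        sendbuf[j] = rank * 100 + j;      /* a distinct item for each peer */
    /* Rank i's recvbuf[j] is rank j's sendbuf[i]: a distributed transpose. */
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    printf("rank %d: gathered[0]=%d recv[0]=%d\n", rank, gathered[0], recvbuf[0]);
    free(gathered); free(sendbuf); free(recvbuf);
    MPI_Finalize();
    return 0;
}
```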
25 votes • 7 answers

MPI for multicore?

With the recent buzz on multicore programming, is anyone exploring the possibilities of using MPI?
25 votes • 4 answers

What is the best MPI implementation?

I have to implement an MPI system in a cluster. If anyone here has any experience with MPI (MPICH/OpenMPI), I'd like to know which is better and how the performance can be boosted on a cluster of x86_64 boxes.
prasanna • 1,887 • 3 • 20 • 26
25 votes • 1 answer

MPI vs GPU vs Hadoop, what are the major differences between these three kinds of parallelism?

I know that some machine learning algorithms, like random forest, should by nature be implemented in parallel. I did some homework and found these three parallel programming frameworks, so I am interested in knowing what the major…
user974270 • 627 • 3 • 8 • 18
24 votes • 3 answers

Sending and receiving 2D array over MPI

The issue I am trying to resolve is the following: the C++ serial code I have computes across a large 2D matrix. To optimize this process, I wish to split this large 2D matrix and run it on 4 nodes (say) using MPI. The only communication that occurs… (a minimal sketch follows this entry)
Ashmohan • 491 • 1 • 11 • 22
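The usual answer here is to allocate the matrix contiguously so that a block of rows can travel in a single message. The sketch below assumes two ranks and hypothetical ROWS/COLS dimensions, and is meant only to illustrate the pattern (run with mpirun -np 2); it is not the asker's actual decomposition.

```c
/* Send a block of rows of a 2D matrix in one message, assuming the
   matrix is stored contiguously (one malloc, indexed i*COLS + j),
   not as an array of row pointers. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define ROWS 8
#define COLS 8

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *data = malloc(ROWS * COLS * sizeof(double));

    if (rank == 0) {
        for (int i = 0; i < ROWS * COLS; i++) data[i] = (double)i;
        /* The top ROWS/2 rows are one contiguous run of doubles. */
        MPI_Send(data, (ROWS / 2) * COLS, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(data, (ROWS / 2) * COLS, MPI_DOUBLE, 0, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 got row 0: %f ... %f\n", data[0], data[COLS - 1]);
    }
    free(data);
    MPI_Finalize();
    return 0;
}
```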
24 votes • 5 answers

How to use MPI on Mac OS X

I have been searching for a way to use MPI on my Mac, but everything I find is very advanced. I have successfully installed Open MPI using brew install open-mpi. I have .c files ready for compiling and running. When I type: mpicc -o… (a minimal sketch follows this entry)
jjCS • 241 • 1 • 2 • 3
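Once brew install open-mpi has succeeded, the rest is just compiling with the mpicc wrapper and launching with mpirun. A minimal hello-world to test the installation might look like the following; the file name hello.c and the process count 4 are arbitrary choices.

```c
/* hello.c: the usual first MPI program. Typically built and run as:
     mpicc -o hello hello.c
     mpirun -np 4 ./hello
*/
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```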
24 votes • 1 answer

Difference between MPI_Send() and MPI_Ssend()?

I know MPI_Send() is a blocking call, which waits until it is safe to modify the application buffer for reuse. To make the send call synchronous (there should be a handshake with the receiver), we need to use MPI_Ssend(). I want to know the… (a minimal sketch follows this entry)
Ankur Gautam • 1,412 • 5 • 15 • 27
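A small sketch may make the distinction concrete: both calls return only when the send buffer is safe to reuse, but MPI_Send is allowed to complete by copying the message into an internal buffer, whereas MPI_Ssend completes only after the matching receive has started, i.e. it forces the handshake the question mentions. The tags and payload below are arbitrary; run with two ranks.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int msg = 42;
    if (rank == 0) {
        /* May return before rank 1 posts its receive, if the
           implementation buffers the message internally. */
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        /* Returns only once rank 1's matching receive has begun;
           handy for flushing out deadlock-prone communication patterns. */
        MPI_Ssend(&msg, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&msg, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}
```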
22 votes • 4 answers

How can I pipe stdin from a file to the executable in Xcode 4+?

I have an MPI program and managed to compile and link it via Xcode 4. Now I want to debug it using Xcode 4. How can I pipe the standard input to the program from a file? In the terminal I would type mpirun -np 2 program < input.txt. I am able to run the…
sta • 456 • 1 • 3 • 10
22 votes • 8 answers

shared memory, MPI and queuing systems

My Unix/Windows C++ app is already parallelized using MPI: the job is split across N CPUs, each chunk is executed in parallel, it is quite efficient with very good speed scaling, and the job is done right. But some of the data is repeated in each process, and… (a minimal sketch follows this entry)
Blklight • 1,633 • 1 • 15 • 14
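One MPI-3 answer to "the same read-only data is duplicated in every process" is a shared-memory window, so ranks on the same node map one physical copy. The sketch below is a minimal illustration of that technique (the array size N is arbitrary), not a drop-in fix for the asker's app.

```c
/* MPI-3 shared-memory window: ranks on one node keep ONE copy of
   read-only data instead of one copy per process. */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Communicator of ranks that can share memory (i.e. one node). */
    MPI_Comm node;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node);
    int nrank;
    MPI_Comm_rank(node, &nrank);

    /* Node-rank 0 allocates the whole array; the others allocate 0
       bytes and query a pointer into the same physical memory. */
    double *base;
    MPI_Win win;
    MPI_Aint bytes = (nrank == 0) ? N * sizeof(double) : 0;
    MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL,
                            node, &base, &win);
    if (nrank != 0) {
        MPI_Aint sz; int disp;
        MPI_Win_shared_query(win, 0, &sz, &disp, &base);
    }

    if (nrank == 0)
        for (int i = 0; i < N; i++) base[i] = (double)i;  /* fill once */
    MPI_Barrier(node);   /* simple sync before the read-only phase */

    printf("node rank %d reads base[10] = %f\n", nrank, base[10]);

    MPI_Win_free(&win);
    MPI_Comm_free(&node);
    MPI_Finalize();
    return 0;
}
```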
22 votes • 1 answer

Convert a libc backtrace to a source line number

I have an MPI application which combines both C and Fortran sources. Occasionally it crashes due to a memory-related bug, but I am having trouble finding the bug (it is somewhere in someone else's code, which at the moment I'm not very familiar… (a minimal sketch follows this entry)
davepc • 320 • 2 • 10
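One common route, and the one sketched below, is to print raw frame addresses with glibc's backtrace facilities and then map them to file:line with addr2line, provided the binary was compiled with -g. The program here is only a stand-in for the asker's app.

```c
/* trace.c: compile with debug info, e.g.  mpicc -g -o trace trace.c
   (plain gcc works the same way; MPI is incidental here). */
#include <execinfo.h>

static void report(void) {
    void *frames[32];
    int n = backtrace(frames, 32);        /* capture raw return addresses */
    /* Prints lines like "./trace(+0x1149)"; feeding that hex offset to
       "addr2line -e ./trace 0x1149" recovers the source file and line. */
    backtrace_symbols_fd(frames, n, 2);   /* 2 = stderr */
}

int main(void) {
    report();    /* in a real app, call this from a crash signal handler */
    return 0;
}
```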
22 votes • 1 answer

Tools to measure MPI communication costs

I'm using MPI and I want to measure the communication costs, so that I can then compare them to the 'processing' costs, e.g. how much time I need to scatter a list across n processes and then compare it to how much time I need to sort it. Does… (a minimal sketch follows this entry)
dx_mrt • 707 • 7 • 13
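The simplest tool is the one MPI itself provides, MPI_Wtime; dedicated profilers give finer detail, but bracketing the communication call already separates it from the processing cost. In this sketch, PER_RANK elements per process is an arbitrary choice and the local sort is left as a placeholder.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define PER_RANK 100000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *all = NULL;
    double *mine = malloc(PER_RANK * sizeof(double));
    if (rank == 0) {
        all = malloc((size_t)size * PER_RANK * sizeof(double));
        for (long i = 0; i < (long)size * PER_RANK; i++)
            all[i] = rand() / (double)RAND_MAX;
    }

    MPI_Barrier(MPI_COMM_WORLD);          /* start everyone together */
    double t0 = MPI_Wtime();
    MPI_Scatter(all, PER_RANK, MPI_DOUBLE, mine, PER_RANK, MPI_DOUBLE,
                0, MPI_COMM_WORLD);
    double t_comm = MPI_Wtime() - t0;     /* communication cost */

    t0 = MPI_Wtime();
    /* ... local work on 'mine' (e.g. sorting) would go here ... */
    double t_comp = MPI_Wtime() - t0;     /* processing cost */

    if (rank == 0)
        printf("scatter: %g s, local work: %g s\n", t_comm, t_comp);
    free(mine); free(all);
    MPI_Finalize();
    return 0;
}
```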
21 votes • 3 answers

MPI global execution time

I'm working on a little application that multiplies an array by a matrix. It works without any problem. I'm looking to measure the execution time of the application. I can find the individual execution time of each process (its starting and… (a minimal sketch follows this entry)
jomaora • 1,656 • 3 • 17 • 26
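A common way to get one global figure rather than per-process times is to start everyone from a barrier, let each rank time its own work with MPI_Wtime, and max-reduce the results, since the slowest rank determines the wall clock. The sleep below merely stands in for the real matrix work.

```c
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);       /* common starting line */
    double t0 = MPI_Wtime();

    usleep(1000 * (rank + 1));         /* stand-in for the real work */

    double local = MPI_Wtime() - t0;   /* this rank's elapsed time */
    double global;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global execution time: %g s\n", global);
    MPI_Finalize();
    return 0;
}
```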
21 votes • 2 answers

Kubernetes and MPI

I want to run an MPI job on my Kubernetes cluster. The context is that I'm actually running a modern, nicely containerised app, but part of the workload is a legacy MPI job which isn't going to be rewritten anytime soon, and I'd like to fit it into…
Ben • 843 • 8 • 21