Questions tagged [mpi]

MPI is the Message Passing Interface, a library for distributed-memory parallel programming and the de facto standard method for using distributed-memory clusters for high-performance technical computing. Questions about using MPI for parallel programming go under this tag; questions on, e.g., installation problems with MPI implementations are best tagged with the appropriate implementation-specific tag, e.g. MPICH or OpenMPI.

The official documents for MPI can be found at the web pages of the MPI Forum; a useful overview is given on the Wikipedia page for MPI. The current version of the MPI standard is 3.0; the Forum is currently working on version 3.1, which will have smaller updates and errata fixes, and version 4.0, which will have significant additions and enhancements.

Open-source MPI libraries that implement the current standard include:

  - MPICH
  - Open MPI

Versions for most common platforms can be downloaded from the links above. Platform-specific implementations are also available from various vendors.

A number of excellent tutorials for learning the basics of MPI programming can be found online, typically at the websites of supercomputing centres.

Definitive Book Guide

  1. An Introduction to Parallel Programming - Peter Pacheco
  2. Parallel Programming in C with MPI and OpenMP - Michael J. Quinn
  3. MPI: The Complete Reference (Volume 2) - William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing L. Lusk, Bill Nitzberg, William Saphir, Marc Snir
  4. Using MPI: Portable Parallel Programming with the Message-Passing Interface - William Gropp, Ewing Lusk, Anthony Skjellum
6963 questions
2 votes, 1 answer

Fast graph partitioning for MPI parallelism

I am new to graph partitioning, but I think the question I am asking should already have a good answer. I just want to partition a huge network (billions of nodes) into a few sub-graphs, so that when using MPI each sub-graph is processed by a different…
2 votes, 1 answer

Is it possible to change the counts of MPI_Probe using PMPI interface?

Suppose I have two processes, A and B. A is sending some numbers to B, but B doesn't know how many. B uses MPI_Probe to find out the number of items and then allocates a buffer to receive those numbers. I am intercepting the send and receive…
Shafi
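The probe-then-allocate pattern this question describes is a standard idiom. A minimal sketch follows, assuming a sender rank `src`, a tag `tag`, and `MPI_INT` payloads; the MPI calls are left as comments so the fragment compiles without an MPI installation:

```c
#include <stdlib.h>

/* Bytes needed for a receive buffer holding `count` ints, where `count`
 * is what MPI_Get_count reports. Pure helper so this sketch builds
 * without MPI headers. */
static size_t recv_buffer_bytes(int count) {
    return (size_t)count * sizeof(int);
}

/* Receiving side, sketched:
 *
 *   MPI_Status st;
 *   int count;
 *   MPI_Probe(src, tag, MPI_COMM_WORLD, &st);   // wait until a message is pending
 *   MPI_Get_count(&st, MPI_INT, &count);        // element count of that message
 *   int *buf = malloc(recv_buffer_bytes(count));
 *   MPI_Recv(buf, count, MPI_INT, src, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
 *
 * A PMPI-based tool can intercept this the same way it intercepts sends
 * and receives: define MPI_Probe yourself and call PMPI_Probe inside it;
 * the count the application sees comes from the MPI_Status you hand back.
 */
```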
2 votes, 1 answer

Create MPI type for struct containing dynamic array

I'm trying to send a struct which has a dynamic array as one of its members, but this array doesn't seem to be sent properly. Any suggestion on how to do this? This is what I have: struct bar { int a; int b; int* c; }; void…
ada lee
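The usual reason this fails: a derived datatype built with MPI_Type_create_struct can only describe the fixed-size members, while the pointer `c` refers to memory outside the struct, so the pointed-to array has to travel separately or be packed into one contiguous buffer. A minimal packing sketch, where the explicit length field `n` and the helper name are assumptions and the MPI call is left as a comment:

```c
#include <stdlib.h>
#include <string.h>

/* The struct from the question, extended with an explicit length `n`
 * for the dynamic part (the receiver has to learn the length somehow). */
struct bar { int a; int b; int n; int *c; };

/* Pack a `bar` into one contiguous buffer: [a, b, n, c[0..n-1]].
 * Returns a malloc'd array; *len receives its element count. */
static int *pack_bar(const struct bar *s, int *len) {
    *len = 3 + s->n;
    int *buf = malloc((size_t)*len * sizeof(int));
    buf[0] = s->a;
    buf[1] = s->b;
    buf[2] = s->n;
    memcpy(buf + 3, s->c, (size_t)s->n * sizeof(int));
    return buf;
}

/* Sender (sketch):  MPI_Send(buf, len, MPI_INT, dest, tag, comm);
 * Receiver: probe for the message size, read a, b, n from the first
 * three slots, then copy the remaining n ints into freshly allocated
 * memory for the new struct's c. */
```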
2 votes, 1 answer

How do we compute the `MPI_Graph_create` index array?

Can anyone explain, in plain simple English, the index argument of MPI_Graph_create(MPI_Comm comm_old, int nnodes, const int index[], const int edges[], int reorder, MPI_Comm *comm_graph)? I have been analyzing the MPI_Graph_create…
Walker
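For reference: in MPI_Graph_create, index[i] is the cumulative degree, the total number of neighbours of nodes 0 through i, so node i's neighbour list occupies edges[index[i-1] .. index[i]-1]. A sketch of the computation in plain C, with the actual MPI call left as a comment so the fragment compiles without MPI:

```c
/* index[i] = total number of edges of nodes 0..i (cumulative degrees). */
static void build_graph_index(int nnodes, const int *degree, int *index) {
    int total = 0;
    for (int i = 0; i < nnodes; i++) {
        total += degree[i];
        index[i] = total;
    }
}

/* For a 4-node ring (each node has 2 neighbours):
 *   degree = {2, 2, 2, 2}          ->  index = {2, 4, 6, 8}
 *   edges  = {1,3, 0,2, 1,3, 0,2}  (adjacency lists, concatenated)
 * and the call would be
 *   MPI_Graph_create(MPI_COMM_WORLD, 4, index, edges, 0, &comm_graph);
 */
```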
2 votes, 1 answer

Set MPI to run on a range of ports

I have mpirun (Open MPI) 1.8.7 on a CentOS 7 cluster. To set up my firewall between the nodes, I need to know which ports MPI uses. Or can I set a specific port range on the mpirun command? Looking at the man page,…
east.charm
2 votes, 0 answers

OpenMPI: Multiple Ethernet Interfaces Failure

I wanted to set up a 3-node ring network, where each node connects to the other 2 directly over 2 Ethernet ports, without a switch/router. The interface configuration looks like this: I've used ifconfig on each node to configure each port, and made sure I can…
Shawn
2 votes, 2 answers

OpenMPI: package mpi does not exist

I work with Open MPI. I want to run Hello.java and Ring.java from the examples here. I compile Hello.java with this line: javac Hello.java Then I can run it with mpirun. But when I compile it, I get this error: Hello.java:25: error: package mpi…
Ibo
2 votes, 2 answers

Adding an array using MPI in C language

I am very new to MPI programming. In the following code, I am trying to add the first 3 elements using process 1 and the last 3 elements using process 2. The result should show "Sum is 11" for the first three elements and "Sum is 18" for the last three. I…
D P.
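The usual shape of such a program: each rank sums its own block of the array, then the partial sums are combined, either with a reduction or, as in the question, with explicit sends to a root rank. A sketch of the arithmetic, where the sample data is hypothetical, chosen only to match the sums 11 and 18 quoted above, and the MPI calls are left as comments:

```c
/* Sum the elements data[lo..hi). */
static int sum_range(const int *data, int lo, int hi) {
    int s = 0;
    for (int i = lo; i < hi; i++)
        s += data[i];
    return s;
}

/* Each rank sums its own 3-element block (sketch):
 *
 *   int local = sum_range(data, rank * 3, rank * 3 + 3);
 *   MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
 *
 * or, as in the question, processes 1 and 2 each MPI_Send their partial
 * sum to process 0, which receives and prints them. With hypothetical
 * data {3, 4, 4, 5, 6, 7}, the first block sums to 11 and the second to 18. */
```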
2 votes, 0 answers

MPI_Comm_spawn called multiple times

We are writing a code to solve a non-linear problem using an iterative method (Newton). The problem is that we don't know a priori how many MPI processes will be needed from one iteration to the next, due to e.g. remeshing, adaptivity, etc. And…
bertbk
2 votes, 1 answer

MPI_Allreduce mixes elements in the sum

I am parallelising a Fortran code which works without problems in its non-MPI version. Below is an excerpt of the code. Every processor does the following: for a certain number of particles it evolves certain quantities in the loop "do 203"; in a…
Fred
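Worth noting when debugging a report like this: MPI_Allreduce with count > 1 reduces element-wise, so result[i] combines only position i from every rank and positions are never mixed with each other. Apparent "mixing" therefore usually means the ranks fill their send buffers in different orders. A plain-C simulation of the MPI_SUM semantics over a flattened per-rank buffer (helper name and layout are illustrative):

```c
/* Simulate MPI_Allreduce(in, out, count, ..., MPI_SUM, ...) over `nranks`
 * contributions stored back-to-back: in[r*count + i] is rank r's element i.
 * out[i] sums position i across ranks; position i never mixes with j. */
static void allreduce_sum(int nranks, int count, const double *in, double *out) {
    for (int i = 0; i < count; i++) {
        out[i] = 0.0;
        for (int r = 0; r < nranks; r++)
            out[i] += in[r * count + i];
    }
}
```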
2 votes, 0 answers

MPI: drop a message without processing it

In MPI I can see if there are any messages waiting for a given process with MPI_Probe. That will tell me who sent the message, its tag and, importantly, how big it is. If for some reason a sender tries to send me a message that is too large (I…
Rob Latham
2 votes, 1 answer

High-performance computing in .NET/C#

I currently have a back-testing framework written in C# that is used to back-test trading strategies. It takes a few hours to run on a single computer. I would like to use high-performance computing technology to cut the run time to a few minutes. I tried…
user977606
2 votes, 3 answers

Clang static analysis with MPI

I would like to use Clang's static analyzer for analyzing parallel code, i.e., code which needs the MPI compiler wrappers. When configuring with CMake, however, I always get $ scan-build cmake /path/to/source -- Check for working CXX compiler:…
Nico Schlömer
2 votes, 1 answer

As one MPI process executes MPI_Barrier(), other processes hang

I have an MPI program in which multiple processes read from a file that contains a list of file names; based on the file names read, each process reads the corresponding file and counts the frequency of words. If one of the processes completes this and…
Dhanashree
2 votes, 2 answers

Why does this sample code (F90, MPI, derived types) cause invalid reads/writes (valgrind or dmalloc)?

This is the incriminated code (it is related to another question I asked, here): program foo use mpi implicit none type double_st sequence real(kind(0.d0)) :: x,y,z integer :: acc end type double_st integer, parameter ::…
janou195