Questions tagged [mpi]

MPI is the Message Passing Interface, a library for distributed-memory parallel programming and the de facto standard method for using distributed-memory clusters for high-performance technical computing. Questions about using MPI for parallel programming go under this tag; questions on, e.g., installation problems with MPI implementations are best tagged with the appropriate implementation-specific tag, e.g. MPICH or OpenMPI.

The official documents for MPI can be found at the webpages of the MPI forum; a useful overview is given on the Wikipedia page for MPI. The current version of the MPI standard is 3.0; the Forum is currently working on versions 3.1, which will have smaller updates and errata fixes, and 4.0, which will have significant additions and enhancements.

Open-source MPI libraries that implement the current standard include MPICH and Open MPI.

Versions for most common platforms can be downloaded from the projects' websites; platform-specific implementations are also available from various vendors.

A number of excellent tutorials for learning the basics of MPI programming can be found online, typically at the websites of supercomputing centres.

Definitive Book Guide

  1. An Introduction to Parallel Programming - Peter Pacheco.
  2. Parallel Programming in C with MPI and OpenMP - Michael J. Quinn
  3. MPI: The Complete Reference (Volume 2) - William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing L. Lusk, Bill Nitzberg, William Saphir, Marc Snir
  4. Using MPI: Portable Parallel Programming with the Message-Passing Interface - William Gropp, Ewing Lusk, Anthony Skjellum
6963 questions
2 votes, 0 answers

Optimize writing to shared file with MPI

In my MPI program, I need to write the results of some computation to a single (shared) file, where each MPI process writes its portion of the data at different offsets. Simple enough. I have implemented it like: offset = rank * sizeof(double) *…
user3452579
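
The excerpt cuts off before the full offset expression. As a rough illustration of the pattern it describes, where each rank writes its block of doubles at a rank-dependent offset in one shared file via MPI-IO, a minimal sketch might look like this (the block size N and the file name are assumptions, not the asker's values):

```c
/* Minimal sketch (not the asker's code): each rank writes its block of
 * doubles at a rank-dependent offset in one shared file using MPI-IO. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    enum { N = 1000 };              /* doubles per rank (assumed) */
    double data[N];
    for (int i = 0; i < N; i++)
        data[i] = rank + i * 1e-6;  /* placeholder results */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "results.bin",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes at offset = rank * N * sizeof(double). */
    MPI_Offset offset = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_write_at(fh, offset, data, N, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```

When every rank writes a large contiguous block, the collective variant MPI_File_write_at_all often lets the MPI-IO layer coordinate and merge the writes.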
2 votes, 1 answer

Infiniband vs. Gigabit Ethernet: how do I control which is used by an MPI program?

I have an MPI program that runs on a computer cluster that has both Ethernet and Infiniband connectivity. When I compile with MVAPICH2's mpicc, it automatically links to the Infiniband libraries. Is there a way to control which network is used…
irritable_phd_syndrome
2 votes, 2 answers

MPI: how to receive dynamic arrays from slave nodes?

I am new to MPI. I want to send three ints to three slave nodes to create dynamic arrays, and each array will be sent back to the master. According to this post, I modified the code, and it's close to the right answer. But I hit a breakpoint when…
just_rookie
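
The question text stops before the failing receive, but one common way to receive arrays whose lengths the master does not know in advance is MPI_Probe followed by MPI_Get_count; the sketch below assumes made-up lengths and values:

```c
/* Minimal sketch of one way to receive variable-length arrays on rank 0:
 * probe for the incoming message, query its size, allocate, then receive. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        for (int src = 1; src < size; src++) {
            MPI_Status st;
            int n;
            /* Learn the length of the next message from src before receiving it. */
            MPI_Probe(src, 0, MPI_COMM_WORLD, &st);
            MPI_Get_count(&st, MPI_INT, &n);

            int *buf = malloc(n * sizeof(int));
            MPI_Recv(buf, n, MPI_INT, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 0 got %d ints from rank %d\n", n, src);
            free(buf);
        }
    } else {
        int n = 2 * rank;                     /* each worker picks its own length */
        int *buf = malloc(n * sizeof(int));
        for (int i = 0; i < n; i++) buf[i] = rank;
        MPI_Send(buf, n, MPI_INT, 0, 0, MPI_COMM_WORLD);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}
```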
2 votes, 2 answers

Parallel calculation of the sum of an array with OpenMPI, and debugging tips

I am trying to convert a serial program to a parallel one using OpenMPI as practice. I used the following simple code to calculate the sum of an array, and tried to convert it to run on multiple nodes, but I'm getting an MPI_ERROR during runtime that…
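
For the sum itself, the usual MPI idiom is to let every rank add up its own slice and then combine the partial sums with MPI_Reduce. A minimal sketch of that pattern (with an assumed array size and stand-in values, not the asker's code) is:

```c
/* Minimal sketch of a parallel array sum: every rank sums its own slice,
 * then MPI_Reduce combines the partial sums on rank 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    enum { N = 1 << 20 };                 /* global number of elements (assumed) */
    /* Each rank works on a contiguous slice [lo, hi). */
    int lo = (int)((long long)N * rank / size);
    int hi = (int)((long long)N * (rank + 1) / size);

    double local = 0.0;
    for (int i = lo; i < hi; i++)
        local += 1.0;                     /* stand-in for the real array values */

    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %f (expected %d)\n", total, N);

    MPI_Finalize();
    return 0;
}
```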
2 votes, 1 answer

Dimensioning in MPI Scattering

So I am working on a simple matrix multiplication code using MPI. One of the problems I am facing is in scattering one of the matrices to all the processors. I am assuming that the dimension of my matrix might not be divisible by the number of…
Utsav Jain
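
When the matrix dimension is not divisible by the number of processes, MPI_Scatterv with per-rank counts and displacements is the standard tool. A minimal sketch, with assumed matrix dimensions and a row-wise distribution, follows:

```c
/* Minimal sketch: distributing N rows among `size` ranks with MPI_Scatterv
 * when N is not divisible by size.  Counts/displacements are in elements. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    enum { N = 10, M = 4 };          /* N rows of M doubles (assumed sizes) */
    double *A = NULL;
    if (rank == 0) {                 /* only the root owns the full matrix */
        A = malloc(N * M * sizeof(double));
        for (int i = 0; i < N * M; i++) A[i] = i;
    }

    /* Give each rank either N/size or N/size + 1 rows. */
    int *counts = malloc(size * sizeof(int));
    int *displs = malloc(size * sizeof(int));
    for (int r = 0, off = 0; r < size; r++) {
        int rows = N / size + (r < N % size ? 1 : 0);
        counts[r] = rows * M;        /* number of doubles for rank r */
        displs[r] = off;             /* start of rank r's block, in doubles */
        off += counts[r];
    }

    double *local = malloc(counts[rank] * sizeof(double));
    MPI_Scatterv(A, counts, displs, MPI_DOUBLE,
                 local, counts[rank], MPI_DOUBLE, 0, MPI_COMM_WORLD);

    printf("rank %d received %d doubles\n", rank, counts[rank]);

    free(local); free(counts); free(displs);
    if (rank == 0) free(A);
    MPI_Finalize();
    return 0;
}
```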
2 votes, 1 answer

MPI collective operations and process lifetime (C/C++)

For the problem I'd like to discuss, let's take MPI_Barrier as an example. The MPI-3 standard states: "If comm is an intracommunicator, MPI_BARRIER blocks the caller until all group members have called it. The call returns at any process only…"
sperber
2 votes, 0 answers

Getting an error using a second BCast with MPI in C

I'm having trouble with the following code: #include #include #include //1 show list //2 delete elements //3 sort list //4 add element int main(int argc, char **argv){ int rank; int *list; //List …
Sage Harpuia
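
A frequent cause of a crash on a second MPI_Bcast is that non-root ranks have not yet allocated the buffer being broadcast. The sketch below shows the usual two-step pattern, broadcasting the size first and the contents second (the list length is invented):

```c
/* Minimal sketch of two successive broadcasts: first the length, then the
 * array itself.  Non-root ranks allocate only after the first broadcast. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int n = 0;
    int *list = NULL;

    if (rank == 0) {
        n = 5;                               /* root decides the size */
        list = malloc(n * sizeof(int));
        for (int i = 0; i < n; i++) list[i] = 10 * i;
    }

    /* Broadcast 1: everyone learns the size. */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Non-root ranks can only allocate once they know n. */
    if (rank != 0)
        list = malloc(n * sizeof(int));

    /* Broadcast 2: everyone receives the contents. */
    MPI_Bcast(list, n, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d: list[%d] = %d\n", rank, n - 1, list[n - 1]);

    free(list);
    MPI_Finalize();
    return 0;
}
```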
2 votes, 1 answer

Are the Hockney model parameters functions of message size?

Using the Hockney model, the transfer time is modeled as t(m) = α + βm, where m is the message size in bytes, α is the latency for each message, and β is the transfer time per byte (the reciprocal of the network bandwidth). But from some papers (like this paper), latency and transfer…
voxter
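
For a sense of scale under assumed parameter values: with α = 50 µs and β = 1 ns/byte (a 1 GB/s link), the model predicts roughly 50 µs + 10^6 × 1 ns ≈ 1.05 ms for a 1 MB message, while a 100-byte message is dominated almost entirely by the 50 µs latency term.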
2 votes, 2 answers

MPI_Scatter: scattering the diagonal elements of a matrix

I am trying to solve a simple problem using the MPI library. A 4*N × 4*N matrix is stored on process 0; the length of each side of the matrix is DIM_LEN = 4*N. I need to create a diagonal datatype. However, instead of 4*N, the datatype should only…
jgm
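
In a row-major N × N matrix the diagonal entries sit N + 1 elements apart, so MPI_Type_vector can describe them directly. A minimal sketch (using a plain N × N matrix and a self-send for demonstration, not the question's 4*N block layout) is:

```c
/* Minimal sketch of a "diagonal" datatype: N blocks of 1 double, strided
 * N + 1 doubles apart, i.e. the main diagonal of a row-major matrix. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    enum { N = 8 };
    double A[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            A[i][j] = (i == j) ? i + 1.0 : 0.0;

    MPI_Datatype diag;
    MPI_Type_vector(N, 1, N + 1, MPI_DOUBLE, &diag);
    MPI_Type_commit(&diag);

    if (rank == 0) {
        double d[N];
        /* Send the strided diagonal to ourselves and receive it packed. */
        MPI_Sendrecv(&A[0][0], 1, diag, 0, 0,
                     d, N, MPI_DOUBLE, 0, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("diagonal: %g ... %g\n", d[0], d[N - 1]);
    }

    MPI_Type_free(&diag);
    MPI_Finalize();
    return 0;
}
```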
2 votes, 0 answers

Gathering a distributed array using MPI

I have an array that is distributed across 3 processes in a cyclic fashion, let's say these are their parts: Proc 0: {0.0, 0.0, 0.1, 0.1, 0.2} Proc 1: {1.0, 1.0, 1.1, 1.1} Proc 2: {2.0, 2.0, 2.1, 2.1} I would like to gather all of these into one…
Circus
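
One way to start is MPI_Gatherv with per-rank counts, as sketched below with placeholder values mimicking the 5/4/4 split in the question. Note that this lays each rank's elements out contiguously at the root, so restoring the global cyclic ordering would still need a reshuffle or a derived datatype on the receive side:

```c
/* Minimal sketch: each rank contributes a different number of doubles and
 * rank 0 collects them with MPI_Gatherv. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    enum { N = 13 };                      /* global length (5 + 4 + 4 when size == 3) */
    int local_n = N / size + (rank < N % size ? 1 : 0);

    double *local = malloc(local_n * sizeof(double));
    for (int i = 0; i < local_n; i++)
        local[i] = rank + 0.1 * (i / 2);  /* mimics the 0.0, 0.0, 0.1, ... pattern */

    int *counts = NULL, *displs = NULL;
    double *global = NULL;
    if (rank == 0) {
        counts = malloc(size * sizeof(int));
        displs = malloc(size * sizeof(int));
        for (int r = 0, off = 0; r < size; r++) {
            counts[r] = N / size + (r < N % size ? 1 : 0);
            displs[r] = off;
            off += counts[r];
        }
        global = malloc(N * sizeof(double));
    }

    MPI_Gatherv(local, local_n, MPI_DOUBLE,
                global, counts, displs, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < N; i++) printf("%g ", global[i]);
        printf("\n");
    }

    free(local);
    if (rank == 0) { free(counts); free(displs); free(global); }
    MPI_Finalize();
    return 0;
}
```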
2 votes, 0 answers

mpiexec won't run mpi4py script when two hosts are utilized in an MPI cluster established through LAN

I have a desktop PC that serves as my server (primesystem), and a laptop as my client (zerosystem) that is connected to it. They serve as my SSH server and SSH client respectively, and are connected through an Ethernet (not crossover)…
anobilisgorse
2 votes, 0 answers

Avoiding a succession of blocking MPI_Bcast calls

I am trying to improve one of the codes I use for numerical simulations. One of the computation steps requires computing several large arrays, whose computation is complex and costly. Currently, each array is computed by a specific…
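
Since MPI 3.0, successive blocking broadcasts can be replaced by non-blocking MPI_Ibcast calls that are all started first and completed together with MPI_Waitall, letting them overlap. A minimal sketch with an assumed number and size of arrays:

```c
/* Minimal sketch: start several MPI_Ibcast operations back to back instead
 * of issuing one blocking MPI_Bcast after another, then complete them all. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    enum { NARRAYS = 4, N = 1 << 16 };    /* assumed number and length of arrays */
    double *a[NARRAYS];
    MPI_Request req[NARRAYS];

    for (int k = 0; k < NARRAYS; k++) {
        int root = k % size;              /* array k is computed by rank k % size */
        a[k] = malloc(N * sizeof(double));
        if (rank == root)
            for (int i = 0; i < N; i++) a[k][i] = k;   /* stand-in "computation" */

        /* Start the broadcast but do not wait for it yet. */
        MPI_Ibcast(a[k], N, MPI_DOUBLE, root, MPI_COMM_WORLD, &req[k]);
    }

    /* All broadcasts progress together; wait for the lot at the end. */
    MPI_Waitall(NARRAYS, req, MPI_STATUSES_IGNORE);

    printf("rank %d: a[1][0] = %g\n", rank, a[1][0]);

    for (int k = 0; k < NARRAYS; k++) free(a[k]);
    MPI_Finalize();
    return 0;
}
```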
2 votes, 2 answers

Collective communication: benchmarks for measuring LogP model parameters

I am measuring LogP model parameters, and I want to find some benchmarks that measure them. I found a LogP benchmark in this paper: "Fast Measurement of Parameters for Message Passing Platforms" at this link: benchmark. But I can…
voxter
2 votes, 1 answer

Difference between Slurm sbatch -n and -c

The cluster that I work with recently switched from SGE to Slurm. I was wondering what the difference is between the sbatch options --ntasks and --cpus-per-task? --ntasks seemed appropriate for some MPI jobs that I ran but did not seem appropriate for…
2 votes, 2 answers

Why doesn't MPI_SEND work within my for loop? It works fine if explicitly stated

I'm trying to send a number to p-1 processes. Process 0 sends this value to all other processes. I use an MPI_Send call to do this. When I explicitly write out MPI_Send calls for 3 processes, it works fine. But when I want to put it in a loop,…
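
A minimal sketch of the loop version of this pattern, with a placeholder value and tag, is shown below; the key points are that rank 0 loops over destinations 1 to size-1 and that every other rank posts a matching MPI_Recv. (MPI_Bcast is the collective shortcut for exactly this one-to-all distribution.)

```c
/* Minimal sketch: rank 0 sends one int to every other rank inside a loop,
 * and each of the other ranks posts a matching receive. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int value = 42;                       /* placeholder payload */
    if (rank == 0) {
        /* One send per destination rank; dest must run from 1 to size-1. */
        for (int dest = 1; dest < size; dest++)
            MPI_Send(&value, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
    } else {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank %d received %d\n", rank, value);
    }

    MPI_Finalize();
    return 0;
}
```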