Questions tagged [mpi]

MPI is the Message Passing Interface, a library for distributed-memory parallel programming and the de facto standard method for using distributed-memory clusters for high-performance technical computing. Questions about using MPI for parallel programming go under this tag; questions on, e.g., installation problems with MPI implementations are best tagged with the appropriate implementation-specific tag, e.g. MPICH or OpenMPI.

The official documents for MPI can be found at the web pages of the MPI Forum; a useful overview is given on the Wikipedia page for MPI. The current version of the MPI standard is 3.0; the Forum is currently working on version 3.1, which will bring smaller updates and errata fixes, and on version 4.0, which will bring significant additions and enhancements.

Open-source MPI libraries that implement the current standard include MPICH and Open MPI.

Versions for most common platforms can be downloaded from the projects' websites; platform-specific implementations are also available from various vendors.
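
For orientation, here is a minimal C program in this model: every process runs the same code and identifies itself by its rank. This is a sketch; launch it with, e.g., mpiexec -n 4 ./hello (the executable name is illustrative).

```c
/* Minimal MPI "hello world": each process reports its rank. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);               /* set up the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id          */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes  */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                       /* clean up before exiting    */
    return 0;
}
```
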

A number of excellent tutorials for learning the basics of MPI programming can be found online, typically at the websites of supercomputing centres.

Definitive Book Guide

  1. An Introduction to Parallel Programming - Peter Pacheco
  2. Parallel Programming in C with MPI and OpenMP - Michael J. Quinn
  3. MPI: The Complete Reference (Volume 2) - William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing L. Lusk, Bill Nitzberg, William Saphir, Marc Snir
  4. Using MPI: Portable Parallel Programming with the Message-Passing Interface - William Gropp, Ewing Lusk, Anthony Skjellum
6,963 questions
2 votes, 1 answer

Boost mpi x64 warning "size_t to int"

I have built Boost with MPI successfully, but I get lots of warnings using Boost MPI on the x64 platform. I am using Boost 1.59.0 + VS2015. Please help me get rid of these warnings. Here's my test code: #include #include…
— asked by 定坤宋
2 votes, 0 answers

Communication between mpiexec and mpi4py not working?

I have written a script which I was running on an Ubuntu 14.04 LTS machine in Python 2.7 using mpi4py. Here is a snippet from the beginning: from mpi4py import MPI comm = MPI.COMM_WORLD rank = comm.Get_rank() size = comm.Get_size() print…
— asked by P-M
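
The snippet above is the standard rank/size query. The symptom in this kind of question commonly appears when mpiexec and the MPI library the program (here, mpi4py) was built against come from different installations, in which case every process reports rank 0 of 1. A minimal C sanity check along the same lines, illustrative only:

```c
/* Launcher sanity check: if mpiexec and the MPI library match, each process
 * prints a distinct rank; if every process prints "rank 0 of 1", the
 * launcher and the library come from different MPI installations. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);
    printf("rank %d of %d on %s\n", rank, size, host);
    MPI_Finalize();
    return 0;
}
```
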
2 votes, 1 answer

Sending partial MPI messages

To avoid allocating an intermediary buffer, it makes sense in my application that my MPI_Recv receives one single big array, but on the sending side, the data is non-contiguous, and I'd like it to make the data available to the network interface as…
— asked by lvella
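
A common way to avoid the intermediate buffer mentioned here is to describe the non-contiguous layout to MPI with a derived datatype, so a single send can expose the data directly. A minimal sketch, assuming the non-contiguous data is a strided matrix column and two ranks; all names and sizes are illustrative:

```c
/* Send a strided (non-contiguous) matrix column without packing it into an
 * intermediate buffer, by describing the layout with a derived datatype. */
#include <mpi.h>
#include <stdio.h>

#define ROWS 4
#define COLS 5

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        double matrix[ROWS][COLS];
        for (int i = 0; i < ROWS; i++)
            for (int j = 0; j < COLS; j++)
                matrix[i][j] = i * COLS + j;

        /* One column: ROWS blocks of 1 double, stride COLS doubles apart. */
        MPI_Datatype column;
        MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        MPI_Send(&matrix[0][2], 1, column, 1, 0, MPI_COMM_WORLD); /* column 2 */
        MPI_Type_free(&column);
    } else if (rank == 1) {
        double col[ROWS]; /* received contiguously as plain doubles */
        MPI_Recv(col, ROWS, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for (int i = 0; i < ROWS; i++)
            printf("col[%d] = %g\n", i, col[i]);
    }

    MPI_Finalize();
    return 0;
}
```
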
2 votes, 2 answers

Mpi4py mpi_test always returns false

I couldn't find a similar question here, so here goes: why does the following code always output (False, None)? Shouldn't it be (True, None), if test() was called 3 seconds after process 0 sent the message? Also, if I call req.wait() before…
— asked by Zlatan Sičanica
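
For reference, a C sketch of the nonblocking pattern this question exercises. One detail worth knowing: many MPI implementations only make communication progress inside MPI calls, so a single test issued after a delay may still report the request incomplete; polling MPI_Test in a loop (or calling MPI_Wait) is the usual pattern. Illustrative only, assuming two ranks:

```c
/* Poll a nonblocking receive with MPI_Test until it completes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0, flag = 0;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        while (!flag)
            MPI_Test(&req, &flag, MPI_STATUS_IGNORE); /* keeps MPI progressing */
        printf("received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```
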
2 votes, 2 answers

MPI, Python, Scatterv, and overlapping data

The MPI standard (3.0) says about MPI_Scatterv: "The specification of counts, types, and displacements should not cause any location on the root to be read more than once." However, my testing of mpi4py in Python with the code below does not…
— asked by bob.sacamento
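
For reference, a minimal C sketch of MPI_Scatterv with explicit counts and displacements that never overlap, which is what the quoted passage requires of the root's buffer; the sizes are illustrative (rank i receives i+1 integers):

```c
/* MPI_Scatterv with per-rank counts and non-overlapping displacements. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendbuf = NULL, *counts = NULL, *displs = NULL;
    if (rank == 0) {
        int total = size * (size + 1) / 2;
        sendbuf = malloc(total * sizeof(int));
        counts  = malloc(size * sizeof(int));
        displs  = malloc(size * sizeof(int));
        for (int i = 0, off = 0; i < size; i++) {
            counts[i] = i + 1;
            displs[i] = off;   /* strictly increasing: no location read twice */
            off += counts[i];
        }
        for (int i = 0; i < total; i++) sendbuf[i] = i;
    }

    int recvcount = rank + 1;
    int *recvbuf = malloc(recvcount * sizeof(int));
    MPI_Scatterv(sendbuf, counts, displs, MPI_INT,
                 recvbuf, recvcount, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d got %d ints starting with %d\n", rank, recvcount, recvbuf[0]);

    free(recvbuf);
    if (rank == 0) { free(sendbuf); free(counts); free(displs); }
    MPI_Finalize();
    return 0;
}
```
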
2 votes, 1 answer

Running mixed MPI executables on 32-bit and 64-bit processors

I am trying to make an MPI cluster following a tutorial, using Ubuntu 14.04 and a BeagleBoard-xM board. The problem is that my client is the BeagleBoard-xM, which has a 32-bit ARMv7 processor. I have created an executable using mpic++ -o…
— asked by srai
2 votes, 1 answer

Sending and receiving an array in MPI C

Here's how my code should work: slave nodes will perform some computations, and each node will send a value, minE, with a corresponding linear array, phi. The root node will then receive two values. I'm trying to figure out how I will store N-1 (number…
— asked by Rowel
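
A minimal C sketch of the pattern described: each worker sends a scalar, minE, and an array, phi, to rank 0, which stores one result per worker. PHI_LEN and the stand-in computation are assumptions added for illustration:

```c
/* Workers send a scalar and an array to the root; the root keeps one
 * slot per worker. */
#include <mpi.h>
#include <stdio.h>

#define PHI_LEN 4

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank != 0) {
        double minE = 100.0 / rank;            /* stand-in computation */
        double phi[PHI_LEN] = { rank, rank, rank, rank };
        MPI_Send(&minE, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        MPI_Send(phi, PHI_LEN, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
    } else {
        double minE[size];                     /* one slot per worker (C99 VLA) */
        double phi[size][PHI_LEN];
        for (int src = 1; src < size; src++) {
            MPI_Recv(&minE[src], 1, MPI_DOUBLE, src, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(phi[src], PHI_LEN, MPI_DOUBLE, src, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("from %d: minE=%g phi[0]=%g\n", src, minE[src], phi[src][0]);
        }
    }

    MPI_Finalize();
    return 0;
}
```
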
2 votes, 1 answer

Setting up SSH to connect two PCs and use MPI

I am here because I've run into various problems setting up SSH using the guide proposed in another question. First of all, I have a computer (I want to use it as the master) called timmy@timmy-Lenovo-G50-80. My other computer is a virtual machine…
— asked by Timmy
2 votes, 1 answer

Wrong values when reading a file with MPI-IO

Here is a simple C program reading a file in parallel with MPI IO: #include #include #include "mpi.h" #define N 10 main( int argc, char **argv ) { int rank, size; MPI_Init(&argc, &argv); MPI_Comm_rank(…
— asked by David Froger
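
For reference, a minimal MPI-IO sketch in which each rank reads its own disjoint block at an explicit byte offset; a frequent cause of wrong values in such programs is computing the offset in elements rather than bytes. The file name and block size are illustrative assumptions:

```c
/* Each rank reads its own block of a binary file of ints with MPI-IO. */
#include <mpi.h>
#include <stdio.h>

#define N 10  /* ints per rank */

int main(int argc, char **argv)
{
    int rank, buf[N];
    MPI_File fh;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_File_open(MPI_COMM_WORLD, "data.bin",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

    /* The offset is in bytes: skip the blocks owned by lower ranks. */
    MPI_Offset offset = (MPI_Offset)rank * N * sizeof(int);
    MPI_File_read_at(fh, offset, buf, N, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    printf("rank %d read first value %d\n", rank, buf[0]);
    MPI_Finalize();
    return 0;
}
```
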
2 votes, 2 answers

MPI_Reduce not transferring results to root process

I have a very simple MPI program to test the behavior of MPI_Reduce. My objectives are simple: start by having each process create a random number (range 1-100), then run the program with mpirun -np 5, and have process 0 find the sum…
— asked by Chisx
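
For reference, a minimal C sketch of the intended behavior: every rank draws a random number and MPI_Reduce sums them onto rank 0. Two details often behind the symptom described: the result buffer is only defined on the root, and the send and receive buffers must be distinct (or the root must use MPI_IN_PLACE):

```c
/* Sum one random int per process onto rank 0 with MPI_Reduce. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, local, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    srand(rank + 1);              /* different seed per rank */
    local = rand() % 100 + 1;     /* range 1-100 */

    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)                /* sum is only meaningful on the root */
        printf("sum of all ranks' values: %d\n", sum);

    MPI_Finalize();
    return 0;
}
```
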
2 votes, 1 answer

MPI scatter to distribute a large CSV file

I have a large CSV file and I need to process every row to count some words. I need to use some MPI approach to distribute the data processing among multiple processes. Currently, I'm using scatter/gather in the mpi4py library. The problem is that I need to…
— asked by stardiv
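
One alternative to scattering a big file from the root, sketched below in C under illustrative assumptions (file name, token counting): each rank opens the file itself and processes every size-th row round-robin, and the partial counts are combined with a reduction:

```c
/* Distribute CSV processing without scattering: each rank reads the file
 * and handles every "size"-th line, then the counts are reduced to rank 0. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, size;
    long local_count = 0, total = 0;
    char line[4096];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    FILE *f = fopen("input.csv", "r");
    if (f) {
        for (long i = 0; fgets(line, sizeof line, f); i++) {
            if (i % size != rank) continue;      /* not this rank's line */
            for (char *tok = strtok(line, " ,\n"); tok; tok = strtok(NULL, " ,\n"))
                local_count++;                   /* count tokens per line */
        }
        fclose(f);
    }

    MPI_Reduce(&local_count, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("total tokens: %ld\n", total);
    MPI_Finalize();
    return 0;
}
```
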
2 votes, 0 answers

How to compile and execute MPICH2 for Android

I am trying to cross-compile MPICH2 for Android. I found references here: http://hex.ro/wp/projects/personal-cloud-computing/compiling-mpich2-for-android-and-running-on-two-phones/ and www.scientificbulletin.upb.ro/rev_docs_arhiva/fullffc_583765. I…
— asked by Hemant Tiwari
2 votes, 1 answer

Issue with MPI spawn and merge

I am trying to get started on dynamic process creation in MPI. I have a parent code (main.c) trying to spawn new worker/child processes (worker.c) and merge both into one intracommunicator. The parent code (main.c) is #include #include…
— asked by marc
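
For reference, a minimal C sketch of the spawn-and-merge pattern: one binary acts as parent or child depending on the result of MPI_Comm_get_parent(), and MPI_Intercomm_merge turns the parent-child inter-communicator into a single intracommunicator. The executable name is an assumption:

```c
/* Spawn workers and merge parent and children into one intracommunicator. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm parent, inter, merged;
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* Parent: spawn 2 copies of this executable. */
        MPI_Comm_spawn("./spawn_demo", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &inter, MPI_ERRCODES_IGNORE);
        MPI_Intercomm_merge(inter, 0, &merged);  /* parent ordered first */
    } else {
        /* Child: merge with the parent, children ordered after it. */
        MPI_Intercomm_merge(parent, 1, &merged);
    }

    MPI_Comm_rank(merged, &rank);
    printf("rank %d in merged intracommunicator\n", rank);

    MPI_Comm_free(&merged);
    MPI_Finalize();
    return 0;
}
```
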
2 votes, 1 answer

How to set the number of tasks per node in Slurm based on a parameter passed to my program?

I want to set the number of tasks per node as a variable in Slurm, like #SBATCH --ntasks-per-node=s*2; (s is the number of sockets per node, which I pass as a parameter to my program). The code is as follows: a part of the test.c file: if (argc <…
— asked by Matrix
2 votes, 0 answers

MPI_Finalize() won't finalize if stdout and stderr are redirected via freopen

I have a problem using MPI with redirection of stdout and stderr. When launched with multiple processes, if both stdout and stderr are redirected to (two different) files, then every process gets stuck in MPI_Finalize(), waiting indefinitely.…
— asked by bertbk
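
For context, a minimal C sketch of the setup described, redirecting each rank's stdout and stderr with freopen before doing MPI work; the per-rank file names are an assumption added here so that processes do not clobber each other's output:

```c
/* Redirect this rank's stdout/stderr to files, then do MPI work; the
 * question reports ranks hanging in MPI_Finalize with this redirection. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    char out[64], err[64];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    snprintf(out, sizeof out, "out.%d.log", rank);
    snprintf(err, sizeof err, "err.%d.log", rank);
    freopen(out, "w", stdout);    /* per-rank stdout file */
    freopen(err, "w", stderr);    /* per-rank stderr file */

    printf("rank %d working\n", rank);
    fflush(stdout);               /* flush before finalizing */

    MPI_Finalize();
    return 0;
}
```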