Questions tagged [mpi]

MPI is the Message Passing Interface, a library for distributed-memory parallel programming and the de facto standard method for using distributed-memory clusters for high-performance technical computing. Questions about using MPI for parallel programming go under this tag; questions on, e.g., installation problems with MPI implementations are best tagged with the appropriate implementation-specific tag, e.g. MPICH or OpenMPI.
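To give a flavour of the programming model, here is the canonical minimal MPI program in C (compile with mpicc, launch with e.g. mpiexec -n 4 ./a.out):

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal MPI program: every process reports its rank. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }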

The official documents for MPI can be found on the web pages of the MPI Forum; a useful overview is given on the Wikipedia page for MPI. The current version of the MPI standard is 3.0; the Forum is currently working on version 3.1, which will have smaller updates and errata fixes, and on version 4.0, which will have significant additions and enhancements.

Open-source MPI libraries that implement the current standard include:

  • MPICH
  • Open MPI

Versions for most common platforms can be downloaded from the projects' websites. Platform-specific implementations are also available from various vendors.

A number of excellent tutorials for learning the basics of MPI programming can be found online, typically at the websites of supercomputing centres.

Definitive Book Guide

  1. An Introduction to Parallel Programming - Peter Pacheco
  2. Parallel Programming in C with MPI and OpenMP - Michael J. Quinn
  3. MPI: The Complete Reference (Volume 2) - William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing L. Lusk, Bill Nitzberg, William Saphir, Marc Snir
  4. Using MPI: Portable Parallel Programming with the Message-Passing Interface - William Gropp, Ewing Lusk, Anthony Skjellum
6963 questions
2 votes, 1 answer

MPI Distributed reading over a non-standard type

I am trying to read a binary file containing a sequence of char and double. (For example 0 0.125 1 1.4 0 2.3 1 4.5, but written in a binary file). I created a simple struct input, and also an MPI Datatype I will call mpi_input corresponding to this…
waffle
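A sketch of the usual approach for such a record type, assuming illustrative field names (the question only says the records hold a char and a double; note also that if the file is written packed, the file-side layout may differ from the in-memory struct):

    #include <stddef.h>
    #include <mpi.h>

    typedef struct {
        char   flag;
        double value;
    } input;

    /* Build an MPI datatype matching the in-memory layout of `input`. */
    MPI_Datatype make_mpi_input(void) {
        int          blocklens[2] = {1, 1};
        MPI_Aint     displs[2]    = {offsetof(input, flag), offsetof(input, value)};
        MPI_Datatype types[2]     = {MPI_CHAR, MPI_DOUBLE};
        MPI_Datatype tmp, mpi_input;

        MPI_Type_create_struct(2, blocklens, displs, types, &tmp);
        /* Resize so consecutive records are spaced sizeof(input) apart,
           accounting for any trailing padding in the struct. */
        MPI_Type_create_resized(tmp, 0, sizeof(input), &mpi_input);
        MPI_Type_commit(&mpi_input);
        MPI_Type_free(&tmp);
        return mpi_input;
    }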
2 votes, 1 answer

Persistent communication in MPI - odd behaviour

I am solving the coarsest grid of a parallel geometric multigrid solver using Jacobi iterations and the non-blocking calls MPI_Isend() and MPI_Irecv(). There are no problems with this. As soon as I replace the non-blocking communications with persistent…
Gaurav Saxena
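For reference, the usual persistent-request pattern looks like this; the neighbour ranks, buffers and counts here are illustrative, not taken from the question:

    #include <mpi.h>

    void halo_iterations(double *send_l, double *recv_l, double *send_r,
                         double *recv_r, int n, int left, int right,
                         int iters, MPI_Comm comm) {
        MPI_Request reqs[4];

        /* Set the communication pattern up once... */
        MPI_Send_init(send_l, n, MPI_DOUBLE, left,  0, comm, &reqs[0]);
        MPI_Send_init(send_r, n, MPI_DOUBLE, right, 1, comm, &reqs[1]);
        MPI_Recv_init(recv_r, n, MPI_DOUBLE, right, 0, comm, &reqs[2]);
        MPI_Recv_init(recv_l, n, MPI_DOUBLE, left,  1, comm, &reqs[3]);

        for (int it = 0; it < iters; ++it) {
            /* ...and restart it every iteration. No buffer may be
               touched between MPI_Startall and MPI_Waitall. */
            MPI_Startall(4, reqs);
            /* (compute on interior points here) */
            MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
        }

        for (int i = 0; i < 4; ++i)
            MPI_Request_free(&reqs[i]);
    }

A common source of odd behaviour is refilling a send buffer after MPI_Startall; with persistent requests the buffer must be ready before the start call.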
2 votes, 0 answers

system command not executing with mpiicc -O

I have Intel Parallel Studio XE Cluster Edition 2015 on my 10-node server connected with InfiniBand. I wrote my code in C. My code runs shell commands built with sprintf, like below: printf("started \n"); system("cp metis_input.txt…
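When system() misbehaves under an MPI launcher, checking its return value from a single rank is a quick first diagnostic; this sketch is hypothetical (the copy destination below is invented, since the original command is truncated):

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            printf("started\n");
            fflush(stdout);  /* flush before the fork inside system() */
            /* The destination name here is a placeholder. */
            int rc = system("cp metis_input.txt metis_input.bak");
            if (rc != 0)
                fprintf(stderr, "system() returned %d\n", rc);
        }
        MPI_Barrier(MPI_COMM_WORLD);  /* other ranks wait for the copy */
        MPI_Finalize();
        return 0;
    }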
2 votes, 0 answers

Parallel derivatives of multidimensional real data with FFTW

I would like to build a 2D MPI-parallel spectral differentiation code. The following piece of code seems to work fine for the x-derivative, both in serial and in parallel: alloc_local = fftw_mpi_local_size_2d(N0,N1,MPI_COMM_WORLD,&local_n0,…
JoeP
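For context, the slab-decomposed setup that the excerpt's call belongs to looks roughly like this (grid sizes are illustrative; the derivative itself is the usual multiply-by-ik in spectral space):

    #include <fftw3-mpi.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        fftw_mpi_init();

        const ptrdiff_t N0 = 256, N1 = 256;   /* illustrative sizes */
        ptrdiff_t local_n0, local_0_start;

        /* Each rank owns local_n0 contiguous rows of the N0 x N1 grid,
           starting at row local_0_start. */
        ptrdiff_t alloc_local = fftw_mpi_local_size_2d(
            N0, N1, MPI_COMM_WORLD, &local_n0, &local_0_start);
        fftw_complex *data = fftw_alloc_complex(alloc_local);

        fftw_plan fwd = fftw_mpi_plan_dft_2d(N0, N1, data, data,
            MPI_COMM_WORLD, FFTW_FORWARD, FFTW_ESTIMATE);

        /* d/dx: forward transform, multiply mode (k0,k1) by i*k0
           (with the usual wavenumber wrapping), inverse transform. */

        fftw_destroy_plan(fwd);
        fftw_free(data);
        fftw_mpi_cleanup();
        MPI_Finalize();
        return 0;
    }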
2 votes, 0 answers

learning prefix sum by tree reduction

I need to learn about prefix sum by tree reduction and write MPI code in C for that. I already know prefix sum by recursive doubling or scan, and have some background in programming with MPI. Here is the structure of tree reduction which I should…
Amir
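Whatever tree-reduction scheme is implemented by hand, MPI's built-in scan is a convenient reference to check it against:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int x = rank + 1;   /* each rank contributes one value */
        int prefix;

        /* Inclusive prefix sum: rank r gets x_0 + ... + x_r. */
        MPI_Scan(&x, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        printf("rank %d: inclusive prefix = %d\n", rank, prefix);

        MPI_Finalize();
        return 0;
    }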
2 votes, 2 answers

MPI_Scatterv doesn't work

I've written a program in C/MPI that simply splits an NxN matrix into submatrices (by rows) and then distributes them to all processes with the routine MPI_Scatterv. The dimension N is not necessarily a multiple of the number of processes. I decided to give one more…
Pax
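The standard recipe for this situation is to compute per-rank counts and displacements, giving the first N % size ranks one extra row; a minimal sketch (N is illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int N = 10;   /* illustrative dimension */
        int *counts = malloc(size * sizeof(int));
        int *displs = malloc(size * sizeof(int));
        for (int r = 0, off = 0; r < size; ++r) {
            int rows  = N / size + (r < N % size ? 1 : 0);
            counts[r] = rows * N;   /* counts are in elements, not rows */
            displs[r] = off;
            off += counts[r];
        }

        double *matrix = NULL;
        if (rank == 0) {            /* send buffer matters only at root */
            matrix = malloc((size_t)N * N * sizeof(double));
            for (int i = 0; i < N * N; ++i) matrix[i] = i;
        }

        double *local = malloc(counts[rank] * sizeof(double));
        MPI_Scatterv(matrix, counts, displs, MPI_DOUBLE,
                     local, counts[rank], MPI_DOUBLE, 0, MPI_COMM_WORLD);
        printf("rank %d got %d rows\n", rank, counts[rank] / N);

        free(local); free(counts); free(displs);
        if (rank == 0) free(matrix);
        MPI_Finalize();
        return 0;
    }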
2 votes, 2 answers

Can we run MPI programs on a single system, or is it imperative to run them on a cluster?

I have access to a clustered network at my college using PelicanHPC, where I run various MPI programs, but at home I want to practice writing/using other MPI programs. Is there a way that I can run MPI programs on my own system? (I work on Ubuntu…
Rahul
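Both MPICH and Open MPI run all ranks on a single machine with no cluster setup, e.g. mpiexec -n 4 ./myprogram, so a laptop is a perfectly good MPI practice environment.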
2 votes, 1 answer

Difference between MPICH2 and mpi4py

What is the difference between MPICH2 and mpi4py? I just installed MPICH2 on my Raspbian cluster. Do I need mpi4py as well?
mrlarssen
2 votes, 0 answers

MS-MPI application failed on more than one node

I have two virtual boxes with Windows 7. Their IPs are 10.0.0.20 and 10.0.0.22. From one virtual box I can ping the other. On both boxes I open an smpd connection: smpd -p 8677. On both boxes I can see that port 8677 is listening. From one box,…
user1482030
2 votes, 1 answer

How to observe elapsed time for all processes of an MPI program

I want to observe the performance of my MPI program using the time command in Linux. It shows only real, user and sys values for the program as a whole. However, I need to examine what happens on each process. So, is there a way to see how long my program takes…
simon_tulia
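Per-process timing is usually done inside the program with MPI_Wtime rather than with the shell's time, which only sees the job as a whole; a minimal sketch:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);        /* align the start line */
        double t0 = MPI_Wtime();

        /* ... the work being measured ... */

        double elapsed = MPI_Wtime() - t0;
        printf("rank %d: %.6f s\n", rank, elapsed);

        /* The slowest rank usually determines the job's wall time. */
        double worst;
        MPI_Reduce(&elapsed, &worst, 1, MPI_DOUBLE, MPI_MAX, 0,
                   MPI_COMM_WORLD);
        if (rank == 0) printf("slowest rank: %.6f s\n", worst);

        MPI_Finalize();
        return 0;
    }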
2 votes, 1 answer

Requesting integer multiple of "M" cores per node on SGE

I want to submit a multi-threaded MPI job to SGE, and the cluster I am running on has nodes with different numbers of cores. Let's say the number of threads per process is M (M == OMP_NUM_THREADS for OpenMP). How can I request that…
Wirawan Purwanto
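One conventional route, assuming an SGE setup the administrator can adjust: define a parallel environment whose allocation_rule is the fixed integer M, so that a job requesting a multiple of M total slots receives exactly M slots on each node it lands on.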
2 votes, 1 answer

strange character in Fortran write output

I want to time some subroutines. Here is the template I use to write the name and duration of execution:

    SUBROUTINE get_sigma_vrelp
    ...declarations...
    real(8) :: starttime, endtime
    CHARACTER (LEN = 200) timebuf
    starttime = MPI_Wtime()
    …
2 votes, 2 answers

How to specify which processes run on which node in a parallel program

I am running my MPI program on an Intel Sandy Bridge cluster, on a 16-node partition. There are two processors per node and 8 cores per processor. I started a run with "mpirun -n 256 ./myprogram". Now I need a representative process on each node…
Tania
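Since MPI-3, the portable way to pick one representative rank per node is MPI_Comm_split_type with MPI_COMM_TYPE_SHARED:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int world_rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* Split MPI_COMM_WORLD into one communicator per shared-memory
           node; rank 0 of each node_comm is that node's representative. */
        MPI_Comm node_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);

        int node_rank;
        MPI_Comm_rank(node_comm, &node_rank);
        if (node_rank == 0)
            printf("world rank %d represents this node\n", world_rank);

        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }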
2 votes, 1 answer

Deadlock in simple MPI pipelined ring broadcast code

I'm learning MPI. I am trying to do a pipelined ring broadcast using different-sized chunks. However, when I run my code, it reaches a deadlock while process 0 attempts to send the second chunk of data, and I have no idea why. Any help would be…
iltp38
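For comparison, a deadlock-free pipelined chain broadcast can be written so every rank receives a chunk before forwarding it, with the chunk offset doubling as the message tag:

    #include <mpi.h>

    /* Root is rank 0; data flows 0 -> 1 -> ... -> size-1 in chunk-sized
       pieces, so later ranks start receiving before the root finishes. */
    void ring_bcast(double *buf, int count, int chunk, MPI_Comm comm) {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        for (int off = 0; off < count; off += chunk) {
            int n = (count - off < chunk) ? count - off : chunk;

            if (rank > 0)            /* all but the root receive */
                MPI_Recv(buf + off, n, MPI_DOUBLE, rank - 1, off, comm,
                         MPI_STATUS_IGNORE);
            if (rank < size - 1)     /* all but the last rank forward */
                MPI_Send(buf + off, n, MPI_DOUBLE, rank + 1, off, comm);
        }
    }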
2 votes, 1 answer

Remote memory access using intercommunicator

I have a client-server system using MPI (using ports) in C++. It's running well and doing what I intend it to do. I recently read about remote memory access (RMA) in MPI using MPI_Win memory windows. I'm wondering if it is possible to create a…
AdityaG
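Note that MPI windows are created over intracommunicators, so a client-server pair joined by an intercommunicator would typically merge it first (MPI_Intercomm_merge) and create the window on the result. Basic window mechanics, for reference:

    #include <stdio.h>
    #include <mpi.h>

    /* Run with at least two ranks: rank 0 exposes an int, rank 1 reads it. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int value = (rank == 0) ? 42 : 0;
        MPI_Win win;
        MPI_Win_create(&value, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);               /* open the access epoch */
        int fetched = -1;
        if (rank == 1)
            MPI_Get(&fetched, 1, MPI_INT, 0 /* target */, 0 /* disp */,
                    1, MPI_INT, win);
        MPI_Win_fence(0, win);               /* complete the epoch */
        if (rank == 1) printf("rank 1 fetched %d from rank 0\n", fetched);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }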