Questions tagged [openmpi]

Open MPI is an open source implementation of the Message Passing Interface, a library for distributed memory parallel programming.

The Open MPI Project is an open-source implementation of the Message Passing Interface, a standardized and portable message-passing system designed to leverage the computational power of massively parallel, distributed memory computers.

Message passing is one of the most widely used distributed memory programming models, and MPI is the most widely used message-passing API. It offers two types of communication between processes: point-to-point and collective. MPI can run on both distributed and shared memory architectures.

An MPI application usually consists of multiple simultaneously running processes, normally on different CPUs, which communicate with each other. This type of application is normally programmed using the SPMD model; nevertheless, most MPI implementations also support the MPMD model.

More information about the MPI standard may be found on the official MPI Forum website, in the official documentation, and in the Open MPI documentation.

1341 questions
0 votes • 1 answer

Outputting custom data type to file

Given a struct Pixel and its equivalent MPI_Type mpiPixel, I create an array of pixels and write it to a file. Everything runs correctly except that the output in the file ends in some sort of a bit pattern (being interpreted as integers). The file is…
Aiden Strydom • 1,198 • 2 • 14 • 43
0 votes • 0 answers

Timing in OpenMPI

Quick question about timing in OpenMPI. I see that with qstat -a I can show the wall time, and with qstat I can see the CPU time. Is it possible to have these two values written to the output file when the job is done so I can check the performance…
Dan • 2,647 • 2 • 27 • 37
0 votes • 1 answer

MPI's Scatterv operation

I'm not sure that I am correctly understanding what MPI_Scatterv is supposed to do. I have 79 items to scatter among a variable number of nodes. However, when I use the MPI_Scatterv command I get ridiculous numbers (as if the array elements of my…
Dan • 2,647 • 2 • 27 • 37
0 votes • 0 answers

Is the message queue for MPI_Send and MPI_Recv global or specific to each destination?

Please correct me if I am misunderstanding how MPI_Send and MPI_Recv work, since I have just started learning MPI. My current understanding is that the MPI standard guarantees that two messages which are sent one after another from one sender to one…
merlin2011 • 71,677 • 44 • 195 • 329
0 votes • 0 answers

Installed MPICH2 but cannot find libmpi.so.1 (already uninstalled openmpi)

I installed mpich2 using: ./configure --prefix=/usr/mpich2-install --enable-threads --enable-shared make make install But when I run an MPI program, there is an error: ./exe_framework: error while loading shared libraries: libmpi.so.1: cannot open…
user3392320 • 67 • 1 • 8
0 votes • 0 answers

MPI_Isend gets stuck sometimes (Open MPI, multi-threaded)

(First of all, I want to thank Hristo Iliev. He has helped me a lot in my current MPI project.) The problem is that MPI_Irecv will sometimes get stuck (with probability close to 1/2). My program is more than 20,000 lines, so I cannot list it here. The…
user3392320 • 67 • 1 • 8
0 votes • 0 answers

Is there any MPI function that can do partial data exchange?

I have recently been trying to parallelize my image smoothing program. The algorithm is easy to understand. #define MAX_SMOOTH_LEVEL 1000 For i=0 to MAX_SMOOTH_LEVEL For each pixel in rgb Color rgb[IMG_HEIGHT][IMG_WIDTH],…
Tim Hsu • 402 • 5 • 19
0 votes • 1 answer

Passing multiple variables in MPI

I am trying an implementation in MPI where I am invoking multiple slaves (up to 4) on the same machine (localhost) and distributing the computations of my for loop amongst the slaves. MPI is suited for my current application and I cannot take the…
Ajay Nair • 1,827 • 3 • 20 • 33
0 votes • 0 answers

New MacPorts openmpi error for Python MPI wrapper

Some time ago we wrote our own MPI wrapper for Python. Everything worked fine on Mac after I recently upgraded to: openmpi-default @1.7.3_2+gcc48 (active) openmpi-devel-default @1.9a1_30433+gcc48 (active) openmpi-gcc45 @1.7.3_2+fortran…
El Dude • 5,328 • 11 • 54 • 101
0 votes • 2 answers

Open MPI, determine rank of process to send to

I have two different executables, each with a specific role. One of the two processes sends the other information by calling MPI_Isend. But how do I know the rank of the other process? I found out that when I run my stack as follows, that exe1, the…
nvdstruis • 33 • 1 • 7
0 votes • 1 answer

Difference between omp_get_wtime() and mpi_wtime() when using both MPI and shared memory parallelization

I am using both OpenMPI and OpenMP (shared memory) to parallelize a piece of code. I am trying to time that code for benchmarking and speedup purposes, and I don't understand the differences between omp_get_wtime() and mpi_wtime(). Here is an…
Jason Maldonis • 307 • 3 • 10
0 votes • 1 answer

What should I do if I want to send messages with MPI and receive messages at the same time?

Background: rank 0 sends a message to rank 1; after rank 1 completes its work it returns messages to rank 0. Actually I run one thread for sending messages and another for receiving in rank 0, like this: int tag = 1; void* thread_send(void* argc) { …
Jerry • 121 • 1 • 2 • 10
0 votes • 1 answer

MPI master unable to receive

I am using MPI in Fortran for computation of my data. I verified by printing the data that computations are being performed on the desired range by each process just fine, but the master is unable to collate the data. Here is the code that I am…
Trancey • 699 • 1 • 8 • 18
0 votes • 1 answer

MPI Scatter only sending first element

I am just simply trying to scatter some strings to nodes and then receive them back in a new array. When I print the new array the terminal will output name1 (empty line) (empty line) (empty line) Here is my scatter: …
user1720205
0 votes • 1 answer

Where to place array creation method in an MPI program

I heard that all code in an MPI program should be declared between MPI_Init and MPI_Finalize. So what is the impact of the following difference between the below MPI programs? int main(int argc, char** argv) { MPI_Init(&argc, &argv); …
Aiden Strydom • 1,198 • 2 • 14 • 43