Questions tagged [openmpi]

Open MPI is an open source implementation of the Message Passing Interface, a library for distributed memory parallel programming.

The Open MPI Project is an open-source implementation of the Message Passing Interface, a standardized and portable message-passing system designed to leverage the computational power of massively parallel, distributed memory computers.

Message passing is one of the most widely used distributed memory programming models, and MPI is the most widely used message-passing API. It offers two types of communication between processes: point-to-point and collective. MPI can run on both distributed and shared memory architectures.

An application using MPI usually consists of multiple simultaneously running processes, normally on different CPUs, which communicate with each other. Such applications are typically programmed using the SPMD (single program, multiple data) model. Nevertheless, most MPI implementations also support the MPMD (multiple program, multiple data) model.

More information about the MPI standard may be found on the official MPI Forum website, in the official documentation, and in the Open MPI documentation.
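The concepts above — SPMD execution, point-to-point and collective communication — can be shown in a minimal sketch (an illustrative program, not part of the tag wiki; values are arbitrary):

```c
/* Every rank runs this same program (SPMD). Rank 1 sends a
   point-to-point message to rank 0; all ranks then join a
   collective reduction. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size > 1) {
        if (rank == 1) {
            int payload = 42;
            /* point-to-point: one sender, one receiver, matched by tag */
            MPI_Send(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 0 received %d\n", payload);
        }
    }

    int sum;
    /* collective: every rank in the communicator participates */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks = %d\n", sum);

    MPI_Finalize();
    return 0;
}
```

Compile with `mpicc` and launch with `mpirun -n 4 ./a.out`; every process executes `main()` from the start.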

1341 questions
0
votes
2 answers

How to MPI_SEND and MPI_RECV

I have an input .txt file containing sequences: NAMEOFSEQUENCE1/SEQUENCE1 NAMEOFSEQUENCE2/SEQUENCE2 NAMEOFSEQUENCE3/SEQUENCE3 I defined a struct: typedef struct lane { char *name; char *sequence; } lane; and wrote this code: int…
Pascal NoPascensor
  • 171
  • 1
  • 1
  • 14
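A note on the pattern this question runs into: a struct holding `char *` members cannot be passed to MPI_Send as raw bytes, because the pointers are only valid in the sending process. A common workaround is to send each string's length first, then its characters (a sketch with illustrative helper names, not the asker's code):

```c
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Send a NUL-terminated string: length first, then the bytes. */
void send_string(const char *s, int dest, int tag)
{
    int len = (int)strlen(s) + 1;   /* include the terminator */
    MPI_Send(&len, 1, MPI_INT, dest, tag, MPI_COMM_WORLD);
    MPI_Send(s, len, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
}

/* Receive the length, allocate, then receive the bytes. Caller frees. */
char *recv_string(int src, int tag)
{
    int len;
    MPI_Recv(&len, 1, MPI_INT, src, tag, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    char *buf = malloc(len);
    MPI_Recv(buf, len, MPI_CHAR, src, tag, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    return buf;
}
```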
0
votes
0 answers

Intercommunication between Python MPI master and C MPI worker?

I'm trying to write an MPI worker in C that will communicate with the MPI master, written in python. The master will send out scatters and gathers, and the C worker should receive those and return variables via gather. The problem is, I'm having…
0
votes
1 answer

Safety guarantee for interleaved MPI Isend / Recv

In a related question I learned that performing request = Isend(...); Recv(...); request.Wait(); is not guaranteed to work, as Isend may not do anything until request.Wait(), hence deadlocking at Recv(...) (see original question for details). But…
stefan
  • 10,215
  • 4
  • 49
  • 90
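The safe variant of the pattern discussed in this question is to make *both* sides of the exchange non-blocking and wait on both requests, so neither call can block before the matching operation is posted (a sketch, not the asker's code):

```c
#include <mpi.h>

/* Exchange n doubles with `peer` without risking the Isend/Recv
   deadlock: post both operations non-blocking, then complete both. */
void safe_exchange(double *sendbuf, double *recvbuf, int n, int peer)
{
    MPI_Request reqs[2];
    MPI_Isend(sendbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(recvbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}
```

The combined call MPI_Sendrecv gives the same deadlock-freedom guarantee in a single blocking operation.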
0
votes
1 answer

deadlock with non-blocking MPI communications

The following code is a routine that communicates ghost points to the top/bottom and left/right neighbors. The routine is called inside the loop of an iterative method, hundreds of times. The problem is that, although it is written with…
tm8cc
  • 1,111
  • 2
  • 12
  • 26
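For halo exchanges like this, one robust pattern is to post all receives before any sends and then wait on the whole request array (a simplified one-dimensional sketch with illustrative names; boundary ranks would pass MPI_PROC_NULL as a neighbor):

```c
#include <mpi.h>

/* Exchange n ghost values with the ranks above and below.
   Receives are posted first, so no send can block waiting for a
   receive that was never posted. */
void exchange_ghosts(double *top_send, double *top_recv,
                     double *bot_send, double *bot_recv,
                     int n, int up, int down)
{
    MPI_Request reqs[4];
    MPI_Irecv(top_recv, n, MPI_DOUBLE, up,   0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(bot_recv, n, MPI_DOUBLE, down, 1, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(top_send, n, MPI_DOUBLE, up,   1, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(bot_send, n, MPI_DOUBLE, down, 0, MPI_COMM_WORLD, &reqs[3]);
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
}
```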
0
votes
1 answer

LU Decomposition MPI

This is an MPI code for LU decomposition. I have used the following strategy - there is a master (rank 0) and the others are slaves. The master sends rows to each slave. Since each slave might receive more than one row, I store all the received rows in a…
p_kajaria
  • 87
  • 2
  • 12
0
votes
2 answers

Bug in MPI code

I am trying to do LU decomposition using MPI. Below is the snapshot of my code: if(rank == 0) { //Send to each processor the row it owns for(p=0;p
user3351750
  • 927
  • 13
  • 24
0
votes
1 answer

Count parameter in MPI_Bcast

MPI_Bcast (&buffer,count,datatype,root,comm) The tutorial at https://computing.llnl.gov/tutorials/mpi/ says that count - Indicates the number of data elements of a particular type to be sent. What does that mean? Does it mean that - count copies…
p_kajaria
  • 87
  • 2
  • 12
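The short answer to what `count` means: it is the number of elements of `datatype` in the buffer, not a number of copies — every rank passes a buffer of `count` elements, and after the call all ranks hold the root's `count` elements. A sketch (illustrative values, not from the question):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double data[4] = {0};
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        data[0] = 1.5; data[1] = 2.5; data[2] = 3.5; data[3] = 4.5;
    }
    /* count = 4: broadcast four doubles from rank 0 to every rank */
    MPI_Bcast(data, 4, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    printf("rank %d: data[3] = %g\n", rank, data[3]);

    MPI_Finalize();
    return 0;
}
```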
0
votes
1 answer

Query in MPI initialization

If we call MPI_Init() we know that multiple copies of the same executable run on different machines. Suppose MPI_Init() is in a function f(), then will multiple copies of main() function exist too? The main problem that I am facing is of taking…
p_kajaria
  • 87
  • 2
  • 12
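On the underlying question: `mpirun` launches N separate processes, and each one executes the whole program starting at `main()` — there is exactly one `main()` per process, regardless of where MPI_Init is called. Calling it from a helper function is legal, as this sketch shows (illustrative, not the asker's code):

```c
#include <mpi.h>
#include <stdio.h>

/* MPI_Init may be called from any function, once per process. */
void f(int *argc, char ***argv)
{
    MPI_Init(argc, argv);
}

int main(int argc, char **argv)
{
    int rank;
    f(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("main() running in process of rank %d\n", rank);
    MPI_Finalize();
    return 0;
}
```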
0
votes
2 answers

Synchronize array over MPI processes, if each thread changed a part of it?

I have a program I want to parallelize using MPI. I have not worked with MPI before. The program calculates the behavior of a large number of objects over time. The data of these objects is stored in arrays, e.g. double precision :: body_x(10000)…
el.mojito
  • 170
  • 1
  • 10
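A common answer to this kind of question is MPI_Allgather: each rank updates its own contiguous chunk of the array, then the collective distributes every chunk to every rank. A sketch in C (assuming, for illustration, that the array length divides evenly among the ranks):

```c
#include <mpi.h>

#define NBODY 10000

/* Each rank has already written its own chunk of body_x in place;
   MPI_IN_PLACE tells Allgather to take each rank's contribution
   from its slot in the receive buffer. */
void sync_bodies(double *body_x, int size)
{
    int chunk = NBODY / size;
    MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                  body_x, chunk, MPI_DOUBLE, MPI_COMM_WORLD);
}
```

For uneven chunk sizes, MPI_Allgatherv takes per-rank counts and displacements instead.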
0
votes
1 answer

MPI_Waitall error: address not mapped

I have the following code: #include #include #include #include static int rank, size; char msg[] = "This is a test message"; int main(int argc, char **argv) { MPI_Init(&argc, &argv); …
Ra1nWarden
  • 1,170
  • 4
  • 21
  • 37
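An "address not mapped" crash in MPI_Waitall usually means the request array does not match the requests actually started (too small, or containing uninitialized entries). A defensive sketch (illustrative names, not the asker's code) sizes the array to the posts and counts them as it goes:

```c
#include <mpi.h>
#include <stdlib.h>

/* Rank 0 collects a fixed-length message from every other rank. */
void gather_from_all(char *msg, int len, int rank, int size)
{
    if (rank != 0) {
        MPI_Send(msg, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        return;
    }
    MPI_Request *reqs = malloc((size - 1) * sizeof *reqs);
    char *bufs = malloc((size_t)(size - 1) * len);
    int n = 0;
    for (int src = 1; src < size; src++)
        MPI_Irecv(bufs + (size_t)(src - 1) * len, len, MPI_CHAR,
                  src, 0, MPI_COMM_WORLD, &reqs[n++]);
    MPI_Waitall(n, reqs, MPI_STATUSES_IGNORE);  /* n matches the posts */
    free(bufs);
    free(reqs);
}
```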
0
votes
1 answer

program with mpirun doesn't operate

I tried the quantum chemistry program GAMESS with mpirun and it ran well yesterday. However, when I tried another calculation by the same procedure, it failed with the following messages. How can I fix it? I confirmed that there was no mpi process running…
0
votes
1 answer

"_ompi_mpi_int" in Funktion "_main" LNK2019

I was trying to compile mpi_prime.c with Open MPI on Windows. I tried it with both the 32-bit and 64-bit versions of OpenMPI_v1.6.2. I got these outputs. Microsoft (R) C/C++ Optimizing Compiler Version 17.00.61030 for x86 Copyright (C) Microsoft…
aldr
  • 838
  • 2
  • 19
  • 33
0
votes
1 answer

MPI_Barrier in C

I'm trying to implement a program using MPI, for which I need a block of code to be executed on a particular processor, and until that execution completes the other processors must wait. I thought it could be achieved using MPI_Barrier (though I'm…
Aboorva Devarajan
  • 1,517
  • 10
  • 23
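The usual shape of the answer: MPI_Barrier blocks every rank until all ranks in the communicator have reached it, so placing the exclusive work before the barrier orders it ahead of everything after the barrier. A sketch (illustrative, not the asker's code):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        printf("rank 0: doing the exclusive step\n");
        /* ... work only rank 0 performs ... */
    }
    MPI_Barrier(MPI_COMM_WORLD);   /* everyone waits here */
    printf("rank %d: continuing after rank 0's step\n", rank);

    MPI_Finalize();
    return 0;
}
```

Note the barrier only synchronizes arrival; if other ranks need rank 0's *results*, those must still be communicated (e.g. with MPI_Bcast).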
0
votes
1 answer

MPI_Recv message ordering vs MPI_Send message ordering

While trying to simulate the behaviour of a network using OpenMPI, I am experiencing an issue which can be summed up as follows: Rank 2 sends a message (message1) to rank 0; Rank 2 sends a message (message2) to rank 1; Rank 2 sends a message…
Gog
  • 17
  • 1
  • 5
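The relevant rule here is MPI's non-overtaking guarantee: two messages from the same sender to the same receiver on the same communicator match in send order, but messages to *different* receivers have no relative ordering. A sketch of the guaranteed case (illustrative; run with at least 3 ranks):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, a = 0, b = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 2) {
        int m1 = 1, m2 = 2;
        MPI_Send(&m1, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        MPI_Send(&m2, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {
        /* same source, same tag: matched in send order */
        MPI_Recv(&a, 1, MPI_INT, 2, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&b, 1, MPI_INT, 2, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d then %d\n", a, b);
    }

    MPI_Finalize();
    return 0;
}
```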
0
votes
1 answer

Measuring data transfer between processes in OpenMPI

I am working on a cluster with Ubuntu and OpenMPI 1.4. My code works, but I would like to measure the time it takes to send data between the root process and the slave nodes. I think my code doesn't give me the correct information. void…
John Smith
  • 97
  • 2
  • 10
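The standard tool for this kind of measurement is MPI_Wtime. Timing a lone MPI_Send can mislead, since a buffered send may return before the data crosses the wire; a ping-pong round trip halved is the usual one-way estimate. A sketch (illustrative sizes, not the asker's code):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double buf[1024] = {0};
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        double t0 = MPI_Wtime();
        MPI_Send(buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        double t1 = MPI_Wtime();
        printf("round trip: %g s, one-way estimate: %g s\n",
               t1 - t0, (t1 - t0) / 2);
    } else if (rank == 1) {
        MPI_Recv(buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```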