Questions tagged [openmpi]

Open MPI is an open source implementation of the Message Passing Interface, a library for distributed memory parallel programming.

The Open MPI Project is an open-source implementation of the Message Passing Interface, a standardized and portable message-passing system designed to leverage the computational power of massively parallel, distributed memory computers.

Message passing is one of the most widely used distributed memory programming models, and MPI is the most widely used message-passing API. It offers two types of communication between processes: point-to-point and collective. MPI can run on both distributed memory and shared memory architectures.

An application using MPI usually consists of multiple simultaneously running processes, normally on different CPUs, which are able to communicate with each other. Normally, this type of application is programmed using the SPMD model; nevertheless, most MPI implementations also support the MPMD model.

More information about the MPI standard may be found on the official MPI Forum website, in the official documentation, and in the Open MPI documentation.
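A minimal sketch of the two communication styles mentioned above (point-to-point and collective), assuming a standard MPI installation with mpicc and mpirun available; variable names and values are illustrative:

    /* Sketch only: one point-to-point exchange and one collective. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, token = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size > 1) {
            if (rank == 0) {            /* point-to-point: rank 0 -> rank 1 */
                token = 42;
                MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            }
        }

        /* collective: every rank ends up with rank 0's value */
        MPI_Bcast(&token, 1, MPI_INT, 0, MPI_COMM_WORLD);
        printf("rank %d of %d sees token %d\n", rank, size, token);

        MPI_Finalize();
        return 0;
    }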

1341 questions
0
votes
1 answer

C, openMPI: What is the best way to distribute blocks of data from each process to all other processes?

In my MPI program each process has, or works with, a block of data: char *datablock; the blocks are of similar but not identical size. What is the best way (which functions to use, and how) to distribute those blocks from each process to each other…
Oliver
  • 149
  • 1
  • 9
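A sketch of one common answer to this kind of question: gather the per-rank sizes first, then exchange the variable-length blocks with MPI_Allgatherv. Function and variable names other than the MPI calls are illustrative, not taken from the question:

    #include <mpi.h>
    #include <stdlib.h>

    /* Every rank contributes my_block (my_count bytes) and receives all blocks. */
    void exchange_blocks(const char *my_block, int my_count, MPI_Comm comm,
                         char **all_blocks, int **counts_out)
    {
        int size;
        MPI_Comm_size(comm, &size);

        int *counts = malloc(size * sizeof(int));
        int *displs = malloc(size * sizeof(int));

        /* 1. every rank learns every block size */
        MPI_Allgather(&my_count, 1, MPI_INT, counts, 1, MPI_INT, comm);

        /* 2. prefix sums give each block's offset in the receive buffer */
        int total = 0;
        for (int i = 0; i < size; ++i) { displs[i] = total; total += counts[i]; }

        char *recvbuf = malloc(total);

        /* 3. all-to-all exchange of the variable-sized blocks */
        MPI_Allgatherv(my_block, my_count, MPI_CHAR,
                       recvbuf, counts, displs, MPI_CHAR, comm);

        *all_blocks = recvbuf;
        *counts_out = counts;
        free(displs);
    }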
0
votes
1 answer

Using MPI4PY in FedoraScientific

Recently, I downloaded and installed Fedora Scientific 20 as I was impressed with the list of included software. My interest in the software is due to the inclusion of the MPI framework. I was able to compile and execute a simple C program using…
0
votes
2 answers

Cause all processes running under OpenMPI to dump core

I'm running a distributed process under OpenMPI on Linux. When one process dies, mpirun will detect this and kill the other processes. But even though I get a core from the process which died, I don't get a core from the processes killed by…
Nathan
  • 1,218
  • 3
  • 19
  • 35
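A sketch of one commonly suggested workaround, assuming core dumps are enabled on every node (e.g. ulimit -c unlimited): catch the SIGTERM that mpirun sends to the surviving ranks and turn it into an abort(), which does produce a core file. This is an illustrative approach, not something specific to the question's setup:

    #include <mpi.h>
    #include <signal.h>
    #include <stdlib.h>

    static void term_to_core(int sig)
    {
        (void)sig;
        abort();                        /* raises SIGABRT -> core dump */
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        signal(SIGTERM, term_to_core);  /* install after MPI_Init */
        /* ... application code ... */
        MPI_Finalize();
        return 0;
    }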
0
votes
0 answers

mpiexec hangs for remote execution

I have two EC2 instances: Ubuntu 12.04 running OpenMPI 1.4.3, and Ubuntu 14.04 running OpenMPI 1.6.5. I run this command: mpiexec --hostfile machines ls, where "machines" is a file that contains the IP address of the other server that the command is…
clarity
  • 368
  • 1
  • 4
  • 14
0
votes
0 answers

Open MPI: Get CPU and communication usage

Just a simple question: I need to examine two things while an Open MPI application is running: CPU usage and communication usage. I found a few things but I'm still looking for new ideas. What I mean by "communication usage": I want to know…
OpusV
  • 23
  • 6
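One way to get at "communication usage" is the PMPI profiling interface defined by the MPI standard: intercept a call, record what it does, and forward to the real implementation. A minimal sketch that counts bytes pushed through MPI_Send (the counter and its handling are illustrative):

    #include <mpi.h>

    static long long bytes_sent = 0;   /* illustrative global counter */

    int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        int type_size;
        MPI_Type_size(datatype, &type_size);
        bytes_sent += (long long)count * type_size;
        return PMPI_Send(buf, count, datatype, dest, tag, comm);
    }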
0
votes
1 answer

MPI_Waitany causes segmentation fault

I am using MPI to distribute images to different processes so that: Process 0 distributes images to different processes. Processes other than 0 process the image and then send the result back to process 0. Process 0 tries to busy a process…
PALEN
  • 2,784
  • 2
  • 23
  • 24
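A sketch of a master loop built around MPI_Waitany. The request array is initialised to MPI_REQUEST_NULL so it never holds stale handles, which is a frequent cause of crashes in this pattern; worker count, buffer sizes, and names are illustrative, not taken from the question:

    #include <mpi.h>

    #define NWORKERS 4
    #define RESULT_LEN 1024

    void collect_results(float results[NWORKERS][RESULT_LEN])
    {
        MPI_Request reqs[NWORKERS];
        for (int i = 0; i < NWORKERS; ++i)
            reqs[i] = MPI_REQUEST_NULL;

        /* post one nonblocking receive per worker (ranks 1..NWORKERS) */
        for (int i = 0; i < NWORKERS; ++i)
            MPI_Irecv(results[i], RESULT_LEN, MPI_FLOAT, i + 1, 0,
                      MPI_COMM_WORLD, &reqs[i]);

        /* handle the workers in completion order */
        for (int done = 0; done < NWORKERS; ++done) {
            int idx;
            MPI_Waitany(NWORKERS, reqs, &idx, MPI_STATUS_IGNORE);
            /* results[idx] is now valid; hand out the next image here */
        }
    }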
0
votes
1 answer

Why does this MPI code execute out of order?

I'm trying to create a "Hello, world!" application in (Open)MPI such that each process will print out in order. My idea was to have the first process send a message to the second when it's finished, then the second to the third, etc.: #include…
wchargin
  • 15,589
  • 12
  • 71
  • 110
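A sketch of the serialisation idea described in the question: each rank waits for a message from rank-1 before printing, then signals rank+1. Note that even this only orders the printf calls themselves; mpirun still forwards and buffers each rank's stdout, so the visible output order is not guaranteed:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, token = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank > 0)   /* wait for my predecessor */
            MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        printf("Hello from rank %d of %d\n", rank, size);
        fflush(stdout);

        if (rank < size - 1)   /* release my successor */
            MPI_Send(&token, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }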
0
votes
1 answer

Different behaviour of MPI 2.1 and MPI 3.0 implementations (MPI + OpenMP)

Is it necessary to set the OMP_NUM_THREADS variable when you run an MPI program which includes OpenMP code? In some tutorials, I saw that you MUST set OMP_NUM_THREADS (an environment variable). I'm testing programs on my home cluster which is using…
maxi
  • 51
  • 1
  • 7
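For context, OMP_NUM_THREADS is not mandatory: if it is unset, the OpenMP runtime picks an implementation-defined default. The thread count can also be fixed in code, which is often easier to reason about when each MPI rank should use a specific number of threads. A hedged sketch (the value 4 is illustrative):

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        omp_set_num_threads(4);   /* overrides OMP_NUM_THREADS; illustrative value */

        #pragma omp parallel
        {
            #pragma omp single
            printf("rank %d runs %d OpenMP threads\n", rank, omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }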
0
votes
1 answer

Strange multiplication result

In my code I have these multiplications, in C++ with all variables of type double[]: f1[0] = (f1_rot[0] * xu[0]) + (f1_rot[1] * yu[0]); f1[1] = (f1_rot[0] * xu[1]) + (f1_rot[1] * yu[1]); f1[2] = (f1_rot[0] * xu[2]) + (f1_rot[1] * yu[2]);…
M.K
  • 179
  • 1
  • 2
  • 10
0
votes
2 answers

OpenMPI communication over local network [2 machines]

Is there any way to make an MPI program work on two different machines (both Windows 7) over a local network? Let's assume that I have one PC with local IP 192.168.1.1 and another one with 192.168.1.2. I've heard of DeinoMPI, but isn't there any way to do…
user2803017
  • 19
  • 1
  • 4
0
votes
2 answers

What are InfiniBand-Stacks?

I would like to ask for an explanation of what "InfiniBand stacks" are. They were recently changed on our machine and I started running into MPI communication failures. I need some information in order to understand how this might be affecting…
Alexander Cska
  • 738
  • 1
  • 7
  • 29
0
votes
1 answer

Multi-Threaded MPI Process Suddenly Terminating

I'm writing an MPI program (Visual Studio 2k8 + MSMPI) that uses Boost::thread to spawn two threads per MPI process, and have run into a problem I'm having trouble tracking down. When I run the program with: mpiexec -n 2 program.exe, one of the…
bhilburn
  • 579
  • 7
  • 18
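A sketch of the first thing usually checked in multi-threaded MPI programs: the library has to be initialised with MPI_Init_thread and the granted thread level verified. If the build only provides MPI_THREAD_FUNNELED or lower, MPI calls from several threads can lead to sudden terminations like the one described. This is an illustrative check, not a diagnosis of the question's program:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI library only provides thread level %d\n", provided);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* ... spawn threads that may now call MPI concurrently ... */

        MPI_Finalize();
        return 0;
    }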
0
votes
1 answer

Are there alternatives to MPE if I use MPI?

Are there any alternatives to MPE if I want to draw in a window from all processes while I use MPI?
Bediko
  • 11
  • 1
  • 6
0
votes
1 answer

C++ OpenMPI linked-lists

Currently, I have a nice C++ graph algorithm written with custom struct definitions of linked lists or arrays of linked lists (I should turn this into a template definition, but currently it is not). This algorithm can easily be distributed, and I…
CodeKingPlusPlus
  • 15,383
  • 51
  • 135
  • 216
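A sketch of the usual approach for sending pointer-based structures: MPI transfers contiguous buffers, so the linked list is first flattened ("packed") into an array, sent, and rebuilt on the receiving rank. The node layout and two-message protocol (length, then payload) are illustrative assumptions:

    #include <mpi.h>
    #include <stdlib.h>

    struct node { int value; struct node *next; };

    void send_list(const struct node *head, int dest, MPI_Comm comm)
    {
        /* count the nodes, then copy their values into a contiguous buffer */
        int count = 0;
        for (const struct node *p = head; p; p = p->next)
            ++count;

        int *buf = malloc(count * sizeof(int));
        int i = 0;
        for (const struct node *p = head; p; p = p->next)
            buf[i++] = p->value;

        MPI_Send(&count, 1, MPI_INT, dest, 0, comm);   /* length first */
        MPI_Send(buf, count, MPI_INT, dest, 1, comm);  /* then the payload */
        free(buf);
    }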
0
votes
1 answer

Why does mpirun duplicate the program by default?

I am new to Open MPI and have problems understanding the concepts. (I found this pretty helpful.) 1- Could anyone briefly explain why we use Open MPI? To my understanding, Open MPI is used to parallelize those sections of the code which can run in…
sali
  • 219
  • 4
  • 10
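The duplication is the SPMD model from the tag description: every copy launched by mpirun runs the same executable, and the code branches on the rank so each copy does a different share of the work. A small illustrative sketch (problem size and work split are assumptions, not from the question):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        const int N = 1000;              /* illustrative problem size */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* each copy of the same program handles a different slice of 0..N-1 */
        int begin = rank * N / size;
        int end   = (rank + 1) * N / size;

        long local_sum = 0;
        for (int i = begin; i < end; ++i)
            local_sum += i;

        long total = 0;
        MPI_Reduce(&local_sum, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of 0..%d = %ld\n", N - 1, total);

        MPI_Finalize();
        return 0;
    }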