Questions tagged [openmpi]

Open MPI is an open-source implementation of the Message Passing Interface (MPI), a standard for distributed-memory parallel programming.

The Open MPI Project is an open-source implementation of the Message Passing Interface, a standardized and portable message-passing system designed to leverage the computational power of massively parallel, distributed-memory computers.

Message passing is one of the most widely used distributed-memory programming models, and MPI is the most widely used message-passing API. It offers two types of communication between processes: point-to-point and collective. MPI can run on both distributed-memory and shared-memory architectures.

An MPI application usually consists of multiple processes running simultaneously, typically on different CPUs, that communicate with each other. Such applications are normally written in the SPMD (single program, multiple data) style, although most MPI implementations also support the MPMD model.
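As an illustration of the SPMD model and point-to-point communication described above, here is a minimal sketch in C (the ranks, tag, and message value are arbitrary choices for the example):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                  /* every process runs the same program (SPMD) */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many processes were launched? */

        if (rank == 0 && size > 1) {
            int value = 42;
            /* point-to-point: rank 0 sends a single int to rank 1 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }

With Open MPI such a program is typically compiled with the mpicc wrapper and launched with something like mpirun -np 4 ./a.out.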

More information about the MPI standard may be found on the official MPI Forum website, in the official documentation, and in the Open MPI documentation.

1341 questions
0
votes
1 answer

MPI_Scatter using C with dynamically allocated memory

Could anyone help with how to use MPI_Scatter to send the following matrix: float **u, **u_local; if (rank == 0){ u = (float**) malloc(N * size * sizeof(float*)); for(i = 0; i < N * size; i++){ u[i] = (float*) malloc(M * sizeof(float)); …
user950356
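A recurring point behind questions like this is that MPI_Scatter expects the data to occupy one contiguous buffer, which an array of separately malloc'd row pointers does not guarantee. The following is a minimal sketch, not the asker's code, assuming N rows per process and M columns as illustrative sizes:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size, N = 2, M = 4;            /* example sizes, not taken from the question */
        float *u = NULL, *u_local;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            u = malloc((size_t)N * size * M * sizeof(float));  /* one contiguous block */
            for (int i = 0; i < N * size * M; i++)
                u[i] = (float)i;                               /* example data */
        }
        u_local = malloc((size_t)N * M * sizeof(float));

        /* each rank receives N*M consecutive floats, i.e. N rows of the global matrix */
        MPI_Scatter(u, N * M, MPI_FLOAT, u_local, N * M, MPI_FLOAT, 0, MPI_COMM_WORLD);

        free(u_local);
        if (rank == 0) free(u);
        MPI_Finalize();
        return 0;
    }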
0
votes
1 answer

DCOM and OpenMPI

I configured DCOMCNFG with both the launch and the remote access permissions, granting my local logon on each node. I have OpenMPI_v1.6.1-x64 installed on the root and remote machines, and I have specified the path of the .exe on the target node. But while running…
tony
0
votes
1 answer

cudaGetDeviceCount returns 0 on parallel execution on > 2 CPUs

I am having some issues with cudaGetDeviceCount returning zero if used in mpirun with -np greater than 2. The portion of code from a much larger program is: bool cpuInterfaces::checkGPUCount(int gpusPerMachine){ int GPU_N; …
Dan C.
0
votes
1 answer

MPI - Issue with column type in C

I have a problem with MPI_Send and MPI_Recv communication where a column is sent from one process and received by another. For debugging, I show below a basic example where I initialize a 10x10 matrix (x0 array) with x_domain = 4 and y_domain =…
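Questions of this kind usually come down to describing a non-contiguous column with a strided derived datatype. A minimal sketch, assuming a row-major 10x10 array of doubles (the element type is an assumption, not stated in the excerpt):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        double x0[10][10];
        int rank;
        MPI_Datatype column;

        MPI_Init(&argc, &argv);                 /* run with at least 2 processes */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* 10 blocks of 1 double, separated by a stride of 10 doubles = one column */
        MPI_Type_vector(10, 1, 10, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        if (rank == 0) {
            for (int i = 0; i < 10; i++)
                for (int j = 0; j < 10; j++)
                    x0[i][j] = i * 10 + j;
            /* send column 3: start at &x0[0][3] and send one "column" element */
            MPI_Send(&x0[0][3], 1, column, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&x0[0][3], 1, column, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 got x0[5][3] = %g\n", x0[5][3]);
        }

        MPI_Type_free(&column);
        MPI_Finalize();
        return 0;
    }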
0
votes
1 answer

Why can't I get 8 processes at a time?

I'm a beginner with MPI. When I wrote my first program, I ran into a problem that was tough for me. MPI_Init(&argc, &argv) ; MPI_Comm_rank( MPI_COMM_WORLD, &rank) ; MPI_Comm_size( MPI_COMM_WORLD, &size) ; printf("Process: %d\n", rank); printf("Procs_num:…
Peiyun
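A frequent cause behind questions like this is compiling or launching the program without the MPI wrappers, so that MPI_Comm_size reports 1 instead of 8. A minimal check, assuming Open MPI's mpicc and mpirun:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* with "mpicc hello.c -o hello" and "mpirun -np 8 ./hello"
           every rank should report size = 8 */
        printf("Process: %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }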
0
votes
2 answers

How can I call a (C++) function for just a subset of processes using the MPI library?

The question says it all. I have three communicators (groups are also available). Now I want to call a function for just one communication subset, that is, mask the function for the other subsets. Is this possible, or should I explicitly write a loop…
Armin
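One common pattern for this is to split MPI_COMM_WORLD into sub-communicators and call the function only on the ranks whose color matches. The color rule and the do_work function below are made up for illustration:

    #include <mpi.h>
    #include <stdio.h>

    /* hypothetical function that should run on only one subset */
    static void do_work(MPI_Comm comm)
    {
        int sub_rank;
        MPI_Comm_rank(comm, &sub_rank);
        printf("doing work as rank %d of the subset\n", sub_rank);
    }

    int main(int argc, char **argv)
    {
        int world_rank;
        MPI_Comm sub_comm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* split the world into three subsets by rank modulo 3 */
        int color = world_rank % 3;
        MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sub_comm);

        if (color == 0)            /* only one subset calls the function */
            do_work(sub_comm);

        MPI_Comm_free(&sub_comm);
        MPI_Finalize();
        return 0;
    }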
0
votes
2 answers

GUI for an MPI program

I have a problem with a simple MPI program. This program has some 3D points, and these points move while the program runs. I wrote some simple code in C++ and then tried to add a simple GUI. I used the gnuplot library and I have a…
eyildirim
0
votes
1 answer

Can someone explain this valgrind error with open mpi?

My basic question is about how suppression files work in Valgrind. I have looked at a lot of the documentation, which points to using the following on MPI versions > 1.5 (mine is 1.6): mpirun -np 2 valgrind…
Muttonchop
0
votes
1 answer

openmpi with valgrind (can I compile with MPI in Ubuntu distro?)

I have a naive question: I compiled a version of Open MPI 1.4.4 with Valgrind support: ./configure --prefix=/opt/openmpi-1.4.4/ --enable-debug --enable-memchecker --with-valgrind=/usr.... I want to do memory checking. Usually for debugging (and running) I…
Denis
0
votes
1 answer

OpenMPI node & network topology

I am currently building a small utility library as part of a larger project. OpenMPI has a well-documented API, but I am a little puzzled when it comes to the lower-level communication between nodes. I know that when writing your algorithm,…
mayotic
0
votes
3 answers

How to use the same array on different processors using MPI

I would like to have the same array, called hist(1:1000), on different processors using OpenMPI, such that when one processor modifies hist the modification is propagated to the rest of the processors. I have written code and declared hist(1:1000), but…
armando
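MPI has no automatic shared updates: every process keeps its own copy of hist, and the copies have to be synchronised explicitly, typically with a collective. The hist(1:1000) in the excerpt suggests Fortran, but the idea is the same in C; a minimal sketch that sums each rank's contributions into every copy:

    #include <mpi.h>

    #define NBINS 1000

    int main(int argc, char **argv)
    {
        int rank, i;
        int hist[NBINS];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (i = 0; i < NBINS; i++)
            hist[i] = 0;
        hist[rank % NBINS] += 1;          /* each rank modifies its local copy */

        /* combine all local histograms; afterwards every rank holds the same totals */
        MPI_Allreduce(MPI_IN_PLACE, hist, NBINS, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }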
0
votes
1 answer

MPI - How to approach a dynamic work load that is not evenly divisible by the number of threads?

I'm noticing that all the MPI calls need some amount of symmetry or they hang and/or produce unexpected results. How do you attack a dynamic problem or data set? Every example I find online always breaks the problem into evenly divisible chunks or…
Zak
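For work that does not divide evenly, one standard approach is to compute per-rank counts and displacements and use MPI_Scatterv instead of MPI_Scatter; another is a master/worker loop that hands out items on demand. The following is a sketch of the first approach, with an arbitrary total of 10 items:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size, i, total = 10;     /* 10 items, deliberately not divisible by most sizes */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *counts = malloc(size * sizeof(int));
        int *displs = malloc(size * sizeof(int));
        for (i = 0; i < size; i++) {
            counts[i] = total / size + (i < total % size ? 1 : 0);   /* spread the remainder */
            displs[i] = (i == 0) ? 0 : displs[i - 1] + counts[i - 1];
        }

        double *data = NULL;
        double *local = malloc((counts[rank] > 0 ? counts[rank] : 1) * sizeof(double));
        if (rank == 0) {
            data = malloc(total * sizeof(double));
            for (i = 0; i < total; i++) data[i] = i;
        }

        /* every rank gets counts[rank] elements, even though total % size != 0 */
        MPI_Scatterv(data, counts, displs, MPI_DOUBLE,
                     local, counts[rank], MPI_DOUBLE, 0, MPI_COMM_WORLD);

        printf("rank %d received %d elements\n", rank, counts[rank]);

        free(local); free(counts); free(displs);
        if (rank == 0) free(data);
        MPI_Finalize();
        return 0;
    }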
0
votes
2 answers

Custom datatype (MPI_Datatype datatype)?

Is there such a thing as a custom datatype in MPI, or do you have to flatten everything into a text string and pass as MPI_CHAR? If you are required to flatten everything, is there a built-in function I am overlooking?
Zak
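Yes: MPI has derived datatypes, so flattening everything to MPI_CHAR is not required. A minimal sketch using MPI_Type_create_struct for a hypothetical struct with an int and a double (run with at least two processes):

    #include <mpi.h>
    #include <stddef.h>
    #include <stdio.h>

    /* hypothetical application struct used only for illustration */
    typedef struct {
        int    id;
        double value;
    } particle_t;

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Datatype particle_type;
        int          blocklens[2] = { 1, 1 };
        MPI_Aint     displs[2]    = { offsetof(particle_t, id),
                                      offsetof(particle_t, value) };
        MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* describe the struct layout to MPI once, then use it like a built-in type */
        MPI_Type_create_struct(2, blocklens, displs, types, &particle_type);
        MPI_Type_commit(&particle_type);

        particle_t p = { 7, 3.14 };
        if (rank == 0)
            MPI_Send(&p, 1, particle_type, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1) {
            MPI_Recv(&p, 1, particle_type, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("received id=%d value=%g\n", p.id, p.value);
        }

        MPI_Type_free(&particle_type);
        MPI_Finalize();
        return 0;
    }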
0
votes
1 answer

MPI-2 on CPU vs GPU

I am working on parallelising code using MPI-2. It speeds up successfully when I use 8-core processors. I was just wondering what the effect would be if I used GPUs for the same purpose instead of CPUs. According to my research so far,…
user1105630
-1
votes
1 answer

C language mpirun runtime error from Cygwin

I am trying to run a C-language MPI program in the Cygwin environment (console). Compilation works fine, but I get an error when I try to run the output program. What I have configured so far: I use the Cygwin environment; here is the installed…
dacian