Questions tagged [mpi-io]

MPI-IO provides a high-performance, portable, parallel I/O interface to high-performance, portable, parallel MPI programs

The purpose of MPI-IO is to provide a high-performance, portable, parallel I/O interface to high-performance, portable, parallel MPI programs. Parallel I/O has never been an everyday commodity: although some supercomputer systems of the past offered parallel disk subsystems, e.g. the Connection Machine CM-5 with its Scalable Disk Array (SDA), the Connection Machine CM-2 with the DataVault, and the IBM SP with PIOFS (succeeded today by GPFS), communication with those peripherals was architecture- and operating-system-dependent. MPI-IO abstracts that away behind a portable interface.

See the MPI/IO site for more.

64 questions
1
vote
1 answer

How to write a multi-dimensional array of structs to disk using MPI I/O?

I am trying to write an array of complex numbers to disk using MPI I/O. In particular, I am trying to achieve this using the function MPI_File_set_view so that I can generalise my code for higher dimensions. Please see my attempt below. struct…
Nanashi No Gombe (510)
1
vote
1 answer

MPI_File_write_at : Writing the same struct twice results in slightly different data blocks in binary file

I have the following simple MPI code: #include #include int main() { struct test { int rank = 1; double begin = 0.5; double end = 0.5; }; MPI_Init(NULL, NULL); int world_rank; …
handy (696)
1
vote
1 answer

Formatted output with MPI_File_write_at?

I'm trying to write a parallel IO program with MPI, I'm required to write the data to the file with a format as: 02 03 04 in the file instead of 2 3 4. fprintf(fpOut,"%.2d ",var); Would be the serial counterpart of what I'm trying to do. I've…
Ilknur Mustafa (301)
1
vote
1 answer

Example of using MPI_Type_create_subarray to do 2d cyclic distribution

I would like to have an example showing how to use MPI_Type_create_subarray to build 2D cyclic distribution for large matrix. I know that MPI_Type_create_darray will give me 2D cyclic distribution, but it is not compatible with SCALAPACK process…
dev.robi (19)
1
vote
1 answer

What could make MPI_File_write_all fail with Floating point exception?

I have a call to MPI_File_write_all: double precision buf[100][100][100]; int data_size = 100*100*100; MPI_Status stat_mpi; MPI_file sgfh; ... MPI_File_write_all(sgfh, (void*)buf, data_size, MPI_DOUBLE, &stat_mpi); The size of buf can vary,…
bob.sacamento (6,283)
1
vote
1 answer

MPI I/O, mix of single- and multiple-process output

I need an MPI C code to write data to a binary file via MPI I/O. I need process 0 to write a short header, then I need the whole range of processes to write their own pieces of the array indicated by the header. Then I need process 0 to write…
bob.sacamento (6,283)
1
vote
1 answer

Interleaving binary data from different processors on MPI-IO

I'm trying to write a binary file using MPI I/O (MPI-2.0, mpich2). Below is a minimal example where 2 files 'chars' and 'ints' should be printed as '0123...' and 'abcd...' respectively. #include #include int main(int argc, char**…
1
vote
1 answer

MPI_ERR_BUFFER when performing MPI I/O

I am testing MPI I/O. subroutine save_vtk integer :: filetype, fh, unit integer(MPI_OFFSET_KIND) :: pos real(RP),allocatable :: buffer(:,:,:) integer :: ie if (master) then open(newunit=unit,file="out.vtk", & …
1
vote
1 answer

How to improve speed of MPI I/O on large number of cores?

I've been trying to run a code using MPI I/O on a large number of cores. The time required for each core to read from and write to a single file (the same for all cores) increases with the number of cores used. I'm currently using 512 cores and this…
mzp (181)
1
vote
1 answer

Unrecognizable characters in the file written by mpi file write

I was beginning to learn mpi i/o for my molecular dynamics code. First, I tried to run this code: http://www.mcs.anl.gov/research/projects/mpi/usingmpi2/examples/starting/io3f_f90.htm After compiling and running, I got 'testfile'. But when I 'vim…
futurewind (11)
1
vote
1 answer

Using MPI-I/O to Read Real Entities in a Fortran Unformatted File

I am trying to read a CFD mesh file through MPI-I/O. The file is a Fortran unformatted format with big-endianness, and it contains mixed variables of integer and real*8 (the file starts with block-size integers, followed by x,y,z coordinates of that…
1
vote
1 answer

MPI-IO: MPI_File_Set_View vs. MPI_File_Seek

I encounter a problem when trying to write a file with MPI-IO, in Fortran 90. If I do the following, using MPI_File_Set_View program test implicit none include "mpif.h" integer :: myrank, nproc, fhandle, ierr integer :: xpos, ypos …
MBR (794)
1
vote
1 answer

MPI IO Formatting for GNUPlot

I have a C++ program using MPI where I would like each process (up to 32) to write to a file. I am using a small, test data set consisting of 100 doubles distributed evenly across the processes. Here is how the output is formatted thus far: …
vrume21 (561)
0
votes
1 answer

Difference between parallel HDF5 MPI I/O transfer modes (Independent vs Collective)

According to the HDF5 manual, the "MPI I/O transfer mode" can be set through the HDF5 function H5Pset_dxpl_mpio(...). One of the parameters is the transfer mode. It can be either H5FD_MPIO_INDEPENDENT (default) or H5FD_MPIO_COLLECTIVE. I can't find…
Squirrel (23)
0
votes
0 answers

Error on file_set_view in a collective mpi_write

I am just trying to write in a collective way in MPI Fortran from a CFD code. In each process, data are divided in blocks, with a general number of cells, and a structure var(b) is created which hosts the two variables r and p of the block b. Then a…
artu72 (1)