
I am trying to write a 4 * 4 array using MPI_FILE_SET_VIEW and MPI_FILE_WRITE_ALL. xx is a 4 * 4 array and I expect xx = (0,1,2,3; 0,1,2,3; 0,1,2,3; 0,1,2,3) from this code. The global size and the local size are 4 and 2. I first create a 2 * 2 subarray file type, so that this 4 * 4 array is divided into 4 parts, each 2 * 2. Then I set the view with this file type and write xx. The result should equal (0,1,2,3; 0,1,2,3; 0,1,2,3; 0,1,2,3); however, it does not. Some of the results are right, and some are wrong.

1 When I do j=ls2,le2; i=ls1,le1; xx(i,j)=i, is the array xx() a 4 * 4 array or a 2 * 2 array? Here ls1=ls2=0 and le1=le2=1.

2 For MPI_FILE_WRITE_ALL, should I use the 4 * 4 array or the 2 * 2 array? And what should I put for the count: 1 or 4?

3 For MPI_FILE_WRITE_ALL, should I pass filetype as the datatype?

  integer::filesize,buffsize,i,Status(MPI_STATUS_SIZE),charsize,disp,filetype,j,count
  integer::nproc,cart_comm,ierr,fh,datatype
  
  INTEGER(KIND=MPI_OFFSET_KIND) offset
  integer,dimension(dim):: sizes,inersizes,start,sb,ss
  character:: name*50,para*100,zone*100


  do j=local_start(2),local_end(2)
     do i=local_start(1),local_end(1)
        xx(i,j)=i
     enddo
  enddo

  count=1
  offset=0
  start=cart_coords*local_length



  call MPI_TYPE_CREATE_SUBARRAY(2,global_length,local_length,start,MPI_ORDER_FORTRAN,&
  MPI_integer,filetype,ierr)
  call MPI_TYPE_COMMIT(filetype,ierr)

  call MPI_File_open(MPI_COMM_WORLD,'out.dat', &
  MPI_MODE_WRONLY + MPI_MODE_CREATE,MPI_INFO_NULL,fh,ierr)


  call MPI_File_set_view(fh,offset,MPI_integer,filetype,&
  "native",MPI_INFO_NULL,ierr)
  CALL MPI_FILE_WRITE(fh, xx,1, filetype, MPI_STATUS_ignore, ierr)
    Please show a complete program (as you did yesterday) so we can compile and run it - https://stackoverflow.com/help/minimal-reproducible-example. Only that way can we be absolutely sure what we recommend is correct and addresses what you need. – Ian Bush Jul 15 '21 at 06:28
  • https://stackoverflow.com/questions/10341860/mpi-io-reading-and-writing-block-cyclic-matrix might be of use – Ian Bush Jul 15 '21 at 07:06

1 Answer


Below is code which I think does what you want. It is based upon what you posted yesterday and then deleted - please don't do that; rather, edit the question to improve it. I have also changed to a 6x4 global size and a 3x2 process grid, as rectangular grids are more likely to catch bugs.

Anyway, to answer your questions:

1 - You only store a part of the array locally, so the array needs to be declared as only (1:2,1:2). This is almost the whole point of distributed memory programming: each process only holds a part of the whole data structure.

2 - You only have a 2x2 array locally, so it should be a 2x2 array holding whatever data is to be stored locally. You are writing an array of integers, so I think it is simplest to say you are writing 4 integers.

3 - See above: you are writing an array of integers, so I think it is simplest to say you are writing 4 integers. The filetype is (in my experience) only used in the call to MPI_File_set_view, to describe the layout of the data in the file via the filetype argument. When you actually write data, just tell mpi_file_write and friends what you are writing - see the snippet below.
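To make the contrast concrete, here is a minimal sketch of the key change, reusing the names from the question's fragment (fh, offset, filetype, and the local 2x2 integer array xx):

  ! The filetype belongs only in the view - it describes where this
  ! process's block sits inside the file
  call MPI_File_set_view(fh, offset, MPI_INTEGER, filetype, &
       "native", MPI_INFO_NULL, ierr)

  ! Wrong here: filetype has the extent of the whole global array, so it
  ! does not describe the 2x2 buffer in memory
  ! call MPI_File_write_all(fh, xx, 1, filetype, MPI_STATUS_IGNORE, ierr)

  ! Right: the memory buffer is simply 4 integers
  call MPI_File_write_all(fh, xx, 4, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)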

ijb@ijb-Latitude-5410:~/work/stack$ mpif90 --version
GNU Fortran (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

ijb@ijb-Latitude-5410:~/work/stack$ mpif90 --showme:version
mpif90: Open MPI 4.0.3 (Language: Fortran)
ijb@ijb-Latitude-5410:~/work/stack$ cat mpiio.f90
Program test
  Use mpi
  Implicit None
  Integer::rank,nproc,ierr,filetype,cart_comm
  Integer::fh
  Integer(kind=mpi_offset_kind):: offset=0
  Integer,Dimension(2,2)::buff
  Integer::gsize(2)
  Integer::start(2)
  Integer::subsize(2)
  Integer::coords(2)
  Integer:: nprocs_cart(2)=(/3,2/)
  Integer :: i, j
  Logical::periods(2)
  Character( Len = * ), Parameter :: filename = 'out.dat'

  gsize= [ 6,4 ]
  subsize= [ 2,2 ]
  periods = [ .False., .False. ]
  offset=0
  
  Call MPI_init(ierr)
  Call MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)
  Call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  Call MPI_Dims_create(nproc, 2, nprocs_cart, ierr)
  Call MPI_Cart_create(MPI_COMM_WORLD, 2, nprocs_cart, periods, .True., &
       cart_comm, ierr)
  Call MPI_Comm_rank(cart_comm, rank, ierr)
  Call MPI_Cart_coords(cart_comm, rank, 2, coords, ierr)

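  ! Offset (0-based) of this process's block within the global array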
  start=coords * subsize

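  ! Fill the local block with its global, column-major, 0-based linear
  ! index so the file contents are easy to check with od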
  Do j = 1, 2
     Do i = 1, 2
        buff( i, j ) = ( start( 1 ) + ( i - 1 ) ) + &
             ( start( 2 ) + ( j - 1 ) ) * gsize( 1 )
     End Do
  End Do

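  ! filetype describes where this block lives inside the global array
  ! in the file; it is only ever passed to MPI_File_set_view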
  Call MPI_TYPE_CREATE_SUBARRAY(2,gsize,subsize,start,MPI_ORDER_FORTRAN,&
       MPI_integer,filetype,ierr)
  Call MPI_TYPE_COMMIT(filetype,ierr)

  ! For testing make sure we have a fresh file every time
  ! so don't get confused by looking at the old version
  If( rank == 0 ) Then
     Call mpi_file_delete( filename, MPI_INFO_NULL, ierr )
  End If
  Call mpi_barrier( mpi_comm_world, ierr )

  ! Open in exclusive mode making sure the delete has occurred
  Call MPI_File_open(MPI_COMM_WORLD,filename,&
       MPI_MODE_WRONLY + MPI_MODE_CREATE + MPI_MODE_EXCL, MPI_INFO_NULL, fh,ierr)

  Call MPI_File_set_view(fh,offset,MPI_integer,filetype,&
       "native",MPI_INFO_NULL,ierr)

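  ! Write the whole local block: a count of 4 elementary integers,
  ! not 1 of the filetype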
  Call MPI_FILE_WRITE_all(fh, buff, 4, mpi_integer, MPI_STATUS_ignore, ierr)


  Call MPI_File_close(fh,ierr)
  Call MPI_FINALIZE(ierr)
  
End Program test
ijb@ijb-Latitude-5410:~/work/stack$ mpif90 -Wall -Wextra -fcheck=all -O -g -std=f2008 -fcheck=all mpiio.f90 
ijb@ijb-Latitude-5410:~/work/stack$ mpirun --oversubscribe -np 6 ./a.out 
ijb@ijb-Latitude-5410:~/work/stack$ od -v -Ad -t d4 out.dat
0000000           0           1           2           3
0000016           4           5           6           7
0000032           8           9          10          11
0000048          12          13          14          15
0000064          16          17          18          19
0000080          20          21          22          23
0000096
Ian Bush
  • Thank you so much for the answer, and I am sorry for my unclear code; it is really a large code and I just wanted to keep it simple. For the first problem: when I create the xx array, the upper and lower limits come from the local top-left and bottom-right corner points. In this case, just as in your code, do I create six 2x2 arrays or just one 6x4 array? For the second problem, I think your answer solved my biggest problem: I gave the wrong type for the output file. Now, can I directly write the whole array rather than 4 MPI_integers? – Mac cchiatooo Jul 15 '21 at 13:02
  • The global size in my example is 6x4. I am using 6 cores in a 3x2 process grid. Thus the local size on a process is 2x2, and each of the 6 processes has one of these. As for the second question, what do you mean by the whole array? The global object that is distributed across the processes, or the local 2x2 object? Actually, in this case you are doing both: each process writes all of its local array, and that in total is the whole global array - did you look at the output of od that I provided? – Ian Bush Jul 15 '21 at 13:08
  • I now understand. I create a 2x2 array on every processor, and then write all these arrays to one file. So, if I want to write_all(.., xx, 1, array_type), do I need to first create a new subarray type array_type? – Mac cchiatooo Jul 15 '21 at 13:25
  • If you mean create a derived type that "contains" all the array and just write one of that derived type (instead of 4 integers) - yes. But I freely admit I never do it like that; a sketch of that approach is shown after these comments. – Ian Bush Jul 15 '21 at 13:28
  • I see. Just one more question: if I change all the MPI_INTEGER to MPI_DOUBLE_PRECISION, what should I notice? – Mac cchiatooo Jul 15 '21 at 13:36
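For the derived-type variant discussed in the comments above, here is a minimal sketch; it assumes it is dropped into the answer's program in place of the MPI_FILE_WRITE_all call, with buff the local 2x2 block, and memtype is just an illustrative name:

  Integer :: memtype

  ! Treat the local 2x2 block as a single unit of 4 contiguous integers
  Call MPI_Type_contiguous(4, MPI_INTEGER, memtype, ierr)
  Call MPI_Type_commit(memtype, ierr)

  ! Write one element of the derived type instead of 4 separate integers
  Call MPI_File_write_all(fh, buff, 1, memtype, MPI_STATUS_IGNORE, ierr)

  Call MPI_Type_free(memtype, ierr)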