I have an MPI-based program written in Fortran which produces a 3D array of complex data at each node (sections of a 2D time series). I would like to use parallel I/O to write these arrays to a single file that can be opened relatively easily in Python for further analysis/visualization. Ideally, the solution should be memory-efficient (i.e. avoid creating intermediate temporary arrays).
Using NetCDF, I have managed to adapt a subroutine which achieves this for a 3D array of real numbers. However, I've hit a stumbling block when it comes to complex arrays.
In the following code I have attempted to extend the subroutine from reals to complex numbers by defining a compound datatype consisting of two reals, on the assumption that the real and imaginary components of the Fortran complex type are stored contiguously along the 1st dimension of the 3D array.
module IO
use NetCDF
use, intrinsic :: iso_fortran_env, only: dp => real64
implicit none
contains
subroutine output_3D(dataname, starts, ends, global_data_dims, &
local_data, MPI_communicator)
character(len=*), intent(in) :: dataname
integer, dimension(3), intent(in) :: starts
integer, dimension(3), intent(in) :: ends
integer, dimension(3), intent(in) :: global_data_dims
complex(dp), intent(in) :: local_data( 1:(ends(1) - starts(1)+ 1), &
1:(ends(2) - starts(2) + 1), &
1:(ends(3) - starts(3) + 1))
integer, dimension(3) :: expanded_starts
integer, intent(in) :: MPI_communicator
integer :: ncid, varid, dimid(3)
integer :: counts(3)
integer :: typeid
! convert the 1-based complex start index to a start in the expanded (real/imag interleaved) 1st dimension
expanded_starts(1) = (starts(1) - 1) * 2 + 1
expanded_starts(2) = starts(2)
expanded_starts(3) = starts(3)
call check(nf90_create( trim(dataname)//'.cdf', &
IOR(NF90_NETCDF4, NF90_MPIIO), &
ncid, &
comm = MPI_communicator, &
info = MPI_INFO_NULL))
call check(nf90_def_dim(ncid, "x", global_data_dims(1), dimid(1)))
call check(nf90_def_dim(ncid, "y", global_data_dims(2) * 2, dimid(2)))
call check(nf90_def_dim(ncid, "z", global_data_dims(3), dimid(3)))
! define a complex data type consisting of two real(8)
call check(nf90_def_compound(ncid, 16, "COMPLEX", typeid))
call check(nf90_insert_compound(ncid, typeid, "REAL", 0, NF90_DOUBLE))
call check(nf90_insert_compound(ncid, typeid, "IMAG", 8, NF90_DOUBLE))
! define a 3D variable of type "complex"
call check(nf90_def_var(ncid, dataname, typeid, dimid, varid))
! exit define mode
call check(nf90_enddef(ncid))
! Now in NETCDF data mode
! set to use MPI/PnetCDF collective I/O
call check(nf90_var_par_access(ncid, varid, NF90_COLLECTIVE))
counts(1) = (ends(1) - starts(1) + 1) * 2
counts(2) = (ends(2) - starts(2) + 1)
counts(3) = (ends(3) - starts(3) + 1)
call check(nf90_put_var(ncid, &
varid, &
local_data, &
start = expanded_starts,&
count = counts))
! close the file
call check(nf90_close(ncid))
return
end subroutine output_3D
subroutine check(status)
integer, intent(in) :: status
if(status /= nf90_noerr) then
print *, trim(nf90_strerror(status))
stop 2
end if
end subroutine check
end module IO
program test_write
use IO
use MPI
implicit none
complex(dp) :: data(2,2,3)
integer :: flock
integer :: rank
integer :: ierr
integer :: i, j, k
call MPI_init(ierr)
call MPI_comm_size(MPI_comm_world, flock, ierr)
call MPI_comm_rank(MPI_comm_world, rank, ierr)
do k = 1, 3
do j = 1, 2
do i = 1, 2
data(i,j,k) = cmplx(i, j, kind=dp)
enddo
enddo
enddo
if (rank == 0) then
call output_3D('out', [1,1,1], [2,2,3], [2,2,6], &
data, MPI_comm_world)
else
call output_3D('out', [1,1,4], [2,2,6], [2,2,6], &
data, MPI_comm_world)
endif
call MPI_finalize(ierr)
end program test_write
The above code fails to compile with a "There is no specific function for nf90_put_var" error. This indicates that the function is not happy with the datatype of the input array, so clearly there is something I'm missing regarding the usage of compound datatypes.
EDIT: One simple workaround is to associate the complex array with a real pointer, as described in this post. The array can then be reshaped/recast using numpy to recover the complex array in Python. It's a bit clunky and somewhat dissatisfying, but it's probably good enough for my purposes for now.
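For reference, a rough sketch of that pointer trick is below, assuming it lives in the IO module above (so dp is in scope) and that the working real-number writer is called output_3D_real (that name is just a placeholder for whatever the real-valued subroutine actually is). It aliases the complex array as a real array with the 1st extent doubled, so no temporary copy is made:
subroutine output_3D_complex(dataname, starts, ends, global_data_dims, &
                             local_data, MPI_communicator)
use, intrinsic :: iso_c_binding, only: c_loc, c_f_pointer
character(len=*), intent(in) :: dataname
integer, dimension(3), intent(in) :: starts, ends, global_data_dims
complex(dp), intent(in), target :: local_data( 1:(ends(1) - starts(1) + 1), &
                                               1:(ends(2) - starts(2) + 1), &
                                               1:(ends(3) - starts(3) + 1))
integer, intent(in) :: MPI_communicator
real(dp), pointer :: real_view(:,:,:)
! view the complex data as a real array whose 1st extent is doubled
! (re, im, re, im, ... along dimension 1); no copy is made
call c_f_pointer(c_loc(local_data), real_view, &
                 [2 * size(local_data, 1), size(local_data, 2), size(local_data, 3)])
! reuse the real-valued writer, doubling the x bookkeeping to match
! (output_3D_real is a placeholder for the existing real-number routine)
call output_3D_real(dataname, &
                    [(starts(1) - 1) * 2 + 1, starts(2), starts(3)], &
                    [ends(1) * 2, ends(2), ends(3)], &
                    [global_data_dims(1) * 2, global_data_dims(2), global_data_dims(3)], &
                    real_view, MPI_communicator)
end subroutine output_3D_complex
On the Python side, the doubled dimension should then be recoverable with something like real_array.view(numpy.complex128), provided the interleaved real/imag pairs end up along the last, contiguous axis of the array as read back.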