I am trying to use the `mpi_f08` module to do halo exchange on a series of rank 4, 5, and 6 arrays. Previously I used subarray types for this, but I ended up with so many that ifort couldn't keep track of all of them and started corrupting them when compiling with `-ipo`.
I am using code along the lines of:

```fortran
call MPI_Isend(Array(1:kthird, ksizex_l, 1:ksizey_l, 1:ksizet_l, 1:size5, 1:size6), size, MPI_Double_Complex, ip_xup, 0 + tag_offset, comm, reqs(1))
call MPI_Irecv(Array(1:kthird, 0, 1:ksizey_l, 1:ksizet_l, 1:size5, 1:size6), size, MPI_Double_Complex, ip_xdn, 0 + tag_offset, comm, reqs(2))
```
(and then later a call to `MPI_Waitall`).
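For completeness, the wait step looks roughly like this (a sketch; `reqs` is the `type(MPI_Request)` array filled by the calls above):

```fortran
! Complete both the send and the receive before touching the halo data.
call MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE)
```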
ifort 2017 with Intel MPI 2017 gives the following warning for each such line:

```
test_mpif08.F90(51): warning #8100: The actual argument is an array section or assumed-shape array, corresponding dummy argument that has either the VOLATILE or ASYNCHRONOUS attribute shall be an assumed-shape array. [ARRAY]
```
In spite of this, the halo exchange works fine for rank-4 and rank-5 arrays. However, for rank-6 arrays, data goes to and comes from completely the wrong places: data from the halo on the sending process (which was not in the array section passed to `MPI_Isend`) appears in the bulk of the receiving process (in locations that were not passed to `MPI_Irecv`).
Using ifort 2018 and Intel MPI 2019 preview gives an additional error (not just a warning):

```
test_halo_6_aio.F90(60): warning #8100: The actual argument is an array section or assumed-shape array, corresponding dummy argument that has either the VOLATILE or ASYNCHRONOUS attribute shall be an assumed-shape array. [ARRAY]
call MPI_Isend(Array(1:kthird, ksizex_l, 1:ksizey_l, 1:ksizet_l, 1:size5, 1:size6), size, MPI_Double_Complex, ip_xup, 0 + tag_offset, comm, reqs(1))
-------------------^
test_halo_6_aio.F90(60): error #7505: If an actual argument is an array section with vector subscript and corresponding dummy argument does not have VALUE attribute, it must not have ASYNCHRONOUS / VOLATILE attribute. [BUF]
call MPI_Isend(Array(1:kthird, ksizex_l, 1:ksizey_l, 1:ksizet_l, 1:size5, 1:size6), size, MPI_Double_Complex, ip_xup, 0 + tag_offset, comm, reqs(1))
^
```
Three interrelated questions:

- Is there something incorrect about my syntax in the calls to `MPI_Isend` and `MPI_Irecv` that is causing the warnings? How can I fix it so that the warnings are no longer triggered?
- Is this warning the cause of the array corruption I'm seeing with rank-6 arrays?
- How can I avoid corrupting rank-6 arrays?
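For reference, one workaround I'm aware of is staging the exchange through explicitly allocated contiguous buffers, so that no array section is ever passed to a nonblocking call. A sketch (not tested against my full code; `dp` here stands for whatever double-precision kind `Array` uses, and `size`, the neighbour ranks, and the communicator are set as above):

```fortran
! Contiguous rank-5 staging buffers matching the halo slices
! (the fixed second subscript drops one dimension from Array).
complex(dp), allocatable :: sendbuf(:,:,:,:,:), recvbuf(:,:,:,:,:)

allocate(recvbuf(kthird, ksizey_l, ksizet_l, size5, size6))

! Copy the outgoing slice into a contiguous buffer before posting.
sendbuf = Array(1:kthird, ksizex_l, 1:ksizey_l, 1:ksizet_l, 1:size5, 1:size6)

call MPI_Isend(sendbuf, size, MPI_Double_Complex, ip_xup, 0 + tag_offset, comm, reqs(1))
call MPI_Irecv(recvbuf, size, MPI_Double_Complex, ip_xdn, 0 + tag_offset, comm, reqs(2))
call MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE)

! Only after completion, scatter the received halo back into place.
Array(1:kthird, 0, 1:ksizey_l, 1:ksizet_l, 1:size5, 1:size6) = recvbuf
```

This defeats the point of nonblocking overlap unless the copies are cheap, which is why I'd prefer a fix for the direct array-section calls.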
I've put a failing example into this gist.