I have seen people produce segmentation faults with MPI_Barrier in C (Segmentation fault while using MPI_Barrier in `libpmpi.12.dylib`) and C++ (Why does MPI_Barrier cause a segmentation fault in C++). However, I could not reproduce the errors they got.
Now I get the same kind of error from MPI_Barrier in Fortran. My code is as simple as this:
program main
  implicit none
  include 'mpif.h'

  ! local variables
  character(len=80) :: filename, input
  character(len=4)  :: command
  integer           :: ierror, i, l, cmdunit
  logical           :: terminate
  integer           :: num_procs, my_id, impi_error
  real              :: program_start, program_end

  call MPI_INIT(impi_error)
  call MPI_COMM_RANK(MPI_COMM_WORLD, my_id, impi_error)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, impi_error)

  call MPI_Barrier(MPI_COMM_WORLD)

  program_start = MPI_Wtime()

  filename = 'sc.cmd'
  cmdunit  = 8
  print *, my_id, cmdunit

  call MPI_Barrier(MPI_COMM_WORLD)
  call MPI_Barrier(MPI_COMM_WORLD)
  call MPI_Barrier(MPI_COMM_WORLD)
  call MPI_Barrier(MPI_COMM_WORLD)
  call MPI_Barrier(MPI_COMM_WORLD)

  program_end = MPI_Wtime()

  if (my_id == 0) then
    write(*,'(a,F25.16,a)') "MDStressLab runs in ", program_end - program_start, " s."
  endif

  call MPI_FINALIZE(impi_error)
end program
There is nothing special about the code. However, when I compile it with mpif90 tmp.f90 and then run it with mpirun -n 2 ./a.out, I get:
0 8
1 8
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0x7FBF2C700E08
#1 0x7FBF2C6FFF90
#0 0x7F2EDF972E08
#2 0x7FBF2C3514AF
#1 0x7F2EDF971F90
#2 0x7F2EDF5C34AF
#3 0x7FBF2CA4F808
#4 0x400EB4 in MAIN__ at tmp.f90:?
#3 0x7F2EDFCC1808
#4 0x400EB4 in MAIN__ at tmp.f90:?
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 35660 on node min-virtual-machine exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
The funny thing is that it only crashes with 2 processes: it runs fine with mpirun -n 1 through -n 10, except for -n 2. Since this also seems to happen randomly in C and C++, I think there might be a hidden bug somewhere in the MPI library, but that is just my guess. Could anybody help?
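In case more detail helps, I can recompile with debug flags (assuming the mpif90 wrapper here uses gfortran, which accepts -g and -fbacktrace):

mpif90 -g -fbacktrace tmp.f90
mpirun -n 2 ./a.out

That should make the backtrace report actual line numbers in tmp.f90 instead of tmp.f90:?.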