This is one of those weird questions, I'm afraid, sorry :(
I have a piece of code that uses procedure (function) pointers. It worked fine for me until I tried it on an Ubuntu 18 installation running on a Windows 10 machine. I wanted to compile it in debug mode with gfortran, so I used the -O0 flag. Since then, I have been getting inexplicable segfaults when calling the function that the pointer points to:
program test01
  implicit none
  ! ---
  abstract interface
    real(8) function funcInterface(t)
      real(8), intent(in) :: t
    end function funcInterface
  end interface
  procedure(funcInterface), pointer :: testPointer => null()
  real(8) :: result
  ! ---
  testPointer => testFunction
  result = testPointer(0.0d0) ! <--- issues here.
contains
  real(8) function testFunction(t)
    real(8), intent(in) :: t
    testFunction = -1.0d0 * t
  end function testFunction
end program test01
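For reference, the build command is essentially the following (the file name is just an example; I add -g alongside -O0 for debugging):

gfortran -g -O0 test01.f90 -o test01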
Here's a trace using the GNU debugger:
17 testPointer => testFunction
(gdb) print(testPointer)
$1 = (PTR TO -> ( real(kind=8) ()())) 0x0
(gdb) next
18 result = testPointer(0.0d0)
(gdb) print(testPointer)
$2 = (PTR TO -> ( real(kind=8) ()())) 0x7ffffffed2a0
(gdb) next
Program received signal SIGSEGV, Segmentation fault.
0x00007ffffffed2a0 in ?? ()
Interestingly, the faulting address, 0x00007ffffffed2a0, is exactly the value that ended up in testPointer, and it looks like a stack address rather than an address in the code segment. I also checked the program with valgrind and it came out clean:
==5436== HEAP SUMMARY:
==5436== in use at exit: 0 bytes in 0 blocks
==5436== total heap usage: 21 allocs, 21 frees, 13,520 bytes allocated
==5436==
==5436== All heap blocks were freed -- no leaks are possible
When I compiled the exact same code on a different Linux machine, it worked just fine, with both gfortran and ifort and equivalent flags. On the problematic machine I tried gfortran versions 5 through 8, all with the same result. I suppose I could just leave optimisation enabled and get on with the project, but that gets in the way of my debugging. If someone knows why this fails on one machine but not the other, I'd really like to find out.
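For what it's worth, the only slightly unusual feature of the reproducer is that testFunction is an internal procedure of the main program. A variant that avoids this would look something like the sketch below (untested on the affected machine, included only to help narrow things down):

module testmod
  implicit none
contains
  real(8) function testFunction(t)
    real(8), intent(in) :: t
    testFunction = -1.0d0 * t
  end function testFunction
end module testmod

program test02
  use testmod
  implicit none
  abstract interface
    real(8) function funcInterface(t)
      real(8), intent(in) :: t
    end function funcInterface
  end interface
  procedure(funcInterface), pointer :: testPointer => null()
  real(8) :: result
  ! Same call as before, but the target is now a module procedure,
  ! so no host association with the main program is involved.
  testPointer => testFunction
  result = testPointer(0.0d0)
  print *, result
end program test02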
Thanks,
Artur