
As I understand from this link, MPI and DPC++ can be used together: https://community.intel.com/t5/Intel-oneAPI-HPC-Toolkit/Intel-MPI-support-GPU-Computing/td-p/1204653?profile.language=de

I am trying to use GDB on a simple MPI + DPC++ program from Intel's GitHub samples page: https://github.com/oneapi-src/oneAPI-samples/tree/master/DirectProgramming/DPC%2B%2B/ParallelPatterns/dpc_reduce

When I do

mpirun -n 4 -gdb ./mpi_code

the debugger attaches to the 4 processes. The usual gdb commands work, except when I put a breakpoint inside the GPU offloading part (e.g. at line 618 of the sample). GDB completely skips this breakpoint and stops at the next one instead.
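
For context, the offloading part I am talking about looks roughly like this stripped-down sketch (not the actual dpc_reduce code; the variable names and build line are just placeholders for illustration). The marked line is the kind of place where the breakpoint gets skipped:

// reproducer.cpp -- built with something like: mpiicpx -fsycl -g -O0 reproducer.cpp -o mpi_code
#include <mpi.h>
#include <sycl/sycl.hpp>   // <CL/sycl.hpp> on older DPC++ releases
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    sycl::queue q;                                   // default device (GPU if available)
    constexpr size_t n = 1024;
    int* data = sycl::malloc_shared<int>(n, q);      // USM shared allocation

    // GPU offloading part
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        data[i] = static_cast<int>(i[0]) + rank;     // a breakpoint on this line gets skipped
    }).wait();

    printf("rank %d: data[0] = %d\n", rank, data[0]);
    sycl::free(data, q);
    MPI_Finalize();
    return 0;
}

Breakpoints on the host-side lines (e.g. on the printf) are hit as expected on all ranks; only the lines inside the kernel lambda are skipped.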

Is there anything I am missing? A parameter, environment variable, or maybe a flag I need to set?

