
I tried installing and using Theano with CUDA 9.0 on a P100 node. The installation itself went smoothly, but I get a segmentation fault when importing Theano (see below).

I tried both Theano 0.9.0 and Theano 0.10.0beta1 in combination with libgpuarray/pygpu 0.6.8 and 0.6.9. All combinations result in a segfault.

Here is my setup:

* RHEL 7
* GCC: 4.8.5
* CUDA: 9.0
* cuDNN: 5.1.5
* Python: 2.7.13
* cmake: 3.7.2

[bsankara@c460 ~]$ python
Python 2.7.13 (default, Aug 10 2017, 07:33:11)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import theano
--------------------------------------------------------------------------
A process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.

The process that invoked fork was:

  Local host:          [[52508,1],0] (PID 3946)

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
[c460:03946] *** Process received signal ***
[c460:03946] Signal: Segmentation fault (11)
[c460:03946] Signal code: Invalid permissions (2)
[c460:03946] Failing at address: 0x3fff8d48f5b0
[c460:03946] [ 0] [0x3fff9cdf0478]
[c460:03946] [ 1] /home/bsankara/software/ppc64le-08102017/lib/libgpuarray.so.2(load_libcuda+0x60)[0x3fff8631b5e0]
[c460:03946] [ 2] /home/bsankara/software/ppc64le-08102017/lib/libgpuarray.so.2(+0x3f384)[0x3fff862df384]
[c460:03946] [ 3] /home/bsankara/software/ppc64le-08102017/lib/libgpuarray.so.2(+0x41118)[0x3fff862e1118]
[c460:03946] [ 4] /home/bsankara/software/ppc64le-08102017/lib/libgpuarray.so.2(gpucontext_init+0x90)[0x3fff862c7930]
[c460:03946] [ 5] /home/bsankara/software/ppc64le-08102017/lib/python2.7/site-packages/pygpu-0.6.8-py2.7-linux-ppc64le.egg/pygpu/gpuarray.so(+0x2c974)[0x3fff8638c974]
[c460:03946] [ 6] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(+0x101050)[0x3fff9cc61050]
[c460:03946] [ 7] /home/bsankara/software/ppc64le-08102017/lib/python2.7/site-packages/pygpu-0.6.8-py2.7-linux-ppc64le.egg/pygpu/gpuarray.so(+0x54318)[0x3fff863b4318]
[c460:03946] [ 8] /home/bsankara/software/ppc64le-08102017/lib/python2.7/site-packages/pygpu-0.6.8-py2.7-linux-ppc64le.egg/pygpu/gpuarray.so(+0x56530)[0x3fff863b6530]
[c460:03946] [ 9] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyCFunction_Call+0x164)[0x3fff9cc31554]
[c460:03946] [10] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8e64)[0x3fff9ccc9484]
[c460:03946] [11] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0xb40)[0x3fff9cccb360]
[c460:03946] [12] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8f04)[0x3fff9ccc9524]
[c460:03946] [13] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0xb40)[0x3fff9cccb360]
[c460:03946] [14] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8f04)[0x3fff9ccc9524]
[c460:03946] [15] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0xb40)[0x3fff9cccb360]
[c460:03946] [16] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyEval_EvalCode+0x34)[0x3fff9cccb484]
[c460:03946] [17] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyImport_ExecCodeModuleEx+0xe0)[0x3fff9cce8960]
[c460:03946] [18] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(+0x188e50)[0x3fff9cce8e50]
[c460:03946] [19] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(+0x18ad54)[0x3fff9ccead54]
[c460:03946] [20] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(+0x18a540)[0x3fff9ccea540]
[c460:03946] [21] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyImport_ImportModuleLevel+0x2f4)[0x3fff9cceb7b4]
[c460:03946] [22] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(+0x15d038)[0x3fff9ccbd038]
[c460:03946] [23] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyCFunction_Call+0x164)[0x3fff9cc31554]
[c460:03946] [24] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyObject_Call+0x74)[0x3fff9cbc1ab4]
[c460:03946] [25] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x68)[0x3fff9ccbfc68]
[c460:03946] [26] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x3214)[0x3fff9ccc3834]
[c460:03946] [27] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0xb40)[0x3fff9cccb360]
[c460:03946] [28] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyEval_EvalCode+0x34)[0x3fff9cccb484]
[c460:03946] [29] /home/bsankara/software/ppc64le-08102017/lib/libpython2.7.so.1.0(PyImport_ExecCodeModuleEx+0xe0)[0x3fff9cce8960]
[c460:03946] *** End of error message ***
Segmentation fault

Any help would be appreciated. Thanks.


2 Answers


Grab a demo MPI C or C++ program from the web and compile it with mpicc / mpic++. Check that the compiler works and that the resulting executable can run and handle point-to-point communication between different nodes in the cluster.
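A minimal point-to-point test might look like the sketch below. It writes a small MPI program in which rank 0 sends an integer to rank 1; the compile and run commands are shown as comments because they assume mpicc and mpirun are on your PATH on the cluster.

```shell
# Write a minimal MPI point-to-point test: rank 0 sends, rank 1 receives.
cat > mpi_pingpong.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}
EOF
# On the cluster (mpicc/mpirun assumed to be available):
#   mpicc mpi_pingpong.c -o mpi_pingpong
#   mpirun -np 2 ./mpi_pingpong
```

If this compiles and the two ranks exchange the message across nodes, the MPI toolchain itself is healthy and you can look elsewhere for the segfault.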

You probably used the wrong mpicc to compile Theano, and that compiler isn't binary-compatible with the InfiniBand library (or whatever interconnect links the machines in the cluster).

For example, if the InfiniBand library was compiled with gcc but Theano was compiled with an mpicc based on the Intel compiler, it won't work.

You can set an environment variable to make Open MPI's mpicc wrapper use a different underlying compiler.
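With Open MPI this is done through the `OMPI_CC` / `OMPI_CXX` variables honoured by the wrapper compilers; the compiler paths below are only illustrative.

```shell
# Open MPI's wrapper compilers honour OMPI_CC / OMPI_CXX.
# Replace /usr/bin/gcc with the compiler you actually want the wrapper to use.
export OMPI_CC=/usr/bin/gcc
export OMPI_CXX=/usr/bin/g++
# 'mpicc -show' prints the underlying command line without compiling,
# so you can confirm which compiler the wrapper will now invoke:
#   mpicc -show
echo "OMPI_CC=$OMPI_CC"
```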

If you have multiple MPI implementations compiled by different compilers on that machine, use ldd to find out which shared object (those .so files) depends on which implementation.
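For example, ldd on the library path from the trace above (adjust to your own install) lists its runtime dependencies; grepping for "mpi" shows which libmpi.so it resolves to. The libgpuarray command is commented out here because the path only exists on the cluster; the live line just demonstrates ldd on the shell binary.

```shell
# ldd lists the shared libraries a binary or .so resolves at load time.
# On the cluster, compare the MPI dependency of your libraries, e.g.:
#   ldd ~/software/ppc64le-08102017/lib/libgpuarray.so.2 | grep -i mpi
#   ldd ./mpi_test_binary | grep -i mpi
# If the two resolve to different libmpi.so files, the builds are mixed.
# Self-contained demonstration of the command itself:
ldd /bin/sh | head -n 3
```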

The best case, of course, is to use the same compiler and the same MPI wrapper to compile everything, and to package the results into separate environment modules.


The answer turns out to be the gcc version used to build libgpuarray. For some reason, gcc 4.8.5 has issues with libgpuarray, and that is what was causing the segmentation fault.

I installed gcc 5.4.0 in my user space and recompiled cmake and libgpuarray, as well as other packages including Theano and NumPy (just to be sure), and the segmentation fault no longer occurs.
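The rebuild can be sketched roughly as follows; all paths are illustrative for a user-space gcc 5.4.0 install, and the actual build commands are commented because they only make sense on the cluster.

```shell
# Point the build tools at the user-space gcc 5.4.0 (illustrative paths).
export PREFIX="$HOME/software/gcc54-stack"
export CC="$HOME/gcc-5.4.0/bin/gcc"
export CXX="$HOME/gcc-5.4.0/bin/g++"
# Then rebuild libgpuarray/pygpu against the new compiler, e.g.:
#   cd libgpuarray && mkdir -p build && cd build
#   cmake .. -DCMAKE_INSTALL_PREFIX="$PREFIX" -DCMAKE_C_COMPILER="$CC"
#   make && make install
#   cd .. && python setup.py build_ext -I"$PREFIX/include" -L"$PREFIX/lib"
#   python setup.py install
echo "building with CC=$CC"
```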

The other change was that the cluster admins updated CUDA to 9.0.151 with the new driver 384.66.
