
I have set up a few H16R instances on Microsoft Azure that support RDMA, and the Intel pingpong test works fine:

mpirun -hosts <host1>,<host2> -ppn 1 -n 2 -env I_MPI_FABRICS=dapl -env I_MPI_DAPL_PROVIDER=ofa-v2-ib0 -env I_MPI_DYNAMIC_CONNECTION=0 IMB-MPI1 pingpong

However, an issue arises when I want to compile MPI applications (LAMMPS, for instance). It doesn't appear that Microsoft includes Intel compilers on their HPC CentOS 7.1 images, despite the fact that these H16R instances communicate using Intel MPI.

So I installed OpenMPI and compiled LAMMPS using mpic++; however, OpenMPI's mpirun complains and won't run anything.

Do I actually need to purchase the Intel compiler for this task? Is there no way to use OpenMPI on these VMs? This is rather expensive for a personal project.

Nick
  • I was able to get LAMMPS working on Azure with InfiniBand, but I had to use Intel Parallel Studio Cluster Edition. This costs megabucks for a proper license, but you can sign up for a free one-month trial. Note that I had to tweak the LAMMPS makefiles in several places. – user9721518 Apr 30 '18 at 13:58

1 Answer


You don't need Intel compilers in order to use Intel MPI. It works fine with GCC too. IMPI provides both Intel-specific compiler wrappers (mpiicc, mpiicpc, mpiifort) and generic ones (mpicc, mpicxx, mpif90, etc.). The latter work with any compatible compiler.

To use mpicxx for LAMMPS, you must tell the wrapper to use GCC, either by passing a command-line option:

$ mpicxx -cxx=g++ ...

or by setting the I_MPI_CXX environment variable:

$ export I_MPI_CXX=g++
$ mpicxx ...

The same applies to the C and Fortran wrappers. Run them with no arguments whatsoever and you'll get a list of options that can be used to provide the actual compiler name.
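
For example, a sketch of the analogous C and Fortran invocations (the exact flag and variable names can differ between IMPI versions, so verify them against the wrappers' help output):

$ mpicc -cc=gcc ...
$ mpif90 -fc=gfortran ...

or, via the environment:

$ export I_MPI_CC=gcc
$ export I_MPI_F90=gfortran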

As for using an alternative MPI implementation, the virtual InfiniBand adapters provided by Azure seem to lack support for shared receive queues, so Open MPI won't run with its default configuration. You could try running with the following mpiexec option:

--mca btl_openib_receive_queues P,128,256,192,128:P,2048,1024,1008,64:P,12288,1024,1008,64:P,65536,1024,1008,64

This reconfigures all shared receive queues into private ones. I have no idea whether it actually works: I don't have access to an Azure HPC instance, and this is all based on the error message from this question (unfortunately, the OP has not responded to my inquiry about whether the above argument makes Open MPI work).
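
For illustration, a minimal sketch of how that option might sit in a full launch line (the host placeholders follow the question; the lmp_mpi binary and input file names are hypothetical):

$ mpiexec --host <host1>,<host2> -np 2 \
    --mca btl_openib_receive_queues P,128,256,192,128:P,2048,1024,1008,64:P,12288,1024,1008,64:P,65536,1024,1008,64 \
    ./lmp_mpi -in in.lammps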

Hristo Iliev
  • Where are these installed? They aren't in my PATH by default, and performing a `find / -name "mpiicc"` doesn't reveal anything either. Is there a package I should install? – Nick May 18 '17 at 08:04
  • On our cluster the wrappers are in `/opt/intel/impi/5.1.3.181/bin64/`. According to [the documentation](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/classic/rdma-cluster), this is also where IMPI is to be found on the CentOS-based VMs. – Hristo Iliev May 18 '17 at 08:10
  • The IMPI directory is present; however, the compiler wrappers are not (only `mpirun`, `mpivars.sh`, etc.). I suppose they have been removed from the CentOS 7.1 HPC image that Azure supplies. – Nick May 18 '17 at 08:42
  • Sounds like only the run-time components of Intel MPI that allow running precompiled executable files are provided. Check whether there is an `/opt/intelMPI/intel_mpi_packages/` directory with RPM packages as with the SLES instances. If so, perhaps the development version is there. – Hristo Iliev May 18 '17 at 15:24
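
A minimal sketch of how one might check for and install those packages, assuming the directory layout mentioned in the comment above (the path and package contents are unverified):

$ ls /opt/intelMPI/intel_mpi_packages/
$ sudo rpm -ivh /opt/intelMPI/intel_mpi_packages/*.rpm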