I understand that OpenMPI uses OpenIB, and that OpenIB uses IP over InfiniBand (IPoIB).
I don't understand why native IB is not used, if it is faster than IPoIB.
Is there any MPI implementation that uses native IB?
OpenIB is the early name of the OpenFabrics Alliance. As early as 2005, the name OpenIB was dropped in favor of OpenFabrics. The OpenFabrics Alliance distributes OFED, a software stack that supports many protocols and APIs, including DAPL, IPoIB, SCSI over RDMA, and many more. OFED relies on low-level device drivers provided by the hardware vendors.
Some hardware vendors distribute their own custom builds of OFED. These custom builds do not differ much from any other OFED distribution, except that they come bundled with the vendor's device drivers.
In the past OFED used to include an MPI implementation, namely OpenMPI, but it no longer does so (you have probably read the OpenMPI FAQ).
OpenMPI still uses the name openib for its OFED-based InfiniBand Byte Transfer Layer (BTL) component.
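If you want to check whether your OpenMPI build actually provides the openib BTL, ompi_info lists the available MCA components. This is only a sketch; the exact output format varies between OpenMPI versions:

    # List the BTL components this OpenMPI installation was built with;
    # look for a line mentioning "MCA btl: openib"
    ompi_info | grep btl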
At different times OpenMPI has also supported vendor-specific InfiniBand APIs, such as Mellanox mVAPI (the mvapi BTL) and the Mellanox Messaging Library (mxm).
However, openib offers portability, and with recent versions you are likely to get performance as good as that of the vendor-specific APIs.
The openib component does not use IP over IB. If you want to use IP over IB, use the tcp component instead and set up your mpiexec hostfiles accordingly. A sketch of both cases follows.
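For illustration, here is a minimal sketch of selecting the transport via OpenMPI's MCA parameters. The interface name ib0 and the executable name ./my_app are assumptions and will differ on your system:

    # Native InfiniBand through the openib BTL (verbs, no IPoIB):
    mpiexec --mca btl openib,self,sm -np 4 ./my_app

    # IP over IB through the tcp BTL, restricted to the assumed IPoIB interface ib0:
    mpiexec --mca btl tcp,self --mca btl_tcp_if_include ib0 -np 4 ./my_app

With the tcp BTL, the hostfile should list the nodes by their IPoIB hostnames or addresses so that the traffic actually goes over the InfiniBand fabric rather than Ethernet.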
AFAIK, MVAPICH2 is an MPI implementation that does not use IPoIB, i.e., it uses InfiniBand verbs directly.