
is MPI widely used today in HPC?

hsz

6 Answers


A substantial majority of the multi-node simulation jobs running on clusters everywhere are MPI. The most popular alternatives include GASNet, which supports PGAS languages; the infrastructure underlying Charm++; and probably Linda tuple spaces deserve an honourable mention, just due to the number of core-hours spent running Gaussian. In HPC, UPC, Co-Array Fortran/HPF, PVM, etc. end up dividing the tiny fraction that is left.

Any time you read in the science news about a simulation of a supernova, or of Formula One racing teams using simulation to "virtual wind-tunnel" their cars before making design changes, there's an excellent chance that it is MPI under the hood.

It's arguably a shame that technical computing relies so heavily on MPI, and that no more popular, higher-level general-purpose tools have achieved the same uptake, but that's where we are at the moment.

Jonathan Dursi
    Charm++ developer here. While Charm++ *can* be built to communicate using MPI, that's very much suboptimal for it. It actually has native communication layers for the whole spectrum of HPC systems: shared memory, Ethernet & Infiniband clusters, Cray XE/XK, IBM Blue Gene L/P/Q. We also had native layers for Myrinet, Elan, LAPI, and various other systems while they were still in service. – Phil Miller May 14 '12 at 14:57
  • As for the popularity thing, a recent study published at the most recent Cray Users' Group Meeting shows that NAMD users, written in Charm++, continue to consume about 20% of SUs on NSF's Kraken. Codes based on Global Arrays, most notably NWChem, also have substantial usage, and have native machine layers that don't necessarily run through MPI. – Phil Miller May 14 '12 at 15:00

I worked for two years in the HPC area and can say that 99% of cluster applications were written using MPI.

Elalfer

MPI is widely used in high performance computing, but some machines try to boost performance by deploying shared-memory compute nodes, which usually use OpenMP. In those cases the application would use both MPI and OpenMP to get optimal performance. Some systems also use GPUs to improve performance; I am not sure how well MPI supports that particular execution model.

But the short answer would be yes. MPI is widely used in HPC.

akintayo

It's widely used on clusters. Often it's the only interface a given machine supports for multi-node jobs. There are other abstractions like UPC or StarP, but those are usually implemented with MPI.

Adam
  • The final sentence is just plain wrong. UPC is not "usually implemented with MPI". There are a few UPC implementation supporting an MPI backend for portability, but the production-quality installs of UPC used in real science use a proprietary networking backend like GASNet or uGNI/DMAPP that deliver better performance by avoiding the MPI abstraction layer. – Dan Bonachea Oct 20 '22 at 05:04
  • Hi @DanBonachea of course you're the subject expert here. IIRC 12 years ago the portability of the MPI backend made it semi-mandatory on a great many systems. Thanks for the update! – Adam Jun 27 '23 at 05:49

Yes. For example, the Top500 supercomputers are ranked using LINPACK (the HPL benchmark, which is MPI-based).

tszming

Speaking of HPC, MPI is still the main tool even nowadays. Although GPUs are making strong inroads into HPC, MPI remains number one.

Manolete