We just finished up another MPI Forum meeting earlier this week, hosted at the Cisco node 0 facility in San Jose, CA. A lot of the working groups are making tangible progress and bringing their work back to the full Forum for review and discussion. Sometimes the working group reports are accepted and moved forward towards standardization; other times, the full Forum provides feedback and guidance, and then sends the working group back to committee to keep hashing out details. This is pretty typical stuff for a standards body.

This week, we had a first vote (out of two total) on the MPI_MPROBE proposal. It passed the vote, and will likely pass its next vote in March, meaning that it will become part of the MPI 3.0 draft standard.
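For those unfamiliar with the proposal, MPROBE addresses a long-standing thread-safety gap: with plain MPI_Probe, another thread can receive the probed message before you get to your MPI_Recv. MPI_Mprobe instead dequeues the matched message and returns a handle that only the caller can receive. Here's a rough sketch of the usage pattern; note that the final MPI-3.0 names and signatures could still differ from the proposal that was voted on this week, and this won't compile against an MPI implementation that doesn't yet support the proposal:

```c
/* Sketch of the proposed MPROBE pattern: probe for a message of
 * unknown size, then receive exactly that message via its handle.
 * Assumes an MPI implementation supporting the MPROBE proposal. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int payload[4] = { 1, 2, 3, 4 };
        MPI_Send(payload, 4, MPI_INT, 1, 42, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Message msg;
        MPI_Status status;
        int count;

        /* Unlike MPI_Probe, MPI_Mprobe removes the matched message
         * from the matching queue, so no other thread can "steal" it
         * between the probe and the receive. */
        MPI_Mprobe(0, 42, MPI_COMM_WORLD, &msg, &status);
        MPI_Get_count(&status, MPI_INT, &count);

        int *buf = malloc(count * sizeof(int));
        /* Receive the specific message identified by the handle. */
        MPI_Mrecv(buf, count, MPI_INT, &msg, &status);
        printf("received %d ints\n", count);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}
```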

Many of you may not be using IP-based MPI network transports, but as HPC becomes more and more commoditized, IP-based MPI implementations may actually start gaining in importance. Not on ultra-high-performance systems, of course. But you'd be surprised how many 4-, 8-, and 16-node Ethernet-based clusters are sold these days… particularly as core counts increase. A 16-node Westmere cluster is quite powerful!

Owners of such systems are typically running ISV-based MPI applications, or other "canned" parallel software. Most of them don't use InfiniBand or other high-speed interconnects; they just use good old Ethernet with TCP as the underlying transport for their MPI.

Many in the HPC research community are starting to work on "exascale" these days — the ability to do 10^18 floating point operations per second. Exascale is such a difficult problem that it will require new technologies in many different areas before it can become a reality. Case in point: today's entry at Inside HPC, "InfiniBand Charts Course to Exascale".

That being said, there's a key piece missing from the discussion: the (networking) software. More specifically: the current OpenFabrics Verbs API abstractions are (probably) unsuitable for exascale, a case that Fab Tillier (Microsoft) and I made at the OpenFabrics workshop in Sonoma last year (1up, 2up).

Two hwloc community members have taken it upon themselves to provide high-quality native language bindings for Perl and Python. There's active work going on, and discussions between the hwloc core developers and these binding authors aimed at providing good abstractions, functionality, and user experience.

The Perl CPAN module is being developed by Bernd Kallies: you can download it here (I linked to the directory rather than a specific tarball because he keeps putting up new versions).

The Python bindings are being developed by Guy Streeter (at Red Hat); his git repository is available here.
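For context, here's the kind of functionality these bindings wrap — a minimal sketch using hwloc's underlying C API to discover the machine's topology and count its cores and hardware threads (the counts obviously depend on the machine you run it on):

```c
/* Minimal sketch of the hwloc C API that the Perl and Python bindings
 * wrap: load the current machine's topology, then query object counts.
 * Compile with: gcc topo.c -lhwloc */
#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;

    hwloc_topology_init(&topology);   /* allocate a topology context */
    hwloc_topology_load(topology);    /* probe the current machine   */

    int cores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
    int pus   = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU);
    printf("%d cores, %d hardware threads\n", cores, pus);

    hwloc_topology_destroy(topology);
    return 0;
}
```

The appeal of native Perl and Python bindings is that this same query becomes a few lines of script, with no compile step.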

Some of the individuals posting to this site, including the moderators, work for Cisco Systems. Opinions expressed here and in any corresponding comments are the personal opinions of the original authors, not of Cisco.