OPENMPI (Library)

Introduction

Open MPI is an open source, freely available implementation of both the MPI-1 and MPI-2 documents. Open MPI represents the combination of the efforts of development teams associated with other well-known MPI implementations (FT-MPI, LA-MPI, LAM/MPI), with contributions from the PACX-MPI team. Each of these MPI implementations excelled in one or more areas. The driving motivation behind Open MPI is to bring together the best ideas and technologies from the individual projects and create one world-class open source MPI implementation that excels in all areas.

Documentation

Mailing List

To sign up for email notices about pending version updates, removals, and other
important announcements for this software package, sign in.

Announcements

Jun 14, 2017: Openmpi 2.1.1 modules are now available on all sharcnet systems (except iqaluk, due to disk space issues). The int and intdebug flavors available in previous versions have been renamed mtune and mtunedebug; see module show openmpi/intel1604-mtune/2.1.1 for details, or submit a sharcnet ticket asking for intel compilation help. Version 2.1.1 also includes two new module flavors, ilp64 and ilp64debug, which provide 8-byte Fortran integers (Fort integer size: 8) as described in http://diracprogram.org/doc/release-12/installation/int64/mpi.html

Jun 7, 2016: Please note that ALL openmpi-1.8.3 sharcnet modules will be REMOVED very soon due to performance issues. Please recompile with openmpi-1.8.7 immediately.

Nov 12, 2015: Please note that the sharcnet openmpi module versions 1.7.2, 1.8.3, 1.8.4 and 1.8.5 will be removed from all sharcnet clusters on Nov 19. Codes requiring a version more recent than openmpi 1.6.2 (as provided by the default sharcnet environment) should be moved to one of the available openmpi 1.8.7 modules instead. If there are any questions or concerns, please submit a ticket as soon as possible to https://www.sharcnet.ca/my/problems/submit. Thank you.

Jul 29, 2015: Please note that the sharcnet modules for openmpi 1.7.2, 1.8.1 and 1.8.6 will be removed from all sharcnet clusters on Aug 5, 2015. If there are any concerns, please submit a ticket beforehand to https://www.sharcnet.ca/my/problems/submit. Thank you.

May 22, 2015: Open MPI 1.8.5 new features: 1) improved on-node GPU-to-GPU transfers, even when CUDA IPC is not supported between the GPUs; 2) proper handling of Unified Memory, done by disabling the CUDA IPC and GPU Direct RDMA optimizations on Unified Memory buffers; 3) support for blocking reduction MPI APIs. See https://www.open-mpi.org/faq/?category=runcuda.

Dec 9, 2014: Please note that the openmpi/gcc/1.8.1 module has been renamed to openmpi/gcc/1.8.1-broken on certain clusters, so far including orca and saw, because using it there results in hung jobs. The module is still working on redfin and hound. We are currently investigating the cause and will update this message when more is known.

Aug 13, 2014: The following openmpi modules will be removed from all sharcnet clusters on Aug 20, 2014. If there are any concerns with this change, please submit a problem ticket to the sharcnet web portal as soon as possible: openmpi/intel/1.5.5, openmpi/intel/1.7.2, openmpi/gcc/1.7.2, openmpi/open64/1.7.2, openmpi/intel/1.7.3, openmpi/intel/1.7.4, openmpi/intel/1.7.5rc5.

Jun 28, 2014: At this moment only redfin and saw have confirmed working openmpi 1.8.1 installations. We are working to restore it on the other clusters as soon as possible.

Jun 3, 2014: Advance warning: please note that openmpi 1.7.3, openmpi 1.7.4 and openmpi 1.7.5rc5 contain bugs and should be avoided. They will be removed, along with all legacy openmpi 1.4.X versions, once openmpi 1.8.1 is installed on all sharcnet clusters and fully tested.

May 30, 2014: Please note that openmpi 1.7.3, openmpi 1.7.4 and openmpi 1.7.5rc5 contain bugs and should be avoided. They will be removed once openmpi 1.8.1 is installed.

Nov 11, 2013: The openmpi/intel/1.6.1 module will be removed from angel, brown, goblin, hound, iqaluk, kraken, monk, orca, redfin, saw and wobbie on Nov 25, 2013. If there are any concerns please submit a ticket to https://www.sharcnet.ca/my/problems/submit

Sep 24, 2013: Users of the mpi queue on kraken should first run “module switch openmpi/intel/1.6.2 openmpi/intel/1.6.2_1” before submitting jobs with “-f myri”,
since the default openmpi module does NOT support myrinet. For more information please see https://www.sharcnet.ca/help/index.php/Kraken#Network_.2F_MPI_job_considerations