openmpi: OpenMPI implementation of Message Passing Interface

Contents

  1. Overview of package
    1. General usage
  2. Availability of package by cluster

Overview of package

General information about package
Package: openmpi
Description: OpenMPI implementation of Message Passing Interface
For more information: http://www.open-mpi.org
Categories:
License: OpenSource (BSD 3-clause)

General usage information

The Message Passing Interface (MPI) is a standard for message passing which forms the basis of most distributed memory parallel codes. OpenMPI is an open library implementing the MPI standard.

This module will add the commands mpirun and mpiexec (for starting MPI-enabled applications) to your path, as well as the compiler wrappers mpicc, mpic++, and mpifort for the C, C++, and Fortran languages, respectively.
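A typical workflow with these commands might look like the following. This is a hedged sketch: the module tag openmpi/3.1.5 is taken from the availability tables below (use whichever version your cluster provides), and hello.c and the rank count are placeholder examples.

```shell
# Load the openmpi module (version tag from the tables below; adjust as needed)
module load openmpi/3.1.5

# Compile an MPI C program with the wrapper, which supplies the MPI
# include and library flags automatically (hello.c is a placeholder name)
mpicc -O2 -o hello hello.c

# Launch the program with 4 MPI ranks
mpirun -np 4 ./hello
```

The wrapper approach is usually simpler than passing the -I/-L flags by hand, since the wrappers already know where the openmpi installation lives.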

The following environment variables have been defined:

  • $OPENMPI_ROOT has been set to the root of the openmpi installation
  • $OPENMPI_LIBDIR points to the directory containing the libraries
  • $OPENMPI_INCDIR points to the directory containing the header files

You will probably wish to use these by adding the following flag to your compilation command (e.g. to CFLAGS in your Makefile):

  • -I$OPENMPI_INCDIR

and the following flags to your link command (e.g. to LDFLAGS in your Makefile):

  • -L$OPENMPI_LIBDIR -Wl,-rpath,$OPENMPI_LIBDIR

You might need to provide the above variables to the configuration script, etc., when compiling codes, and/or use the compiler wrappers listed above.
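As one way to wire these flags together, here is a minimal Makefile sketch. It assumes the openmpi module is loaded (so the variables above are defined in the environment), and main.c, myprog, and the -lmpi link line are illustrative assumptions rather than anything mandated by the module:

```make
# Minimal sketch, assuming the openmpi module is loaded.
# main.c and myprog are placeholder names.
CC      = gcc
CFLAGS  = -O2 -I$(OPENMPI_INCDIR)
LDFLAGS = -L$(OPENMPI_LIBDIR) -Wl,-rpath,$(OPENMPI_LIBDIR)
LDLIBS  = -lmpi

myprog: main.c
	$(CC) $(CFLAGS) main.c -o myprog $(LDFLAGS) $(LDLIBS)
```

Using the mpicc wrapper as CC instead would make the explicit -I/-L/-l flags unnecessary.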

Available versions of the package openmpi, by cluster

This section lists the available versions of the package openmpi on the different clusters.

Available versions of openmpi on the Deepthought2 cluster (RHEL8)

Version   Module tags     CPU(s) optimized for   GPU ready?
3.1.5     openmpi/3.1.5   ivybridge, x86_64      Y

Available versions of openmpi on the Juggernaut cluster

Version   Module tags     CPU(s) optimized for          GPU ready?
3.1.5     openmpi/3.1.5   skylake_avx512, x86_64, zen   Y

Available versions of openmpi on the Deepthought2 cluster (RHEL6) [DEPRECATED]

Version 4.0.1
  Module tags:
    • 4.0.1
    • (a.k.a openmpi/4.0.1/gnu/9.1.0)
    • (a.k.a openmpi/gnu/9.1.0/4.0.1)
  CPU(s) optimized for: ivybridge
  GPU ready? N

Version 1.10.2
  Module tags:
    • 1.10.2
    • (a.k.a openmpi/1.10.2/gnu/6.1.0)
    • (a.k.a openmpi/1.10.2/intel/2016.3.210)
    • (a.k.a openmpi/1.10.2/pgi/17.3)
    • (a.k.a openmpi/gnu/6.1.0/1.10.2)
    • (a.k.a openmpi/intel/2016.3.210/1.10.2)
    • (a.k.a openmpi/pgi/17.3/1.10.2)
  CPU(s) optimized for: ivybridge
  GPU ready? N

Version 1.8.6
  Module tags:
    • 1.8.6
    • (a.k.a openmpi/1.8.6/gnu/4.9.3)
    • (a.k.a openmpi/1.8.6/gnu/4.8.1)
    • (a.k.a openmpi/1.8.6/intel/2015.0.3.032)
    • (a.k.a openmpi/1.8.6/intel/2013.1.039)
    • (a.k.a openmpi/1.8.6/sunstudio/12.4)
    • (a.k.a openmpi/gnu/4.9.3/1.8.6)
    • (a.k.a openmpi/gnu/4.8.1/1.8.6)
    • (a.k.a openmpi/intel/2015.0.3.032/1.8.6)
    • (a.k.a openmpi/intel/2013.1.039/1.8.6)
    • (a.k.a openmpi/sunstudio/12.4/1.8.6)
  CPU(s) optimized for: ivybridge
  GPU ready? N
WARNING
Performance Alert for Deepthought2 RHEL6 Users
We have seen significant performance issues when using OpenMPI version 1.6 or 1.6.5 with more than about 15 cores per node when the -bind-to-core option is NOT used. Deepthought2 users are encouraged to add the -bind-to-core flag to their mpirun command.
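For example, a sketch of such an mpirun invocation (the rank count and myapp are placeholders, not values from this document):

```shell
# Bind each MPI process to a core to avoid the performance issue above.
# Note: OpenMPI 1.8 and later spell this option "--bind-to core" instead.
mpirun -bind-to-core -np 32 ./myapp
```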