
NAMD: Nanoscale Molecular Dynamics

Contents

  1. Overview of package
    1. General usage
  2. Availability of package by cluster
  3. Usage guidelines and hints
    1. Using Older Versions (versions 2.8 or older)
    2. Using Newer Versions on CPUs
    3. Using Newer Versions on GPUs

Overview of package

General information about package
Package: NAMD
Description: Nanoscale Molecular Dynamics
For more information: https://www.ks.uiuc.edu/Research/namd
Categories:
License: Free2Use (NAMD)

General usage information

Nanoscale Molecular Dynamics (NAMD) is software for molecular dynamics simulation, written using the Charm++ parallel programming model. It is noted for its parallel efficiency and is often used to simulate large systems (millions of atoms).

This module will add the namd and charmrun commands to your PATH.
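For example, after loading the module you can check that the commands are on your PATH. This is a minimal sketch; on some clusters you may need to give a specific version or the full module tag, as listed in the tables below, and the executable used in the example scripts on this page is namd2.

module load namd
which namd2
which charmrun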

Available versions of the package NAMD, by cluster

This section lists the available versions of the package NAMD on the different clusters.

Available versions of NAMD on the Zaratab cluster

Version  Module tags                                                  CPU(s) optimized for  GPU ready?
2.14     2.14 (a.k.a. namd/cuda/11.6.2/gcc/9.4.0/nompi/zen2/2.14)     zen2                  Y
2.14     2.14 (a.k.a. namd/nocuda/gcc/9.4.0/openmpi/4.1.1/zen2/2.14)  zen2                  Y
2.14     2.14 (a.k.a. namd/nocuda/gcc/8.4.0/openmpi/3.1.5/zen/2.14)   zen                   Y

Available versions of NAMD on the Juggernaut cluster

Version  Module tags  CPU(s) optimized for         GPU ready?
2.14     namd/2.14    skylake_avx512, x86_64, zen  Y
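
On Zaratab, a specific build can be selected by its full module tag from the table above rather than just the short version number. The following is a hedged sketch, assuming the a.k.a. names above are valid module names:

module load namd/nocuda/gcc/9.4.0/openmpi/4.1.1/zen2/2.14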

Please note that versions 2.8 and under are not MPI- or GPU-aware, hence you MUST use charmrun to run namd2 if you are running an HPC job that will span nodes.

Versions 2.9 and higher are built with an MPI-aware charm++, and so should be started like a standard MPI program using mpirun. A GPU-aware version is also available, and will ONLY run on nodes with a GPU. You must explicitly select it; module load namd/2.9 will ALWAYS return the non-CUDA/non-GPU-enabled version, even if you have previously loaded a cuda module. GPU-enabled versions of namd2 will not work on nodes without a GPU.
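
For example, the following sketch shows the difference between the two builds; the exact module names may differ by cluster, so check the output of module avail first:

module avail namd            # list the available NAMD builds
module load namd/2.9         # the non-CUDA build, even if a cuda module is already loaded
# or, to get the GPU-enabled build (usable only on GPU nodes):
module load namd/2.9/cuda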

Usage guidelines and hints

The following guidelines and hints only cover the mechanics of starting the namd2 program, especially on the High Performance Computing clusters. Unfortunately, the Division of Information Technology does not have the expertise to assist in assembling the inputs, etc., for the NAMD program.

Using Older Versions (versions 2.8 or older)

WARNING
Although we will continue to maintain these older versions of NAMD on the original Deepthought cluster for a while, they are getting long in the tooth and we encourage users to migrate to one of the newer versions. These older versions are NOT supported on Deepthought2.

For the older Division of IT-maintained installations of NAMD (versions 2.8 or older), the NAMD program is NOT built with support for MPI, so you must use charmrun to run NAMD with multiprocess/multinode support. The following example runs a simple example case over 12 processor cores:

#!/bin/bash
#SBATCH -n 12
#SBATCH -A test-hi
#SBATCH -t 1:00

. ~/.profile
SHELL=bash

#Use charmrun for NAMD versions <= 2.8 on DIT systems
NAMD_VERSION=2.8
module load namd/$NAMD_VERSION

NAMD2=`which namd2`
WORKDIR=/export/lustre_1/payerle/namd/tests/alanin

cd $WORKDIR

charmrun ++remote-shell ssh +p12 $NAMD2 alanin

This is a quick example, so the time limit is set to only 1 minute (which is still much longer than needed to complete the job); you will likely need to extend the time limit in your runs, and of course change the allocation to charge against.
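
For a production run, you would typically adjust the corresponding header lines, for example (hypothetical values; substitute your own allocation name and a realistic time estimate):

#SBATCH -t 24:00:00        # e.g. 24 hours; set to a realistic estimate for your run
#SBATCH -A your-allocation # the allocation to charge the job against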

Note: You must give the ++remote-shell ssh arguments to charmrun, otherwise it will attempt to use rsh and fail. The +p12 argument tells charmrun to run the job over 12 cores; adjust it (together with the #SBATCH -n 12 line) to the actual number of cores you intend to use.
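
To keep the +p argument in sync with the #SBATCH -n request automatically, you can read the core count from the SLURM_NTASKS environment variable that Slurm sets for the job. A small sketch based on the script above:

# Derive the core count from Slurm so +p always matches the #SBATCH -n request
NPROCS=${SLURM_NTASKS:-12}
charmrun ++remote-shell ssh +p$NPROCS $NAMD2 alanin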

Using Newer Versions on CPUs (versions 2.9 or higher)

Starting with version 2.9, the Division of IT builds of charm++ (and thus NAMD) are MPI-aware. For these versions, you should not use the charmrun command to start NAMD; instead, use the standard mpirun command, as in the example below.

#!/bin/bash
#SBATCH -n 12
#SBATCH -A test-hi
#SBATCH -t 1:00

. ~/.profile
SHELL=bash

#for newer NAMD builds, version 2.9 and higher, non-cuda
NAMD_VERSION=2.9
module load namd/$NAMD_VERSION

WORKDIR=/lustre_1/payerle/namd/tests/alanin
cd $WORKDIR

mpirun namd2 alanin

Again, this is a short example, so the time limit is only 1 minute. You will likely need to increase this, and change the account to charge against.

Note that we use mpirun to start the job. OpenMPI is Slurm-aware, so it will automatically distribute the namd2 processes over the requested processor cores.
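
If you prefer to be explicit rather than rely on the Slurm integration, you can pass the task count to mpirun yourself; this is an optional variant of the launch line above, using the SLURM_NTASKS variable that Slurm sets from the #SBATCH -n request:

# Equivalent, with the core count stated explicitly
mpirun -np $SLURM_NTASKS namd2 alanin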

Using Newer Versions on GPUs (versions 2.9 or higher)

The Deepthought2 HPC has a significant number of nodes with GPUs available. We have a GPU-aware version of NAMD available, which you can use as shown in the following example:

#!/bin/bash
#SBATCH -N 1
#SBATCH -A test-hi
#SBATCH -t 1:00
#SBATCH --gres=gpu

. ~/.profile
SHELL=bash

#for newer NAMD builds, version 2.9 and higher, cuda enabled
NAMD_VERSION=2.9
module load namd/$NAMD_VERSION/cuda

NAMD2=`which namd2`
WORKDIR=/lustre/payerle/test/namd/alanin

cd $WORKDIR

$NAMD2 alanin
Note here that we ask for a single node (#SBATCH -N 1) and for a node with GPUs (#SBATCH --gres=gpu), and that we load the GPU-enabled version of NAMD (module load namd/$NAMD_VERSION/cuda).

There are far more cores on even a single GPU than are needed for this simple case, so we only request one node. The GPU-enabled NAMD is also MPI-aware, so it should be possible to run on multiple nodes. (If someone succeeds in doing so, please send us your submit script to add to this page.)
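
As an untested sketch only, a multi-node GPU run would presumably combine the MPI-style launch from the CPU example with the GPU request from this one. Treat the following as a starting point to verify, not a known-good recipe; the node and task counts are assumptions:

#!/bin/bash
#SBATCH -N 2                 # two GPU nodes (untested assumption)
#SBATCH --ntasks-per-node=1  # one MPI task per node (untested assumption)
#SBATCH -A test-hi
#SBATCH -t 1:00
#SBATCH --gres=gpu

. ~/.profile
SHELL=bash

NAMD_VERSION=2.9
module load namd/$NAMD_VERSION/cuda

WORKDIR=/lustre/payerle/test/namd/alanin
cd $WORKDIR

# MPI-style launch of the CUDA-enabled build across the allocated nodes
mpirun namd2 alanin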





