ANSYS: Computer-aided engineering software

Contents

  1. Overview of package
    1. General usage
  2. Availability of package by cluster
  3. Licensing
  4. Using Fluent with MPI

Overview of package

General information about package
Package: ANSYS
Description: Computer-aided engineering software
For more information: http://www.ansys.com
Categories:
License: Proprietary

General usage information

Ansys is a suite of software for engineering analysis over a range of disciplines, including finite element analysis, structural analysis, and fluid dynamics.

****************************************************
NOTE
****************************************************
This is restrictively licensed software. It is currently being made available on the UMD Deepthought HPC clusters by the generosity of the Dept of Mechanical Engineering.

Available versions of the package ANSYS, by cluster

This section lists the available versions of the package ANSYS on the different clusters.

Available versions of ANSYS on the Deepthought2 cluster (RHEL8)

Version Module tags CPU(s) optimized for GPU ready?
EM ansys/EM x86_64 Y
21.2 ansys/21.2 x86_64 Y
20.2 ansys/20.2 x86_64 Y

Available versions of ANSYS on the Juggernaut cluster

Version Module tags CPU(s) optimized for GPU ready?
21.2 ansys/21.2 x86_64 Y

Available versions of ANSYS on the Deepthought2 cluster (RHEL6) [DEPRECATED]

Version Module tags CPU(s) optimized for GPU ready?
19.5 ansys/19.5 x86_64 N
18.2 ansys/18.2 x86_64 N

Licensing

The ANSYS suite of software is commercially licensed software. It is currently being made available to users of the UMD Deepthought HPC clusters by the generosity of the Dept of Mechanical Engineering.

Using Fluent with MPI

To run the Fluent package from the ANSYS suite in parallel, with a job spanning multiple nodes, you need to pass special arguments to the fluent command. In particular, you will want to provide:

  • -g : to instruct fluent NOT to use a graphical environment
  • -t $SLURM_NTASKS : to start the number of tasks requested from Slurm
  • -mpi=intel : to ensure the correct MPI libraries are used
  • -ssh : to use ssh instead of rsh to connect to nodes
  • -cnf=NODEFILE : to tell fluent which nodes to use

In the -cnf argument, you must provide the name of a file containing the list of nodes to use. You can generate this file with the scontrol show hostnames command; the exact syntax varies depending on the shell you are using. See the examples below, paying attention to the lines involving $FLUENTNODEFILE.
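For reference, the nodelist file is just a plain text file with one hostname per line. With a hypothetical two-node allocation (the node names below are made up for illustration, not actual Deepthought2 hostnames), the file written by scontrol show hostnames would look something like:

```
compute-a20-1
compute-a20-2
```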

If you are using csh or tcsh, use something like:

#!/bin/tcsh
#SBATCH --ntasks=64
#SBATCH -t 20:00:00
#SBATCH --mem-per-cpu=6000

module load intel
module load ansys

#Get a unique temporary filename to use for our nodelist
set FLUENTNODEFILE=`mktemp`

#Output the nodes to our nodelist file
scontrol show hostnames > $FLUENTNODEFILE

#Display the nodes being used
echo "Running on nodes:"
cat $FLUENTNODEFILE

#Run fluent with the requested number of tasks on the assigned nodes
fluent 3ddp -g -t $SLURM_NTASKS -mpi=intel -ssh -cnf="$FLUENTNODEFILE" -i YOUR_JOU_FILE

#Clean up
rm $FLUENTNODEFILE

If you are using the Bourne or Bourne-again (bash) shells:

#!/bin/bash
#SBATCH --ntasks=64
#SBATCH -t 20:00:00
#SBATCH --mem-per-cpu=6000

#Get our profile
. ~/.profile

module load intel
module load ansys

#Get a unique temporary filename to use for our nodelist
FLUENTNODEFILE=$(mktemp)

#Output the nodes to our nodelist file
scontrol show hostnames > $FLUENTNODEFILE

#Display the nodes being used
echo "Running on nodes:"
cat $FLUENTNODEFILE

#Run fluent with the requested number of tasks on the assigned nodes
fluent 3ddp -g -t $SLURM_NTASKS -mpi=intel -ssh -cnf="$FLUENTNODEFILE" -i YOUR_JOU_FILE

#Clean up
rm $FLUENTNODEFILE
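One optional refinement to the bash script above (a sketch, not a site requirement): registering an EXIT trap ensures the temporary nodelist file is removed even if the script fails partway through, so a stray rm at the end is not needed. The scontrol and fluent invocations would be unchanged from the example above:

```shell
#!/bin/bash
#Create the temporary nodelist file, then register an EXIT trap so it is
#removed whether the script finishes normally or exits early on an error.
FLUENTNODEFILE=$(mktemp)
trap 'rm -f "$FLUENTNODEFILE"' EXIT

#... the scontrol show hostnames and fluent commands from the example
#above would go here, unchanged ...
echo "Nodelist file in use: $FLUENTNODEFILE"

#No explicit rm is needed; the trap cleans up on exit.
```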