Package: | visit |
---|---|
Description: | interactive, scalable visualization, animation and analysis tool |
For more information: | http://visit.llnl.gov/ |
Categories: | |
License: | OpenSource (LLNL License) |
VisIt is an open-source interactive parallel visualization and graphical analysis tool for viewing scientific data. It can be used to visualize scalar and vector fields defined on 2D and 3D structured and unstructured meshes. VisIt was designed to handle very large data set sizes in the terascale range and yet can also handle small data sets in the kilobyte range.
This module will add the visit and related commands to your PATH.
This section lists the available versions of the package visit on the different clusters.
Version | Module tags | CPU(s) optimized for | GPU ready? |
---|---|---|---|
3.3.3 | visit/3.3.3 | zen2 | Y |
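For example, to make the visit command available in a login session (module name as listed in the table above):

```bash
# Load the VisIt module listed above, then confirm the command is on your PATH
module load visit/3.3.3
which visit
```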
This section discusses various topics relating to using VisIt on High Performance Computing (HPC) clusters.

One major concern when using visualization software such as VisIt on HPC clusters is how to display the data. HPC clusters can generate large amounts of data, and visualization tools are useful in enabling researchers to understand the data that was produced. But generally the researchers are not sitting anywhere near the HPC clusters, the clusters generally do not have displays attached, and users usually wish to view the data on their desktop workstations. While users can copy the data files from the HPC clusters to their workstations, this can be time consuming, as the data files are sometimes quite large, and it assumes there is room on the workstation disk.

In the remainder of this subsection, we discuss some ways to view, from your desktop or a similar system, data sitting on disks attached to an HPC cluster. If you have a desktop with an X server available, the easiest solution might be to simply ssh to one of the login nodes and run visit there with the X display tunnelled back to your desktop. The help pages on using X11 discuss the mechanics of this; basically, you ssh to the login node with X11 tunnelling enabled, and then run visit in the remote shell.
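For example, from a Unix-like desktop (the hostname and username here are illustrative; use the login node and account for your cluster):

```bash
# ssh to a login node with X11 forwarding enabled (-X, or -Y for trusted forwarding)
ssh -Y USERNAME@login.deepthought2.umd.edu

# then, in the remote shell, load the module and start VisIt;
# the GUI will display back on your desktop's X server
module load visit
visit &
```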
When this works, it can be the simplest way to view data remotely using VisIt. However, even when it works, it can be sluggish: the visit process on the HPC system is sending all of that graphics data back to your desktop for display, and things can become quite unresponsive at times. Furthermore, there can be quirks and incompatibilities between the version of X that the VisIt on the HPC cluster was built against and the X server running on your desktop, which can cause all sorts of issues. In general, if you encounter issues, it is probably easiest to just use VisIt in client/server mode.

VisIt supports a client/server mode wherein you launch the VisIt GUI on your workstation/desktop but the data processing is handled on one or more remote systems; graphical processing is split between the workstation and the remote systems.
This is particularly advantageous when working on High Performance Computing (HPC) clusters, as this mode of operation avoids copying large data files to your workstation, reduces the amount of graphics traffic sent over the network, and allows the heavy data processing to run in parallel on the cluster's compute nodes.
NOTE: Although it should be possible to avail oneself of GPU-enabled nodes for hardware-accelerated processing of graphical data, this is NOT currently supported on the Deepthought clusters.

Within VisIt, this client/server mode is controlled by "Host Profiles". The following subsection deals with setting up these profiles (and includes some standard profiles for the Deepthought clusters). After that, we discuss using the profiles for visualization tasks.
Before you can do client/server visualization with VisIt, you need to set up Host Profiles. You can download one of the standard profile files for the clusters, or configure a profile manually as described below. These files should go into the appropriate "hosts" directory on your workstation. For Unix-like systems, this is usually ~/.visit/hosts. On Windows systems, I believe it is something like My Documents\VisIt VISIT_VERSION\hosts. After copying the files there, you will need to restart VisIt for them to be detected.
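For instance, on a Unix-like workstation (the downloaded filename here is hypothetical):

```bash
# Create the hosts directory if it does not already exist, then copy the
# downloaded profile file into it; restart VisIt afterwards
mkdir -p ~/.visit/hosts
cp ~/Downloads/host_deepthought2.xml ~/.visit/hosts/
```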
If you use one of these files, you can probably skip the manual configuration described below and proceed to the section on using the profiles. However, that subsection is still useful if you wish to customize the standard profiles.

(The following instructions are based on VisIt 2.10, but things should be similar for later versions.)
1. Open the Options | Host Profiles page from the menu bar.
2. Existing host profiles are shown in the Hosts area to the left; you can select one of them to edit it, or click the "New Host" button to create a new host entry. Either way, the entry will open with fields on the right side. There are two tabs on the right, Host Settings and Launch Profiles. We deal with Host Settings first.
3. Fill in the Host Settings fields:
   - Host Nickname is the name that will be shown to you for the host profile. I suggest something like UMD Deepthought2 Cluster.
   - Remote hostname is the hostname that VisIt will ssh to in order to open the remote VisIt process. Here you should give the appropriate hostname for the cluster, e.g.:
     - login.deepthought2.umd.edu for the Deepthought2 cluster (RHEL8)
     - rhel6.deepthought2.umd.edu for the Deepthought2 cluster (RHEL6)
     - login.juggernaut.umd.edu for the Juggernaut cluster
   - In the Hostname aliases field, you should include the pattern that will match the hostnames of the specific login nodes for the cluster, e.g.:
     - login-*.deepthought2.umd.edu for Deepthought2 (RHEL8 and RHEL6)
     - login-*.juggernaut.umd.edu for Juggernaut
   - Leave Maximum nodes and Maximum processors unchecked.
   - For Path to Visit Installation, enter the value /software/visit for the Juggernaut and Deepthought2 clusters (use /cell_root/software/visit for the Deepthought2 RHEL6 nodes). This will cause it to find custom wrapper scripts for these clusters which will ensure the correct environmental variables are set to run the compute engines, etc. on these clusters.
   - For Username, enter your username on the cluster. Remember that on Bluecrab, your username includes @umd.edu.
   - Check Tunnel data connections through SSH. This is required if your workstation has any sort of firewall on it, which is typically the case.
4. Now switch to the Launch Profiles tab. The previous tab gave basic information about connecting to the cluster; we now provide information about how to run on the cluster. You can select an existing launch profile and edit it below, or use the "New Profile" button to create a new profile. We are going to define three profiles:
   - serial: this runs VisIt in one process on the login node.
   - parallel (debug partition): this will run VisIt in a job submitted to the debug partition, i.e. a short job, but run at somewhat higher priority for better interactive use.
   - parallel: this will run VisIt in a more generic job. You can specify the number of cores/nodes/etc.
5. The serial launch profile is easiest. Just click the "New Profile" button and enter its name, e.g. serial. That's it.
6. For the parallel profiles, also go to the Parallel tab, and:
   - Check the Parallel launch method checkbox and select sbatch/mpirun in the drop down (probably the last entry).
   - For the parallel (debug partition) profile, also check the Partition/Pool/Queue checkbox and enter debug in the text box. For the generic parallel profile, you are probably best just leaving this unchecked/blank.
   - Set the Number of processors value to the desired default value. You will be able to adjust this each time you use the profile, but this will be the default value. I recommend a value of 20 for Deepthought2 and 28 for Juggernaut, as this is what is typically available on a single node.
   - For the other fields (Number of nodes, Bank/Account, Time limit, and Machine file), if you check the checkbox you can set a default which can be modified each time you use the profile. If left unchecked, you will not be able to modify the value when using the profile, and it will default to whatever sbatch decides. I would recommend checking the boxes for Number of nodes, Bank/Account, and Time limit, but typically Machine file can be left unchecked.
7. There is an Advanced subtab just under the Launch parallel engine checkbox. You normally do not need to set anything here, but for some more complicated cases it might be needed:
   - The Use VisIt script to set up parallel environment checkbox should be checked (this should be the default).
   - Launcher arguments: here you can provide additional arguments to be passed to the sbatch command. E.g., to request 9 GB of RAM per CPU core (instead of the default 6 GB) you could add something like --mem-per-cpu=9000.
   - Sublauncher arguments: here you can provide additional arguments to be passed to the mpirun command.
   - Sublauncher pre-mpi command: here you can provide a command to be run before the mpirun command in the batch script.
   - Sublauncher post-mpi command: here you can provide a command to be run after the mpirun command in the batch script.
8. When you are done, click the Apply button to make the changes effective. If you edited anything (i.e. created new profiles or changed a profile), you should select the new/modified host profiles and Export host them to ensure they are saved and available in your next VisIt session (a quick check that the export worked is shown after these steps).
9. Click the Dismiss button to close the Host Profiles window.
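Two quick sanity checks from a terminal on your workstation (assuming a Unix-like system; the hostname and username are illustrative):

```bash
# 1) Confirm the ssh connection VisIt will use actually works
#    (use the Remote hostname and Username from your Host Settings)
ssh USERNAME@login.deepthought2.umd.edu true && echo "ssh OK"

# 2) Exported host profiles are saved as XML files in the hosts
#    directory discussed above (typically named host_*.xml)
ls ~/.visit/hosts/
```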
In this section we will briefly describe how to use the profiles. I am assuming you have a Host Profile for one of the Deepthought2 or Juggernaut clusters, with the three launch profiles described above, and that you have access to the HPC cluster the profile is for.

NOTE: I believe the version of VisIt that you are running on your workstation must match what is available on the cluster, at least down to the minor (second/middle) version number. If you do not have that version on your workstation, you can try running VisIt on the login node with the display going back to your workstation, but with the heavy work still being done on the compute nodes using the client/server model discussed here.
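If you are unsure what versions you have, you can check both sides. A sketch, assuming the visit launcher accepts a -version flag (if yours does not, the version is also shown in the GUI's Help | About dialog):

```bash
# On the cluster login node (module name per the table above):
module load visit
visit -version    # assumption: your visit launcher supports -version

# On your workstation, if visit is on your PATH:
visit -version
```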
In general, using VisIt in client/server mode starts with opening the data file. Just start to open a file as usual, but in the Host dropdown at the top of the file open dialog there should be an option for each of the host profiles you have defined. Select the appropriate host profile. It will likely prompt you for a password (make sure the username given, if any, is correct, and correct it if not; if no username is given, VisIt assumes your username on the workstation is the same as on the remote cluster). Within a few seconds you should see a file list corresponding to your home directory on the cluster. You can then select a file as usual.
If multiple launch profiles exist for that host, you will be given the option of choosing which launch profile you wish to use, and what options you wish to use with it if it supports any. If there is only a single launch profile, you obviously cannot choose a different one, but a pop-up will still appear if there are any options for that launch profile. Otherwise, VisIt will just launch the profile with the defaults.
If you just wish to use VisIt on a file that resides on the HPC cluster (without copying the file to your local workstation) but do not need (or cannot use) the parallel capabilities of VisIt, the serial option is the easiest, and does not take additional options. Just select it and hit OK. It may take a couple of seconds to start the remote engine, but it should then return and you can visualize your data as if it were local.
To use one of the parallel profiles, just select it after selecting the file. The parallel (debug partition) profile is good for a short interactive visualization, but is limited in the number of processes/nodes and to 15 minutes. However, since it uses the debug partition, it generally will spend less time waiting in the queue. The generic parallel profile is less restrictive, but depends on jobs submitted via sbatch and can have significant wait times before the job starts running.
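If you are curious how long the wait might be, you can get a rough sense of how busy a partition is from a cluster login node (standard Slurm commands; partition name per your cluster):

```bash
# Rough sense of how busy the debug partition is
sinfo -p debug     # node states (idle/mix/alloc) in the partition
squeue -p debug    # jobs currently queued or running there
```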
When you select the profile, you typically will have the opportunity to change the defaults for wall time, number of nodes, and the allocation account to which the job is to be charged. NOTE: VisIt seems to assume 8 processes per node by default, so e.g. if you request 20 processes on Deepthought2, it will try to spread them over 3 nodes. I strongly advise manually setting the number of nodes appropriately. Note also that the memory of the node is split evenly over all the VisIt processes on the node, so you might need to adjust the node count to use more than the minimal number of nodes in cases where memory requirements are higher.

When you finish updating options and hit "OK", your VisIt GUI will ssh to the login node for the cluster and submit a batch job requesting the desired number of nodes/cores. Typically you will see a pop-up showing that VisIt is awaiting a connection from the compute engines --- this will not occur until after the batch job starts. For batch jobs submitted to the debug partition, this should typically be within a minute or two, but it is likely to be significantly longer for the general parallel profile.

When the job starts, after 20 seconds or so the connection should be made and the pop-up will go away. At this point you can use VisIt as normal.

At some point, the scheduler may terminate your compute engines (e.g. due to exceeding the walltime). You should still be able to continue using the GUI, and when you try to do something that requires the compute engine, a pop-up will appear allowing you to start a new launch profile.
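Whether you are waiting for the engines to connect or wondering whether they are still alive, you can check the engine's batch job from a terminal on a login node:

```bash
# List your jobs; the compute-engine job shows state PD while pending
# in the queue and R once it is running
squeue -u $USER
```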
Here we provide a quick example of using VisIt with parallel processing in client/server mode. This example assumes that you have already downloaded the appropriate standard profile above and placed it in the proper VisIt configuration directory. We are going to assume you are going to run VisIt from the cluster login node, and so have put the host profile in your ~/.visit/hosts directory on the cluster.
1. On the cluster login node, load the visit module with module load visit/VERSION.
2. Start VisIt with the visit command.
3. From the File tab, select Open File.
4. Next to Host, click on the drop down to the right. You should see an option for UMD CLUSTERNAME Cluster matching the profile you downloaded. If not, you did not download the profile or did not put the profile file in the correct location: exit VisIt, fix the issue with the profile, and restart VisIt. If you see it, select it.
5. In Path, replace the existing text with /software/visit for the Deepthought2 or Juggernaut cluster (for the RHEL6 nodes on Deepthought2, use /cell_root/software/visit). After the file browser window updates, descend down the directory named after the VisIt version and related subdirs (e.g. 2.10.2/osmesa/sys/data on Deepthought2 or 2.13.2/osmesa/linux-rhel7-x86_64/data on Juggernaut); since we are just looking for an example file, the version/build does not need to exactly match what you are running.
6. Select the multi_ucd3d.silo file. You can try other files if you want, but only the multi_* files are multidomain datasets --- if you open a file not starting with multi_*, it will be a single domain dataset and you will not be able to effectively use the parallelism in VisIt.
7. Select the launch profile parallel (debug partition) or parallel (generally parallel (debug partition) will give you quicker turnaround for this simple example). Set the time limit to something reasonable (like 15 minutes), and select the number of cores and nodes you want. The multi_ucd3d.silo file I believe has 36 data domains, so it is probably best to choose a number of cores which divides 36 evenly. Be sure to select an appropriate number of nodes (there are 20 cores/node on Deepthought2, 28-40 on Juggernaut). More than 36 cores is wasteful.
8. The compute engines will run in a batch job; the job should be named visit.USERNAME, and the job number should be in the terminal from which you launched visit. (See the note after these steps for checking on and cleaning up this job.)
9. To see which processor handled which part of the data, define a procid scalar:
   - From the Controls menu, select Expressions.
   - Click New, and enter procid in the Name field.
   - Under Insert function, go to the Miscellaneous submenu item, and click on procid.
   - Under Insert variable, go to the Meshes submenu item, and click on mesh1.
   - The definition should now read procid(mesh1). If not, fix it so it does. When done, click Apply and Dismiss.
10. In the Plots section, click on the Add button, and in the Pseudocolor submenu select procid. If no procid appears, you did not define the procid scalar properly above.
11. In the Plots section, click Draw.
12. You should see a multicolored image (for the multi_ucd3d.silo file; other multi_* files will be different shapes but still multicolored; single domain files will be a solid color). The different colors represent the processor id of the processor responsible for rendering that section of the data.
responsible for rendering that section of the data.