The Deepthought2 cluster


The University of Maryland has a number of high performance computing resources available for use by campus researchers requiring compute cycles for parallel codes and applications. These are:

  • Deepthought2: Our flagship cluster, intended for large, parallel jobs, housed just off campus and maintained by the Division of Information Technology. It consists of over 480 nodes with dual socket (20 cores per node) Ivy Bridge 2.8 GHz processors, forty of which have dual Nvidia K20m GPUs. Most nodes have 128 GB of RAM, with a few having 1 TB of RAM. All nodes have FDR infiniband (56 Gb/s) interconnects, and there is 1 PB of fast lustre storage.
  • MARCC/Bluecrab: The University of Maryland is allocated 15% of the MARCC/Bluecrab cluster, housed at the Maryland Advanced Research Computing Center and jointly managed by Johns Hopkins University and the University of Maryland. It is suitable for large parallel jobs. There are about 650 compute nodes with dual Haswell CPUs, plus another 50 or so with dual Broadwell CPUs; both types have 128 GB of RAM per node. In addition, there are about 70 nodes with dual Nvidia Tesla K80 GPUs, and 50 nodes with 1 TB of RAM. Although the cluster is housed off campus, the Division of IT is working on establishing a high-bandwidth network connection to it.
  • Juggernaut: A small but growing cluster that provides compute resources to users who could not be added to the Deepthought2 cluster because of its data center's constraints, and serves as a testing ground for the next cluster at UMD.

The following cluster has been retired, but information about it is still provided on the website for historical reasons:

  • Deepthought: RETIRED. The original of the Deepthought clusters, which was intended for smaller, less parallel or serial jobs and for parallel code development. It was housed on campus and maintained by the Division of Information Technology. Because new hardware was added over the years, it was very heterogeneous, which made it less suitable for large parallel jobs. CPUs ranged from 2.0 to 2.9 GHz, spanning Woodcrest to Sandy Bridge architectures, with 4-16 cores/node and 1-4 GB of RAM per core. All nodes had Gigabit Ethernet interconnects, and some had DDR, QDR, or FDR infiniband. About 100 TB of lustre storage was available.

Comparison of Clusters

The following table compares these HPC resources:

Cluster | Number of compute nodes | Cores/node | Processor | Memory/node | Nodes with GPUs | Interconnect | Disk space | Licensed software
Deepthought2 | 488 | 20 (1 TB nodes have 40) | 2.8 GHz (1 TB nodes are 2.2 GHz) | 128 GB (4 nodes have 1 TB) | 40 (dual K20m) | FDR Infiniband | 1 PB lustre | Intel compiler suite; Matlab Distributed Compute Server
MARCC/Bluecrab | 846 | most 24, some 28 (1 TB nodes have 48) | 2.5-3.0 GHz | 128 GB (50 nodes have 1 TB) | 72 (dual K80) | FDR Infiniband | 2 PB lustre | Intel compiler suite
Deepthought (retired) | 376 | varies | 2.0-2.9 GHz | 4-64 GB | none | varies (Gigabit Ethernet; some DDR, QDR, or FDR infiniband) | 100 TB lustre | Intel compiler suite

Click on a cluster name above for more detailed information about that cluster.
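
As a rough illustration of the scale of these systems, the per-node figures in the table can be multiplied out into aggregate core counts. The sketch below uses only the node and core counts listed above, and for simplicity assumes every node has the typical core count (the handful of big-memory nodes with more cores are ignored):

```python
# Rough aggregate core counts, computed from the table above.
# Assumption: every node has the "typical" cores/node value;
# the few large-memory nodes with extra cores are not counted.
clusters = {
    "Deepthought2": {"nodes": 488, "cores_per_node": 20},
    "MARCC/Bluecrab": {"nodes": 846, "cores_per_node": 24},  # "most 24"
}

for name, spec in clusters.items():
    total_cores = spec["nodes"] * spec["cores_per_node"]
    print(f"{name}: ~{total_cores} cores")
    # Deepthought2: ~9760 cores
    # MARCC/Bluecrab: ~20304 cores
```

These totals are approximate lower bounds, since the 1 TB nodes on both clusters carry more cores per node than the typical value used here.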