Tensorflow

Summary and Version Information

Package: Tensorflow
Description: TensorFlow library for Machine Intelligence
Categories: Computer Science, Research
Version  Module tag                          Availability*                GPU Ready  Notes
1.0.0    tensorflow/1.0.0/python/2/nocuda    Deepthought2 HPCC (RedHat6)  N          No GPU support
1.3.0    tensorflow/1.3.0/python/2/nocuda    Deepthought2 HPCC (RedHat6)  N          No GPU support
1.4.1    tensorflow/1.4.1/cuda               Deepthought2 HPCC (RedHat6)  Y          Supports GPUs
1.4.1    tensorflow/1.4.1/nocuda             Deepthought2 HPCC (RedHat6)  N          No GPU support
1.11.0   tensorflow/1.11.0/cuda              Deepthought2 HPCC (RedHat6)  Y          Supports GPUs
1.11.0   tensorflow/1.11.0/nocuda            Deepthought2 HPCC (RedHat6)  N          No GPU support
1.14.0   tensorflow/1.14.0/cuda              Deepthought2 HPCC (RedHat6)  Y          Supports GPUs
1.14.0   tensorflow/1.14.0/nocuda            Deepthought2 HPCC (RedHat6)  N          No GPU support

Notes:
*: A package labelled as "available" on an HPC cluster can be used on the compute nodes of that cluster. Even software not listed as available on an HPC cluster is generally available on the login nodes of that cluster (assuming it is available for the appropriate OS version, e.g. RedHat Linux 6 for the two Deepthought clusters). This is because the compute nodes do not use AFS and instead have local copies of the AFS software tree, onto which we only install packages as requested. Contact us if you need a version listed as not available on one of the clusters.

In general, you need to prepare your Unix environment to be able to use this software. To do this, either:

  • tap TAPFOO
OR
  • module load MODFOO

where TAPFOO and MODFOO are one of the tags in the tap and module columns above, respectively. The tap command will print a short usage text (use -q to suppress this; this is needed in startup dot files); you can get a similar text with module help MODFOO. For more information, see the documentation on the tap and module commands.
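
For example, to set up the GPU-enabled TensorFlow 1.14.0 build from the table above for your current shell, you could run something like the following (take the exact tag from the Module tag column):

  module load tensorflow/1.14.0/cuda   # set up the GPU-enabled build for this shell
  module list                          # verify which modules are now loaded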

For packages which are libraries that other codes get built against, see the section on compiling codes for more help.

Tap/module commands listed with a version of current will set up what we consider the most current stable and tested version of the package installed on the system. The exact version is subject to change with little if any notice, and might be platform dependent. Versions labelled new represent a newer version of the package which is still being tested by users; if stability is not a primary concern, you are encouraged to use it. Those with versions listed as old set up an older version of the package; you should only use these if the newer versions are causing issues. Old versions may be dropped after a while. Again, the exact versions are subject to change with little if any notice.

In general, you can abbreviate the module tags. If no version is given, the default current version is used. For packages with compiler/MPI/etc. dependencies, if a compiler module or MPI library was previously loaded, the module command will try to load the build of the package matching that compiler/MPI combination. If you instead specify the compiler/MPI dependency in the tag, it will attempt to load the corresponding compiler/MPI library for you if needed.
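
As a sketch of the abbreviation rules, using the tags from the table above (the default version actually selected may differ on your system):

  module load tensorflow              # loads the default (current) version
  module load tensorflow/1.11.0       # abbreviated tag; selects a default 1.11.0 build
  module load tensorflow/1.14.0/cuda  # fully specified GPU-enabled build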

Singularity/usage information

The keras, tensorflow, and theano packages are not natively installed on the Deepthought2 cluster for various technical reasons. Instead, Singularity containers are provided which have both python2 and python3 installed along with these packages.

To use these python packages, you must load the appropriate environmental module (e.g. module load tensorflow) and then launch the python interpreter inside the Singularity container. To help with this, the following helper/wrapper scripts have been provided:

  1. tensorflow or tensorflow-python2 will invoke a Tensorflow-enabled python2 interpreter within the container. Any arguments given will be passed to the python command, so you can do something like tensorflow myscript.py
  2. tensorflow-python3 will behave as above, but invoke a Tensorflow-enabled python3 interpreter within the container.
  3. tensorboard or tensorboard-python2 will run the tensorboard command (using python2) inside the container. Any arguments provided will be passed to the tensorboard command.
  4. saved_model_cli or saved_model_cli-python2 will run the saved_model_cli command (using python2) inside the container. Any arguments provided will be passed to the saved_model_cli command.
  5. tensorboard-python3 and saved_model_cli-python3 will behave similarly to the tensorboard and saved_model_cli commands above, but will use the python3 variants of the scripts.

In all cases, any arguments given to the wrapper scripts are passed directly to the python interpreter running within the container. E.g., you can provide the name of a python script, and that script will run in the python interpreter inside your container. Your home and lustre directories are accessible from within the container, so you can read and write files in those directories as usual.
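
A typical session might look something like the following sketch; the script name, its arguments, and the log directory are hypothetical, and only illustrate how arguments are passed through to the tools inside the container:

  module load tensorflow                      # set up the containerized TensorFlow build
  tensorflow train_model.py --batch-size 32   # run a (hypothetical) script with python2 in the container
  tensorflow-python3 train_model.py           # run the same script with python3 in the container
  tensorboard --logdir ./logs                 # run tensorboard in the container on a (hypothetical) log directory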

Note that if you load the keras environmental module and then issue the python command, you will start up a natively installed python interpreter which does NOT have the tensorflow, etc. python modules installed. You need to start one of the python interpreters inside the container to get these modules --- you can either do that using the correct singularity command, or use the friendlier wrapper scripts described above.
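
As a rough illustration of the difference (the exact error message from the native interpreter will vary):

  module load tensorflow
  python -c "import tensorflow"       # native interpreter: fails, tensorflow is not installed natively
  tensorflow -c "import tensorflow"   # wrapper script: succeeds, runs python inside the container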

For most users, the "containerization" of these packages should not cause any real issues, and may not even be noticed. However, there are some limitations to the use of containers:

  1. In general, you will not have access to natively installed software, just the software included in the container. So even if some package foo is installed natively on Deepthought2, it is likely not accessible from within the container (unless it was also installed inside that container).
  2. You most likely will not be able to use python virtualenv to install new python modules for use within the container, as such installs happen natively and the resulting packages would not be available inside the container.

However, you are permitted to create your own Singularity containers and use them on the Deepthought2 cluster. You will need root access on some system (e.g. a workstation or desktop) to create your own Singularity containers (we cannot provide you root access on the Deepthought2 login or compute nodes), but if you have such access you can build your own containers. You can also copy the system-provided containers and edit them. More details can be found on the software page for Singularity.
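
A rough sketch of that workflow is below; the definition file and image names are hypothetical, and the exact commands and image format (e.g. .sif vs .simg) depend on the Singularity version installed on your workstation and on the cluster:

  # On a workstation where you have root access:
  sudo singularity build mytensorflow.sif mytensorflow.def   # build an image from a (hypothetical) definition file

  # After copying the image to the cluster:
  singularity exec mytensorflow.sif python3 myscript.py      # run a (hypothetical) script inside your container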