singularity: Singularity container utilities

Contents

  1. Overview of package
    1. General usage
  2. Availability of package by cluster
  3. What is Singularity and why should I care?
  4. Using containerized applications
    1. Getting information about an existing container
    2. Working with third-party containers
      1. Base home directory does not exist error
      2. Kernel too old error
    3. Running Singularity containers from a script
  5. Building your own container
    1. For Singularity version < 2.4
    2. For Singularity version 2.4 and higher
  6. Useful links and more information about Singularity

Overview of package

General information about package
Package: singularity
Description: Singularity container utilities
For more information: https://sylabs.io/singularity
Categories:
License: OpenSource (BSD 3-clause)

General usage information

Singularity is a container technology focused on building portable encapsulated environments to support "Mobility of Compute".

This module will add the singularity command to your path. You can use that to run container images that you built or obtained elsewhere. Building containers generally requires root access, which we do NOT give out, so you may need to install singularity on your desktop and create images there.
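
As a quick illustration, once the module is loaded you can run a command inside an existing image with singularity exec. The image path below is only a placeholder for an image you have built or downloaded yourself:

    module load singularity
    singularity exec /path/to/myimage.img cat /etc/os-release   # shows the OS the container presents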

Available versions of the package singularity, by cluster

This section lists the available versions of the package singularity on the different clusters.

Available versions of singularity on the Zaratab cluster

Version   Module tags         CPU(s) optimized for   GPU ready?
3.9.8     singularity/3.9.8   x86_64                 Y

What is Singularity and why should I care?

Singularity is a "containerization" system. Basically, it allows an application to run within a "container" which holds all of its software dependencies. This allows the application to come with its own versions of various system libraries, which can be older or newer than those provided by the operating system. The Deepthought2 cluster, for example, is currently running a release of RedHat Enterprise Linux 6 (RHEL6). Some applications want libraries that are not available on that version of RedHat, and would prefer some version of Ubuntu or Debian instead. While one can sometimes get around these constraints by compiling from source, it does not always work.

Furthermore, applications are sometimes very picky about the exact versions of the libraries or other applications that they depend on, and will not work (or, perhaps even worse, give erroneous results) if used with even slightly different versions. Containers allow each application to "ship" with the exact versions of everything it wants. They can even make the RHEL6 system running on the Deepthought clusters look like Ubuntu 16.04 or some other variant of Linux to the application.

There are limitations, of course. Containers of any type still share the same OS kernel as the host system, including all the drivers, so the container cannot change that. Fortunately, most end-user applications are not very fussy about the kernel version. The "containment" of containers can also be problematic at times: containers by design create an isolated environment just for a particular application, containing all of its dependencies. If you need to use libraries from a containerized package "foo" in another application "bar", you basically need a new container for "bar" which has both "foo" and "bar" installed.

Using containerized applications

We currently have only a few packages distributed as containers, but that number is likely to increase over the coming months, especially in fields of research whose software tends to be more difficult to install. So, depending on your field of study, you might find yourself dealing with containerized applications soon.

The good news is that you hopefully won't notice much of a difference. The container should still be able to access your home directory and Lustre directory, and we provide wrapper scripts that launch the program within the container for you and behave very much like a native install of the application. So, with luck, you won't even notice that you are using a containerized application.

The biggest issues arise if you need a containerized application to interact with another application on the system (containerized or not, unless it happens to exist in the same container image as the first application). In general, this will not work. In such cases, it is best to break the process into multiple steps such that at most one containerized application is needed in each step, if that is possible. Otherwise, contact us and we will work with you to try to get around this.
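
For example, if a workflow needs containerized tools "foo" and "bar" in sequence, one way to structure it is as two separate steps that communicate through files, so each step only needs a single container. The image paths, program names, and options below are purely illustrative:

    # Step 1: run "foo" from its container, writing intermediate results to a file
    singularity exec /containers/foo.img foo --input data.in --output step1.out

    # Step 2: run "bar" from its own container on the output of step 1
    singularity exec /containers/bar.img bar --input step1.out --output final.out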

Getting information about an existing container

Sometimes one has a Singularity image for a container and would like to know more about how the container was actually built. E.g., you have a container that provides the "foo" application in python, but want to know if the "bar" python module is available in it. Although testing directly is always a possibility, it is not always the most convenient approach.
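
For instance, a quick (if somewhat tedious) direct test is simply to try importing the module inside the container; the image path and module name here are just the placeholders from the example above:

    singularity exec /my/dir/foo.img python -c "import bar"   # fails with an ImportError if "bar" is not installed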

As of version 2.3.x of Singularity, there is an inspect subcommand which will display various metadata about the image. If your container image is in the file /my/dir/foo.img, you can use the command singularity inspect -d /my/dir/foo.img to get the recipe (definition file) that was used to generate the image.

The command singularity inspect -e /my/dir/foo.img will show the environment that will be used when you run a program in the container. And the commands singularity inspect -l /my/dir/foo.img or simply singularity inspect /my/dir/foo.img will list any labels defined for the container. The labels are set by the creator of the container to document it, and while they are a good place to find information about a well-documented container, not all containers are as well documented as they should be.
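
As a quick reference, the inspect invocations discussed above (using the same placeholder image path) are:

    singularity inspect -d /my/dir/foo.img   # show the recipe (definition file) used to build the image
    singularity inspect -e /my/dir/foo.img   # show the environment used when running programs in the container
    singularity inspect -l /my/dir/foo.img   # show the labels defined by the container's creator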

Containers built for Singularity version 2.4 or higher can house multiple applications in the same container using the Standard Container Integration Format (SCI-F). These various "applications" are accessed with the --app or -a flag to the standard singularity subcommands, followed by the application name. To get a list of all defined application names for a container, use the singularity apps /my/dir/foo.img command. Different applications within the container can have different labels, environments, etc., so in the above examples you would want to look at the environment/labels/etc. both for the container as a whole AND for the specific application.
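
For example, assuming the container defines an application named "bar" (a placeholder name), you might list and examine it along the following lines:

    singularity apps /my/dir/foo.img                 # list the SCI-F applications defined in the image
    singularity inspect --app bar /my/dir/foo.img    # show labels for the "bar" application only
    singularity run --app bar /my/dir/foo.img        # run the "bar" application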

The above discussion applies to all Singularity containers, regardless of their origin. If you are interested in getting more information about a Singularity container installed by Division of IT staff on one of the Deepthought clusters, the following tips are provided.