Package: | matlab |
---|---|
Description: | Matlab numerical computing environment |
For more information: | https://www.mathworks.com/products/matlab.html |
Categories: | |
License: | Proprietary |
MATLAB (Matrix Laboratory) is a multi-paradigm numerical computing environment and proprietary programming language developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages.
This module will add the matlab command to your PATH.
MATLAB is proprietarily licensed software. It is made available to UMD users through a Total Academic Headcount license provided by the Division of IT and funding from various colleges.
This section lists the available versions of the package matlab on the different clusters.
Version | Module tags | CPU(s) optimized for | GPU ready? |
---|---|---|---|
2020b | matlab/2020b | x86_64 | Y |
2021b | matlab/2021b | x86_64 | Y |
2022b | matlab/2022b | x86_64 | Y |
2024a | matlab/2024a | x86_64 | Y |
While most people use MATLAB interactively, there are times when you might wish to run a MATLAB script from the command line or from within a shell script. Usually in this situation you have a file containing MATLAB commands, one command per line, and you want to start MATLAB, run the commands in that file, and save the output to another file, all without the MATLAB GUI starting up (often the process will be running somewhere without a screen readily available to display the GUI).
This can be broken down into three distinct parts: suppressing the GUI, telling MATLAB what commands to run, and capturing the output.
The first part is handled with the following options passed to the matlab command: `-nodisplay` and `-nosplash`. The first disables the GUI, the latter disables the MATLAB splash screen that gets displayed before the GUI starts up. You might or might not wish to add a `-nojvm` flag as well --- this will prevent the start up of the Java Virtual Machine. It will speed up the start up of MATLAB, but will prevent the use of Java commands.
The second part is handled using the `-r` option, which specifies a command which MATLAB should run when it starts up. You can give it any valid MATLAB command, but typically you just want to tell it to read commands from your file, and then to exit; otherwise it will just sit at the prompt waiting for additional commands. One reason to keep it simple like that is that the command string has to be quoted to keep the Unix shell from interpreting it, and that can get tricky for complicated commands.
Typically, you would give an argument like `matlab -r "run('./myscript.m'); exit"` (and you would include the `-nodisplay` and `-nosplash` arguments before the `-r` if you wanted to disable the GUI as well), where `myscript.m` is your script file, located in the current working directory. The `exit` causes MATLAB to exit once the script completes.
The third part is handled with standard Unix file redirection.
Putting it all together, if you had a script `myscript.m` in the directory `~/my-matlab-stuff`, and you want to run it from a shell script putting the output in `myscript.out` in the same directory, you could do something like:
```csh
#!/bin/tcsh
module load matlab
cd ~/my-matlab-stuff
matlab -nodisplay -nosplash -r "run('./myscript.m'); exit" > ./myscript.out
```
MathWorks currently provides two products to help with parallelization: the Parallel Computing Toolbox (which provides the `parfor` command, as well as some CUDA support for using GPUs), and the MATLAB Parallel Server. However, without the MATLAB Parallel Server (formerly named Distributed Compute Server), there are limits on the number of workers that can be created, and all workers must be on the same node.
In addition, a number of the MATLAB built-in functions, especially linear algebra and numerical functions, are multithreaded and will automatically parallelize in that way. This parallelization is shared memory, via threads, and so is restricted to a single compute node. So normally your job submission scripts should explicitly specify that you want all your cores on a single node.
For example, if your MATLAB code is in the file `myjob.m`, you might use a job submission script like:
```bash
#!/bin/bash
#SBATCH -t 2:00
#SBATCH -c 12
#SBATCH --mem-per-cpu=1024
. ~/.profile
module load matlab
matlab -nodisplay -nosplash -nojvm -r "run('myjob.m'); exit" > myjob.out
```
and your MATLAB script should contain the line `maxNumCompThreads(12);` somewhere near the beginning. This restricts MATLAB to the requested number of cores --- if it is omitted, MATLAB will try to use all cores on the node.
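If you prefer not to hard-code the core count, it can be derived from the Slurm environment instead. The following is a minimal sketch, assuming the job was submitted with `-c`/`--cpus-per-task` so that Slurm sets the `SLURM_CPUS_PER_TASK` environment variable:

```matlab
% Set the thread limit from the Slurm allocation rather than hard-coding it.
ncpus = str2double(getenv('SLURM_CPUS_PER_TASK'));
if isnan(ncpus)
    ncpus = 1;  % fall back to a single thread when not running under Slurm
end
maxNumCompThreads(ncpus);
```

This keeps the script in sync with the `#SBATCH -c` value if you later change it.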
The MATLAB Parallel Computing Toolbox allows you to parallelize your MATLAB jobs, to take advantage of multiple CPUs on either your desktop or on an HPC cluster. This toolbox provides parallel-optimized built-in MATLAB functions, including the `parfor` parallel loop command.
A simple example MATLAB script would be:

```matlab
% Allocate a pool
% We use the default pool, which will consist of all cores on your current
% node (up to 12 for MATLAB versions before R2014a)
parpool
% For MATLAB versions before R2013b, use "matlabpool open"

% Pre-allocate a vector
A = zeros(1,100000);
xfactor = 1/100;
% Assign values in a parallel for loop
parfor i = 1:length(A)
    A(i) = xfactor*i*sin(xfactor*i);
end
```
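As an aside, if you would rather size the pool explicitly to match your Slurm request instead of relying on the default pool, you could use something like the following sketch (again assuming `SLURM_CPUS_PER_TASK` is set because the job was submitted with `-c`):

```matlab
% Open a pool with one worker per allocated CPU core; release it when done.
ncores = str2double(getenv('SLURM_CPUS_PER_TASK'));
pool = parpool(ncores);
% ... parfor loops here will run on this pool ...
delete(pool);   % shut the workers down cleanly
```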
Assuming the parfor example script above is in a file `ptest1.m` in the directory `/lustre/payerle/matlab-tests`, we can submit it with the following script to `sbatch`:
```csh
#!/bin/tcsh
#SBATCH -c 20
module load matlab
matlab -nodisplay -nosplash \
    -r "run('/lustre/payerle/matlab-tests/ptest1.m'); exit" \
    > /lustre/payerle/matlab-tests/ptest1.out
```
You would probably want to add directives to specify other job submission parameters, such as the wall time limit and memory requirements.
NOTE: It is important that you specify a single node in all of the above, as without using MATLAB Parallel Server/Distributed Computing Server the parallelization above is restricted to a single node. This can be done by using the `#SBATCH -N 1` flag (note the capital N). Alternatively, this could be done with the `#SBATCH -c` flag, as shown above, which requests the specified number of CPU cores on the same node per task (and if no `#SBATCH -n` or `#SBATCH --ntasks=` flag is given, the number of tasks defaults to 1). So in the above example, the number of tasks defaulted to 1, and we effectively requested 20 cores on a single node.
The MATLAB Parallel Server (MPS) (known as the Distributed Computing Server (MDCS) before 2019) extends the functionality of the Parallel Computing Toolbox discussed above to support distributed memory parallelism, which means that your MATLAB calculation will be able to run across multiple nodes, allowing it to use more resources than might be available on a single node. It also provides tools if you wish to run MATLAB on your desktop and submit jobs from that MATLAB session to a UMD HPC cluster without the need to directly interact with the Unix environment on the clusters.
In order to use MPS with an HPC cluster, you need to have MATLAB submit jobs to the HPC cluster. This can be done from:
* MATLAB running on your local workstation (the local workstation scenario),
* an interactive MATLAB session running on a node of the cluster, e.g. via the OnDemand portal (the interactive node scenario), or
* a short MATLAB job submitted via sbatch which in turn submits the real job (the batch-like scenario).
Submitting MATLAB jobs from your local workstation or from an interactive session in the OnDemand portal allows users who are not comfortable with the Unix command line to submit jobs to the cluster using MATLAB; this does not eliminate all of the complexity, but at least allows for it to be dealt with in a presumably more familiar environment.
The "batch" mode paradigm of submitting a short MATLAB job to submit the longer job is actually somewhat convoluted, but that is currently required because currently to use MPS the job must be submitted from within MATLAB. Fortunately, in practice it should not be too onerous.
Although there are some differences in all of these scenarios, the general procedure is roughly the same, in outlined form:
1. Download and install the MPS Slurm integration scripts (only needed in the local workstation scenario).
2. Create one or more Cluster Profiles with the configCluster command.
3. Instantiate a Cluster with the `parcluster` command, referencing a Cluster Profile defined above.
4. Use the `batch` method of the cluster to submit a job. The job instance returned has methods which you can use to query the job status, wait for the job to finish, and examine the results or error logs after the job finishes.
We now proceed to discuss each of the steps above in more detail.
In order for MATLAB to send jobs to one of the UMD HPC clusters, a number of scripts need to be installed on the system MATLAB is running on so that it knows how to talk to the cluster.
If the MATLAB process which is sending jobs to the UMD HPC cluster is actually running on one of the nodes in the cluster (i.e. in the interactive node and batch-like scenarios described above), we have already done this, and you can just proceed to the next step, creating a Cluster Profile.
If you plan to run MATLAB on your local workstation and have it submit jobs to the UMD HPC cluster, you will need to install the scripts on your workstation as we cannot do so in this case. However, we have tried to make the process as simple as possible.
The various scripts needed are contained in the following zip file (click on the filename to download it):
Filename: (download link) | matlab-mps-slurm-files-2023-06-21.zip |
---|---|
Version: | 2023-06-21 |
SHA256 checksum: | e8564ae3d3899ba24ef664ed9aa285e81dc94e908602bbd65c38cc740627f986 |
Filename: (download link) | matlab-mps-slurm-files-2023-02-09.zip |
Version: | 2023-02-09 (DEPRECATED) |
SHA256 checksum: | c214f5413656da82cc8b98d8211081d0f6a1b3e1d22c11bb87db473e0662f897 |
Download this file to your local workstation, and verify the checksum if desired. The file is provided as a zipfile as that should be readily usable on Windows, Mac, and Linux systems. This zipfile will need to be extracted on the system you intend to run MATLAB on, with MATLAB submitting jobs to the HPC cluster. Traditionally, it should be extracted under a directory named for the version of MATLAB you are running (with an R prefix, e.g. R2022a), under a directory named "SupportPackages" in your MATLAB userpath directory. On Windows, this would typically be `%USERPROFILE%\Documents\MATLAB\SupportPackages\R<MATLAB version>`, which normally is `Documents\MATLAB\SupportPackages\R<MATLAB version>` in your home directory. On Macs, this would typically be `$HOME/Documents/MATLAB/SupportPackages/R<MATLAB version>`, and on Linux systems `~/Documents/MATLAB/SupportPackages/R<MATLAB version>`. You could unzip it there (creating any needed parent directories), or any other location on your system. If you expand it to another location, you need to remember that path as you will need to provide it as input to the `configCluster` command in the next step (creating a Cluster Profile).
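As an illustration, you could even do the extraction from within MATLAB itself. A minimal sketch, assuming the zipfile is in your current directory and you are running R2022a (adjust the version directory and filename to match your situation):

```matlab
% Extract the MPS Slurm integration scripts into the traditional location
% under the MATLAB userpath; unzip creates the directories if needed.
dest = fullfile(userpath, 'SupportPackages', 'R2022a');
unzip('matlab-mps-slurm-files-2023-06-21.zip', dest);
```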
Generally, you only need to do this step once per workstation from which you intend to run MATLAB (and submit jobs to the HPC cluster). We will periodically be making updates to the scripts, but unless we tell you that there is a bug that requires downloading a later version, you should be fine with just a single installation. The only exception to this would be if you upgrade MATLAB on your workstation; in such a case it is probably best to download the latest version of these scripts and reinstall them. The default path under which to extract the files will have changed, and one of the UMD scripts contains information about the currently supported versions of MATLAB on the UMD clusters, and that likely was updated since you last installed the scripts.
The next step for having MATLAB be able to submit jobs to a UMD HPC cluster is to define one or more Cluster Profiles. The Cluster Profile defines various properties of the cluster, including information about how to connect to the cluster, where various files are, and information about how to submit jobs to the cluster.
You will need at least one Cluster Profile for each cluster to which you want MATLAB to submit jobs; however, you can have multiple profiles for the same cluster if you wish. When it comes time to actually submit jobs from MATLAB to the HPC cluster, you will need to instantiate a cluster object in MATLAB, and this is done using a Cluster Profile as the template. Although you can set and/or override many if not all of the Cluster Profile settings in the cluster object, it can be convenient to set up multiple Cluster Profiles for the same HPC cluster which are templates for the various types of jobs you expect to run. E.g., you might have a standard profile for your typical job, along with additional profiles for jobs which need longer wall times or need special resources like GPUs or extra memory.
The creation of a Cluster Profile is done by starting up MATLAB and running the MATLAB function `configCluster`.
You can view previously created Cluster Profiles in MATLAB from the Parallel drop-down in the Environment section of the menu bar. The Select Parallel Environment option will show (and allow you to choose one of) the existing parallel environments --- until you define a Cluster Profile your options will be just Processes and Threads. The Create and Manage Clusters... option will open a new window allowing you to view, create, and/or edit Cluster Profiles from the MATLAB GUI.
Cluster Profiles can be created interactively (i.e. from an interactive MATLAB prompt in the GUI or command line) or in a MATLAB script. The process is similar either way, and we recommend the interactive approach for most users. We discuss the interactive approach first.
Start MATLAB on the system on which you intend to run MATLAB and have it submit jobs to the cluster. For the local workstation scenario, this will be your local workstation; for the interactive node or batch-like scenarios, this can be any node on the cluster: we recommend launching an interactive MATLAB session in the OnDemand portal, or using an interactive job launched via the sinteractive command (in the latter case, you should do a module load of the correct MATLAB version and start MATLAB with the options `-nosplash -nodisplay`). NOTE: Although configCluster will work in text mode, it requires MATLAB's Java, so you must not invoke MATLAB with the `-nojvm` flag.
From the MATLAB prompt, type the command `configCluster`. If the command is not found on your local workstation, that suggests that either you did not download and install the MPS Slurm integration scripts and/or that you did not copy the contents of the script directory to your userpath directory. Please issue the command `userpath` in MATLAB, and be sure the configCluster.m and other scripts in the scripts directory of the downloaded zipfile are copied to that directory.
The configCluster command will prompt you for the answers to some questions, as discussed in more detail below. The answers to the questions can also be passed as function arguments in the configCluster command if desired (in which case you will not be prompted for the answer to parameters passed that way); e.g. to set the name of the cluster to zaratan on the command line, you could do something like `configCluster(clusterName='zaratan')`. The questions asked are:
* `Which cluster do you wish to connect to:` This question wants the name of the cluster to which MATLAB should be submitting jobs. If the configCluster script is being run on a node of a UMD cluster supported by DIT, it will default to that cluster. Otherwise it will default to "zaratan". To specify it on the command line, use the parameter `clusterName`.
* The `configCluster` script maintains a list of MATLAB versions known to be installed on UMD clusters. It will compare the version of MATLAB currently being run to the list of versions for the cluster you entered above. If a match is found, the script just continues with the next step. If a match is not found, this can be a problem and the script will print out a warning listing the versions of MATLAB it believes are available on the specified HPC cluster. The MATLAB running on your local workstation must exactly match a version running on the HPC cluster for MPS to work. The parameter `allowUnknownMatlab` can be set to either true or false to bypass being prompted for an answer to that question.
* `Username on cluster:` This question wants your username on the cluster provided in the previous question. It will try to default using your username on the system you are running MATLAB on, which should be correct in the interactive node and batch-like scenarios. It might or might not be correct in the local workstation scenario. The correct answer should normally be the name to the left of the at sign (@) in your @umd.edu or @terpmail.umd.edu email address. Please note that usernames are case-sensitive, and normally should be all lowercase. The parameter `clusterUsername` can be used to set this value from the command line.
* `Please provide the path to the directory containing the MATLAB-Slurm integration scripts; ...` This question is asking where the various MATLAB-Slurm integration scripts are. For the interactive node and batch-like scenarios, this will default to the location where HPC staff have installed the scripts, and you should just accept the default value. For the local workstation scenario, this will default to the traditional location, something like `Documents\MATLAB\SupportPackages\R<MATLAB version>` under your home directory. If that is where you extracted the zipfile downloaded in the previous step (download and install the MPS Slurm integration scripts) you can use the default; otherwise use the path to which you extracted the zipfile. The path given should have a `parallel` subdirectory with a `slurm` subdirectory beneath that. To set this from the command line, use the parameter `pluginScriptDir`. The script will do some checking that the correct files are found underneath the path specified.
* `Where should MATLAB store its job files on the cluster?` MPS needs to store some files on the HPC cluster; this question asks where they should be stored. This should point to a directory that exists on the HPC cluster, and to which you have write permission. (If the last component in the path does not exist but the parent directory does, a new directory with that name will be created for you.) It should be a directory which is accessible from all of the compute nodes of the cluster. The scratch/high-performance file system is a good choice for this, and by default, we choose a directory named MatlabParallelServer in your scratch (lustre on Juggernaut) directory. You can use the parameter `remoteJobStorageLocation` to set this from the command line.
* `What should this profile be named?` This asks you for the name of the Cluster Profile. The configCluster script is unable to overwrite an existing Cluster Profile, so this name must be distinct from any existing Cluster Profiles. The default value is "<Clustername> HPC". You can set it from the command line with the parameter `profileName`. You can also pass the parameter `requireConfirmation` with a value of false on the command line to bypass the confirmation process (the script will create the Cluster Profile without confirmation); this also happens if the configCluster script is not invoked in interactive mode.
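For reference, here is a hedged sketch of a fully non-interactive configCluster invocation combining the parameters described above (the username is a placeholder, the exact parameters accepted may vary with the script version, and the name=value argument syntax requires R2021a or later):

```matlab
% Create a Cluster Profile for Zaratan without any interactive prompts.
configCluster( ...
    clusterName='zaratan', ...
    clusterUsername='myusername', ...
    profileName='Zaratan HPC', ...
    requireConfirmation=false)
```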
When finished, the configCluster command will print out some information about the Cluster Profile created. This is still a rather generic Cluster Profile --- it knows how to connect to the specified cluster for submitting jobs, but it does not have customized values for the various sbatch/job parameters and so will just use defaults. You will generally need to set the various job parameters, either after you instantiate a cluster, or in the Cluster Profile, and you might wish to have multiple Cluster Profiles for the same cluster with common job settings.
The basic process of having MATLAB submit jobs to an HPC cluster is to instantiate a Cluster object in MATLAB from a Cluster Profile, and then invoke methods on that Cluster object to submit jobs. These jobs, through a somewhat complicated series of scripts included in the MPS Slurm Integration scripts, actually are submitted in the end via the sbatch command. And while MATLAB builds the job script and tries to hide much of the complexity of the sbatch command, it cannot hide everything. Although the default values for many of the parameters for the sbatch command are reasonable, you will likely need to set at least some (like the wall time limit) for production jobs.
Most parameters can be set in the Cluster Profile, in which case any Cluster instance created from that profile will inherit those values, and/or they can be set (or overridden) in the actual Cluster instance. We split these parameters into two categories: a group of "top level" parameters and a set of parameters which are fields of the top-level AdditionalProperties parameter. We discuss the various parameters in each category first, and then discuss how to set/modify them.
The following table lists the "top level" parameters. The "Class" column lists what classes of objects the parameter applies to:
Parameter Name | Class | Type | Description |
---|---|---|---|
Profile | C+CP | string | The name of the Cluster Profile object, or the Cluster Profile object from which the Cluster was created. |
Description | CP | string | A description of the Cluster Profile. This field is simply a comment about the Cluster Profile for human consumption; it does not impact MATLAB performance. |
NumWorkers | C+CP | number | This is the maximum number of workers (MPI tasks) from this Cluster (or a Cluster derived from this Cluster Profile) that can be used. This is left at infinite by configCluster, but you should probably put a reasonable upper limit in place. Note: a finite upper limit must be in place before validating the Cluster Profile. Increasing this number will mean that calculations will use (and be charged for) more resources on the HPC cluster, at least if MATLAB thinks more resources would be helpful. You are free to increase this value if you think it will be helpful. Note that increasing this value increases the number of workers MATLAB is allowed to use; MATLAB might still decide to use fewer workers for your calculation if it cannot parallelize the calculation to use more workers. Even if MATLAB uses all of the workers allotted, that may or may not improve performance; there usually is a threshold on the number of workers (the value of which depends on the calculation) beyond which performance improves only very little (or might even degrade). |
NumThreads | C+CP | number | The number of computational threads to use with each worker. configCluster leaves this at 1. Whereas the NumWorkers parameter above controls the maximum number of workers/MPI tasks that will be used for a calculation, this parameter controls the maximum number of threads that will be used by each task. Generally, the number of CPU cores used by the calculation will be the product of the two. MATLAB uses different parallelization techniques to parallelize different calculations; most of the time it uses worker/task based parallelization, but sometimes it prefers to use multithreading. If you have a calculation that tends to do more multithreading, you can increase this parameter. |
JobStorageLocation | C+CP | string | This is where job data is stored on the client MATLAB system (i.e. the system MATLAB is submitting jobs from). |
AdditionalProperties | C+CP | structure | This structure contains all of the "AdditionalProperty" type parameters mentioned above. This is where most of the Slurm/sbatch specific parameters go, and due to the large number of possible fields it is discussed in its own table below. |
These should be set properly by configCluster and should generally not be changed | |||
Modified | C | logical | Whether or not the Cluster object has been modified since being created from the Cluster Profile. This is a read only parameter; you cannot modify it directly. |
ClusterMatlabRoot | C+CP | string | This is the path to the MATLAB installation on the HPC cluster. You should not need to change this. |
Host | C | string | The hostname of the client system (i.e. the system MATLAB is submitting jobs from). This is a read-only parameter. |
RequiresOnlineLicensing | C+CP | logical | This specifies whether the cluster requires Online licensing for MATLAB. This should be false for UMD HPC clusters. |
LicenseNumber | C+CP | string | If the cluster requires Online licensing for MATLAB, this is the license number it should use. As UMD HPC clusters do not require Online licensing, this should be left at its default value. |
OperatingSystem | C+CP | string | The type of Operating system on the HPC cluster. For UMD HPC clusters this should be set to "unix". |
HasSharedFilesystem | C+CP | logical | This specifies whether or not there is a filesystem shared between the client MATLAB process (i.e. the one HPC jobs are submitted from) and the MATLAB processes running on the HPC compute nodes. You should not change this; it should be false for the local workstation scenario, and true for the interactive node and batch-like scenarios. |
PluginsScriptsLocation | C+CP | string | This specifies the path to the directory (on the client system) where the MPS Slurm plugin scripts are to be found. It should be properly set up by the configCluster command so you should not be changing it. |
These are generally only needed for more advanced use cases. | |||
AutoAttachFiles | CP | logical | This determines whether the client MATLAB system should automatically send the code files listed in the AttachFiles parameter to the HPC cluster. This defaults to true (but since AttachFiles defaults to an empty list, that does not do anything by itself). Generally you should not change this. |
AttachFiles | CP+J | cell array | This is a list of files which the client MATLAB system should send to the HPC cluster with the job. By default it is empty. While many users will not need to use this, it can be useful in some cases. See Mathwork's MATLAB documentation for more information. |
AdditionalPaths | CP | cell array | This is a list of directories (on the HPC cluster) which should be added to the search path for MATLAB workers. By default it is empty. While many users will not need to use this, it can be useful in some cases. See Mathwork's MATLAB documentation for more information. |
NumWorkersRange | CP | list of integers | This specifies the range of the number of workers for jobs submitted via Cluster objects derived from this Cluster Profile. By default it is 1 to infinity. Generally does not need to be changed. |
CaptureDiary | CP | logical | This defaults to false. It controls whether the command window output is returned. |
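To see the current values of these top-level parameters, you can simply display a Cluster instance at the MATLAB prompt. A small sketch, assuming a profile named 'Zaratan HPC' created by configCluster:

```matlab
cl = parcluster('Zaratan HPC');  % instantiate a Cluster from the profile
cl                               % display the top-level parameters
cl.AdditionalProperties          % display the AdditionalProperties fields
```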
As noted in the table above, there is a top-level parameter named `AdditionalProperties` which is a structure containing a potentially large number of additional parameters. Many of these parameters directly correlate with parameters passed to the Slurm sbatch command, and so this field is typically the one which gets the most customization. As the `AdditionalProperties` property is visible in both Cluster Profile and Cluster objects, these parameters can also be set/modified for both object types. The recognized parameters for AdditionalProperties are:
Parameter Name | type | Description |
---|---|---|
These parameters are not set by configCluster, and basically correspond to various sbatch parameters. You might need to add/modify these to have MATLAB submit jobs with the correct parameters. | ||
AccountName (or Account) | string | The name of the Slurm allocation account the job should be charged against. If omitted, Slurm will use your default allocation account. You might need to set this if there are multiple Slurm allocation accounts you have access to. |
EnableDebug | logical | Defaults to false. If set, the various MPS Slurm integration scripts will produce some debugging output. This is usually only needed for debugging issues with the integration scripts. |
EmailAddress | string | If EmailType is set, this is the email address which Slurm will email about job status changes. If omitted, it will default to your username with @umd.edu appended. It is ignored unless EmailType is set. This value is passed to the sbatch command with the --mail-user flag; see the documentation on sbatch email flags for more information. |
EmailType | string | This causes Slurm to send email (to the address set in the EmailAddress parameter) on certain changes in job status. This parameter takes a string of comma delimited email states, without spaces (e.g. 'BEGIN,END') as discussed in the documentation on sbatch email flags; the values specified here are passed to sbatch via the --mail-type flag. |
Features (or Feature or Constraint) | string | This should be a comma delimited list of features, without spaces, which gets passed to sbatch via the --constraint flag. Slurm will only assign nodes with the specified features to the job. See the documentation on the sbatch --constraint flag for more information. |
GpusPerNode | number | This specifies the number of GPUs that MATLAB should instruct Slurm to request per node assigned to the job. If set and greater than zero, a --gpus-per-node flag will be added to the sbatch command requesting this many GPUs (of the type specified by the GpuCard parameter, or any type if that parameter is not specified). The Partition parameter will also default to gpu if this is greater than zero. More information can be found in the documentation on requesting GPUs. |
GpuCard (or GpuType) | string | If this and GpusPerNode are set, then the appropriate flags are passed to sbatch to request that many GPUs per node of the type specified in this parameter. If only GpusPerNode is set, then the flags only request the specified number of GPUs per node, and they can be any type (which is generally not what is wanted). This should take a GPU type name, e.g. 'a100' or 'a100_1g.5gb'. See the documentation on requesting GPUs for more information. |
MemUsage (or MemPerCpu) | string | If this parameter is provided, MATLAB will pass this value to sbatch via the --mem-per-cpu flag. It should take either a positive integer (unit defaults to MB) or a positive integer followed by M or G for MB or GB, respectively. For more information, see the documentation on requesting memory. |
NumNodes (or Nodes) | number | If this parameter is given, its value will be passed to sbatch via the --nodes flag, as described in the documentation on specifying node requirements. Normally this parameter should be omitted and you should just let the scheduler choose the number of nodes to request. |
ProcsPerNode | number | If given, this value will be passed to the sbatch command as the argument of the --ntasks-per-node flag, which will set the maximum number of tasks (i.e. MATLAB workers) which will be assigned to a single node. Normally it is recommended to leave this unset and let the scheduler figure this out. |
Partition (or QueueName) | string | This parameter will, if set, cause the sbatch command to be passed its value with the --partition flag. You should normally leave this unset unless you are trying to use the debug partition. If it is not set and the parameter GpusPerNode is unset or set to zero, no --partition flag will be passed to sbatch, and Slurm will default the partition with its logic. If this parameter is not set and GpusPerNode is greater than 0, then the partition will be defaulted to "gpu". See the documentation on specifying partitions for more information. |
RequiresExclusiveMode | logical | If this parameter is given and true, the --exclusive flag will be passed to the sbatch command, causing Slurm to schedule this job on a node with no other jobs running on it, as described in the documentation on exclusive mode. You are charged for the entire node in this case, which on Zaratan is 128 cores, so it is not advisable to use exclusive mode in most cases. |
Reservation | string | If the parameter is provided, its value will be passed to sbatch as the argument of the --reservation flag, as discussed in the documentation on reservations. You should only use this parameter if you were instructed to do so by your professor or system staff; if you specify a reservation which is not active or which you do not have access to, either your job will not be submitted or it will wait forever in the queue. |
Resources (or Resource) | string | The value of this parameter, if set, will be passed to sbatch as the argument of the --gres flag, which as discussed in the documentation on features and resources will request specific resources from the nodes the job is running on. For requesting GPUs, it is preferable to use the GpusPerNode and GpuCard parameters instead. |
WallTime | string | This parameter is used to specify the maximum amount of time a job can run. It is passed to sbatch as the argument of the --time flag, and should be either an integer number of minutes, or hours:minutes or days-hours:minutes. You will generally need to set this, as the default walltime is only 15 minutes. See the documentation on setting walltime for more information. |
AdditionalSubmitArgs | string | This parameter provides a catch-all. The value of this parameter should be valid flags to the sbatch command, and it is simply passed to the sbatch command unmodified. Although any valid sbatch flag can be set using this parameter, the intended use is to allow you to set any of the myriad of sbatch flags which are not covered by the other parameters listed here. |
These parameters should be set properly by configCluster but might be changeable. | |
Username | string | This should be your username on the HPC cluster. |
RemoteJobStorageLocation | string | This is the path to a directory on the HPC cluster where MATLAB can store job data. It is set based on user input or defaults by the configCluster command. You can change this if you want to use a different directory, but doing so might cause MATLAB not to find the results, etc. of previous jobs. The default value is the MatlabParallelServer directory of your main scratch (Zaratan) or lustre (Juggernaut) directory. Please remember to periodically delete folders under this directory for old jobs that are no longer needed. |
UseUniqueSubfolders | logical | This instructs MATLAB to store job data on the HPC cluster in subfolders of RemoteJobStorageLocation based on your username and the MATLAB version. We default this to true as doing so reduces conflicts due to different MATLAB versions. |
UseIdentityFile | logical | This instructs MATLAB to use SSH public key authentication when ssh-ing to the remote HPC cluster's login node. It is only used in the local workstation scenario. Unfortunately, this does not appear to work with Zaratan at this time. If not set, MATLAB will prompt you for a password to access the HPC cluster when needed, at most once per MATLAB session. |
These parameters should be set properly by configCluster and generally should not be changed. | ||
AuthenticationMode | string | This is used by the MPS Slurm scripts to determine how to authenticate to the remote cluster in the local workstation scenario. It is ignored in the other scenarios. Based on our experimentation, for submitting jobs to the Zaratan HPC cluster, this must be set to Multifactor, even when using the campus VPN (in which case no explicit second factor is requested). Furthermore, even with this setting MATLAB does not seem to support the case when you are trying to connect to Zaratan and you are not on the VPN, so that ssh would normally ask for a second factor. So just leave this set to Multifactor, and always use the campus VPN. |
ClusterHost | string | This is the hostname to which MATLAB must ssh in order to use the sbatch command in the local workstation scenario. It is ignored in the other scenarios. It should normally be the hostname for the cluster's login nodes. |
Now that we have described the various Cluster and Cluster Profile parameters, how does one go about setting or modifying them? The actual steps are a bit different depending on whether you wish to set/modify parameters in a Cluster Profile or in an actual Cluster instance. Modifying parameters in a Cluster instance does not directly affect the Cluster Profile (although as we will see later, there is a process by which you can save such changes back to the Cluster Profile), but will affect jobs submitted from that Cluster object. Modifying a Cluster Profile will affect future Cluster instances derived from that Cluster Profile, but will not affect previously instantiated Cluster objects.
Generally, we recommend setting up multiple Cluster Profiles, one for each basic job type you commonly use (e.g. maybe a Cluster Profile for a standard CPU job, another for jobs that need longer runtime and/or more memory, another for standard GPU jobs, etc.). You can create the base Cluster Profile using the configCluster function, and/or duplicate existing Cluster Profiles within the MATLAB Cluster Profile Manager GUI, and then edit and customize the Cluster Profiles in the Cluster Profile Manager or from the MATLAB command prompt.
When you have custom settings needed only for a small number of jobs,
these customizations can be made in the Cluster object, after
instantiating it from the closest matching Cluster Profile. Generally,
modifications to the Cluster object need to be done at the MATLAB
command prompt. If you really prefer GUIs, you could create a
temporary Cluster Profile in the Cluster Profile Manager
by duplicating the closest normal Cluster Profile, make the customizations
in the temporary profile, and then instantiate the cluster from the
temporary profile, and delete the temporary profile when it is no
longer needed.
If you have access to the MATLAB GUI, the easiest way to modify a Cluster Profile is via the Cluster Profile Manager. For parameters that are only present in Cluster Profiles and not in Clusters, this is the only way I know of to modify the parameter. It can be opened via the Create and Manage Clusters... entry in the Parallel drop-down in the ENVIRONMENT section of the top menu bar of the GUI. Once opened, you should see a list of Cluster Profiles in the left panel, and the right panel should contain a listing of the various parameters and values.
From the Cluster Profile Manager you can:
* Make the selected profile the default using the Set as Default button in the MANAGE PROFILE section of the top menu bar.
* Delete the selected profile using the Delete button in the MANAGE PROFILE section of the top menu bar.
* Copy the selected profile using the Duplicate button in the MANAGE PROFILE section of the top menu bar.
* Rename the selected profile (i.e. change the Profile top level parameter) using the Rename button in the MANAGE PROFILE section of the top menu bar.
* Edit the selected profile using the Edit button in the MANAGE PROFILE section of the top menu bar. This will cause the right side panel to change from display mode to edit mode. The top-level parameters can just be changed by entering the new value in the input box and/or choosing a new option from the drop-down. For the parameters which are fields of the AdditionalProperties parameter, you can:
  * change an existing value by clicking in the Value column for the appropriate field and either typing the new value in or selecting a new value from the dropdown,
  * delete an existing parameter by selecting it and clicking the Remove button, or
  * add a new parameter by clicking the Add button, and then entering the (case-sensitive) name of the new parameter, selecting the correct Type from the drop-down, and either typing in the new Value or selecting it from the dropdown.
* When you are finished editing the profile, click the Done button to save it. You can click on the Cancel button if you wish to abort your changes without saving them.
At this time, I am unaware of any way to directly modify the parameters of a Cluster instance in a GUI fashion. You could use the Cluster Profile Manager to modify the base Cluster Profile for the Cluster (or a copy of it) to set the parameters as you desire them to be in the Cluster, then derive the Cluster from this modified Cluster Profile, and then either reset the Cluster Profile (or possibly delete it if it was just a temporary copy).
To directly modify a Cluster instance after it is created, you can do so in the MATLAB command line (or in a script), and it is reasonably simple. The "top level" parameters can simply be set in assignment commands; e.g. if the Cluster instance is in a MATLAB variable named `cl` and you wish to change the NumWorkers parameter to 64, you can use a code snippet like:

```matlab
cl.NumWorkers = 64
```
For parameters which are fields under `AdditionalProperties`, the process is similar, but you need to add the `AdditionalProperties` property in the assignment. E.g., using the same Cluster instance `cl` again, to set the WallTime parameter to 4 hours and 30 minutes, you could use a code snippet like:

```matlab
cl.AdditionalProperties.WallTime = '4:30:00'
```
To set or modify parameters in a Cluster Profile from the command line, the process is to create a Cluster instance that derives from the profile, and set the parameters for the Cluster as described above. Once everything is set as desired, you can save the settings of this Cluster instance back to the Cluster Profile with the `saveAsProfile` method on the Cluster instance, passing it the name of the Cluster Profile to save to (this can be the name of the Cluster Profile the Cluster was derived from, the name of a different existing Cluster Profile, or the name of a new Cluster Profile to be created). E.g., if you used the `configCluster` command to create a standard profile named `Zaratan HPC`, you can create a new profile with the same basic setup for submitting jobs to the debug partition, with up to 64 workers and a 15 minute walltime, with the commands:
```matlab
% Create a Cluster instance from 'Zaratan HPC' Profile
cl = parcluster('Zaratan HPC')
% Set max walltime to 15 minutes, number of workers to 64, and
% partition to debug
cl.NumWorkers = 64
% NOTE: we set WallTime to a string value to preserve type
cl.AdditionalProperties.WallTime = '15'
cl.AdditionalProperties.Partition = 'debug'
% And save as a new Cluster Profile
cl.saveAsProfile('Zaratan: Debug partition, 64 cores')
```
Note that the command line/scripted approach to setting or modifying parameters for a Cluster Profile is restricted to setting/modifying parameters that are common to both Cluster and Cluster Profile instances.
When using the MATLAB GUI, you can access the Cluster Profile Manager, which can be opened via the Create and Manage Clusters... entry in the Parallel drop-down in the ENVIRONMENT section of the top menu bar of the GUI. Once opened, you should see a list of Cluster Profiles in the left panel, and the right panel should contain a listing of the various parameters and values. Once you select a Cluster Profile, you can click on the Validate button to cause MATLAB to run some tests on the Cluster Profile.

Note: The validation process will only run if the NumWorkers parameter is set to a finite value. Generally, the validation process is faster with smaller values of NumWorkers.
NOTE: When using MPS to connect MATLAB to the Zaratan HPC cluster from your local workstation (e.g. your laptop or desktop), please ensure that you are connected to the campus VPN. Otherwise, multifactor authentication will be required, and the MPS code to connect to the cluster has trouble handling that.
When the Validate button is clicked, MATLAB will start a series of five tests. Most of these tests involve the submission of jobs to the configured HPC cluster, and so these tests can take a little while. In addition to the time needed for MATLAB to compose and submit the job, there will be the time the job spends in the queue waiting for the scheduler to allocate resources to it. And the jobs themselves take some minutes to run. The time spent waiting in the queue will generally increase with the resources requested, and in my experience the MATLAB overhead of collecting data from a large number of workers can significantly increase the actual run time.
For jobs submitted from MATLAB running on a node of the HPC cluster (e.g. the interactive-node or batch-like scenarios), all five tests should pass. For jobs running on your local workstation, the first four tests should pass. The final test (Parallel pool test (parpool)) requires that the MATLAB workers be able to initiate network communications with the MATLAB process from which the jobs were launched, which in this case means that the HPC compute nodes will need to be able to initiate network communications with your local workstation. Even when the "local workstation" is on campus with a wired connection, there are often network address translation and other networking complications that make such communications difficult. The situation is greatly exacerbated when the local workstation is off campus or on a home network.
The failure of the Parallel pool test (parpool) does indicate a real limitation in the functionality of MPS, namely that certain parpool features will not work. However, much of the MPS functionality still works, and the failing parpool features are not used by most users. So for the local workstation scenario we usually consider the Cluster Profile as working properly if the first four validation tests pass.
The following is a quick guide to using MATLAB MPS (formerly DCS) to submit jobs to one of the DIT maintained HPC clusters at UMD. It is assumed that you have already gone through the process of setting up and configuring the MPS Slurm integration scripts on the system on which you will be running the MATLAB process from which you wish to submit jobs to the HPC cluster. If you are planning to submit jobs to the HPC cluster from MATLAB running on your local workstation (local workstation scenario), the scripts need to be downloaded and installed and the `configCluster` function needs to be run on your local workstation. If you are planning to run MATLAB from the cluster, e.g. using the interactive MATLAB desktop of the OnDemand portal (the interactive node scenario) or from within a batch job (batch-like scenario), then you only need to run the `configCluster` command once on any of the nodes of the HPC cluster. You can run `configCluster` more often, especially to define Cluster Profiles for specific classes of jobs you plan to submit, but at a minimum it must be run once on each local workstation you plan to submit jobs to the HPC cluster from, and once on a node of the HPC cluster if you plan to submit jobs from MATLAB running on the cluster to the cluster.
NOTE: When using MPS to connect MATLAB to the Zaratan HPC cluster from your local workstation (e.g. your laptop or desktop), please ensure that you are connected to the campus VPN. Otherwise, multifactor authentication will be required, and the MPS code to connect to the cluster has trouble handling that.
This section discusses the various aspects of submitting jobs to the HPC cluster from MATLAB, as well as some related tasks like checking the status of the job, retrieving output, diagnosing problems, etc. The basic process is similar for each of the scenarios.
To submit a job, the basic process is:
1. Create a Cluster instance with the `parcluster` command, saving the Cluster instance it produces to a variable, e.g. `cl`. The parcluster command can take as an argument the name of a Cluster Profile from which the Cluster will be derived; if omitted, the default Cluster Profile will be used.
2. Submit jobs using the `batch` function; it is also possible to explicitly create a job with the `createJob` (or `createCommunicatingJob`) function, followed by the `createTask` and `submit` functions. While the latter approach gives you much more control over the job, it is also significantly more complicated to use than the `batch` function. For this reason, we only discuss the `batch` function approach in this documentation; see the reference on Detailed Job and Task Control in the Mathworks MPS links section for more information on the other approach.
First, you need to define a "cluster" to submit jobs to. This holds the information about the parallel workers, etc. For most cases, it will suffice to enter a command like:
```matlab
>> cl = parcluster;
>> cl = parcluster('Name of Cluster Profile')
```
You can choose whatever variable you like instead of `cl`, but if so be sure to change it in the following examples as well. The first case will create a cluster instance using the default cluster profile, while the second case will use the cluster profile with the name specified as the argument to the `parcluster` function.
Depending on the requirements of your calculation and how much customization of the Cluster Profile you did previously, you might or might not need to configure additional parameters in this Cluster object. If you typically only have a handful of types of standard jobs that you are going to submit from MATLAB to the cluster, we recommend defining a separate Cluster Profile for each type of job; this way you can just specify the name of the Cluster Profile for the type of job in the previous step, and you can skip this step.
Some typical parameters that might need to be configured are:
* NumWorkers: the maximum number of workers for the job
* AdditionalProperties.WallTime: the maximum wall time for the job
* AdditionalProperties.MemUsage: the memory required per worker
* AdditionalProperties.Partition: e.g. if you wish to use the debug partition
Setting these and other parameters, both for the specific Cluster instance and/or for the Cluster Profile, was discussed in the section on setting Cluster/Cluster Profile parameters.
E.g., to set our Cluster instance `cl` from the previous example to use 12 workers for a maximum wall time of 8 hours and to request 6 GB of RAM per worker, we could use a code snippet like:

```matlab
>> cl.NumWorkers = 12
>> cl.AdditionalProperties.WallTime = '8:00:00'
>> cl.AdditionalProperties.MemUsage = '6G'
```
Finally, you should create a job for your MATLAB calculation and submit it to the cluster. This is most easily done with the MATLAB `batch` function. You can invoke the `batch` function as a method of the Cluster instance you created previously, or as a global function passing the Cluster instance as the first argument. (If you call it as a global function without passing the Cluster instance, MATLAB will create a Cluster instance from the default Cluster Profile and use that.) The next arguments tell MATLAB the calculation you wish to submit to the cluster; this can be specified as any of:
* The name of a MATLAB script: e.g., with the Cluster instance `cl` from the previous example, to submit the script `myscript.m` use either `batch(cl, 'myscript')` or `cl.batch('myscript')`. Note: `myscript.m` must be in your MATLAB search path on the client MATLAB system; it will be copied if needed to the cluster.
* A MATLAB command string: e.g. `batch(cl, 'y = magic(3)')`.
* A handle to a MATLAB function, followed by the number of return values and a cell array of arguments: e.g., if you have a function `myFunction` in the file `myFunction.m` which returns two values and you wish to evaluate it for the parameters `1, 7.2, 3.5`, you could use a snippet like `batch(cl, @myFunction, 2, {1, 7.2, 3.5})` or `cl.batch(@myFunction, 2, {1, 7.2, 3.5})`.
If needed, the above arguments can be followed by a list of name, value pairs to specify options to control even more behavior of the submitted job. When submitting jobs from your local workstation to the Cluster, the following options might be helpful:
* `'CurrentFolder', '.'`: This specifies the directory where the MATLAB processes on the cluster will start. If omitted, it will default to the current directory for the MATLAB process from which the jobs were submitted. In the local workstation scenario, that directory is unlikely to exist on the HPC cluster and will result in a warning being produced when the job runs. Although it is only a warning and likely not to cause problems, setting the CurrentFolder to a path that exists on the HPC cluster will suppress the warning. (Setting this to `'.'` will cause the MATLAB process to run from the directory the ssh process ended up in, which should always exist.)
* `'AutoAddClientPath', false`: This boolean flag determines whether the user-added paths on the client MATLAB process should be added as paths to the worker MATLAB processes running on the HPC cluster. Again, in the local workstation scenario, these paths are unlikely to exist on the HPC cluster, and if they do not exist, warnings will be produced when the job runs. These are just warnings, and MATLAB should just ignore the non-existent paths without causing problems, but setting this parameter to false will prevent the paths from being added, effectively suppressing the warnings.
In the local workstation scenario, the MATLAB process on your local workstation needs to ssh into one of the login nodes of the HPC cluster. If this is the first time you needed to connect to the login node during this MATLAB session, it will prompt you for your password on the cluster. Subsequent jobs submitted from the same session will re-use the same ssh session and/or cache the password for the duration of this MATLAB session.
Combining all of these snippets, a typical sequence for submitting a job to the Zaratan HPC cluster from MATLAB running on your local workstation to run your myFunction function with the parameters as described above would be:
```matlab
>> % define the cluster
>> cl = parcluster('Zaratan HPC');
>>
>> % Adjust the cluster parameters
>> cl.NumWorkers = 12;
>> cl.AdditionalProperties.WallTime = '8:00:00';
>> cl.AdditionalProperties.MemUsage = '6G';
>>
>> % Submit the job
>> job = cl.batch(@myFunction, 2, {1, 7.2, 3.5}, ...
       'CurrentFolder', '.', ...
       'AutoAddClientPath', false);
>>
>> % Display the (MATLAB) job ID and state
>> job.ID
>> job.State
```
The above sequence of MATLAB commands submitted a job to the Zaratan HPC cluster to run your function `myFunction` with parameters `1, 7.2, 3.5`. The `batch` function returns a job instance to the variable `job`, and we then print out some information about the job, namely its (MATLAB) ID number and the state of the job. The MATLAB ID number is a number used within MATLAB to identify the job; this is not the same as the Slurm job ID number, and will typically be an integer incrementing with every job you submit from MATLAB. The State parameter returns the state of the job; common values are:
* queued: the job is in the Slurm queue, waiting to be assigned resources.
* running: the job is running on the HPC cluster.
* finished: the job completed.
: the job completedNote that MATLAB will return the job instance and allow you to continue your MATLAB session, running other MATLAB commands, and even submitting other jobs to the HPC cluster while the first job is waiting to run or running (or finished running). You could actually even disconnect from and shutdown your MATLAB session, and even turn off the system (if it is your local workstation) or log off (if you submitted the job. The job on the HPC compute node will not be affected by such and will continue to run. Sometime later you can start up a new MATLAB session and re-load the job instance, and use that to check the status of the job and, if finished, look at the results.
To find and/or re-load job instances that were submitted in a previous MATLAB session (or from the current MATLAB session that you forgot to save to a variable or otherwise misplaced), you can use the `Jobs` parameter of the Cluster instance or the `findJob` method. The first step is to get or re-create the Cluster instance. You can just instantiate a new Cluster instance from the same Cluster Profile as you used previously. (I believe it does not even need to be the same Cluster Profile, only that the local JobStorageLocation has to be the same as was used previously.) Once you do so, you can look at all jobs that you submitted by examining the `Jobs` parameter of the Cluster instance. This will show all jobs you have submitted, until you explicitly delete a job from the list. The list will show the index number into the array, the Job ID, the type and state of the job, along with some other information. You can select a specific job from this array by index. This is probably the easiest way to select a job if there are only a small number of them.
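For example, a short sketch of selecting a previously submitted job by its position in the Jobs array (assuming the 'Zaratan HPC' profile from earlier):

```matlab
>> cl = parcluster('Zaratan HPC');
>> cl.Jobs          % list every job known to this JobStorageLocation
>> job = cl.Jobs(3) % select the third job in the listing by index
```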
Alternatively, you can use the `findJob` function. The first and only required parameter to the `findJob` function is the Cluster instance. If that is the only parameter given, it will return the same list of jobs as the `Jobs` parameter of the Cluster instance. You can give additional pairs of parameters, the first in the pair being a string giving the name of a field of the Job object, and the second being a value to match against. This will filter the list of jobs so that only jobs with the specified field having the specified value pass the filter. If more than one pair of field names and values are given, each filter is applied in turn, effectively 'and'-ing the results. So you could use something like `findJob(cl, 'State', 'running')` to show all jobs in the running state, or `findJob(cl, 'ID', 9)` to return the job with ID of 9.
Once you have the MATLAB job instance (we will assume in a MATLAB
variable named job
, there are a number of
operations you can perform:
- `job.ID` will return the MATLAB ID for the job.
- `job.State` will return the state of the job.
- `job.Tasks` will return a list of tasks for the job. Of particular interest is the field `SchedulerID` of the task, which lists the Slurm job number associated with the task.
- `job.RunningDuration` will tell you how long the job took to run.
- `wait(job)` : As mentioned above, MATLAB returns immediately after the job is submitted, allowing you to do other things while the job is queued and/or running. If, instead of doing something else and periodically checking whether the job has finished, you issue the `wait(job)` command, MATLAB will not return the prompt until the job `job` finishes. The job will not finish any more quickly, but this can be useful if you submitted the job from your local workstation and do not want to do anything else until it finishes. You probably should not use this if submitting a job from MATLAB running on the cluster; if you are not planning to do anything until the job finishes and it is not a very short job, you should probably relinquish the compute node you are running on.
- `load(job, 'VariableName')` : if the job represented by `job` ran a script (as opposed to a function), the `load` function will transfer the variable named `VariableName` from the worker's name space to the local MATLAB workspace. This only works for jobs calling a script; see `fetchOutputs` for jobs running a MATLAB function.
- `x = fetchOutputs(job)` : if the job represented by `job` ran a MATLAB function (as opposed to a script), the `fetchOutputs` function will return the return value of the evaluated function.
- `job.delete()` : This will delete the job, removing all of its data from disk and causing it to no longer appear in the `Jobs` array for the Cluster instance. You should periodically clean up the storage in the `RemoteJobStorageLocation` on the cluster.
- `clear job` : This will delete the job variable from MATLAB.
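For example, to simply block until a job finishes and then collect its results:

>> wait(job);               % returns once the job is no longer queued or running
>> x = fetchOutputs(job);   % the return value(s) of the submitted function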
Putting the above together, if you previously submitted a
MATLAB function as a job to the Zaratan cluster, where the
MATLAB ID of the job was 117, and you wish to check on the job's
status and, if finished, save the return value to a file
`matlab-job117.out`, you could use a code snippet
like:
>> jobid = 117;
>> savefile = sprintf("matlab-job%d.out", jobid);
>>
>> cl = parcluster('Zaratan HPC');
>> job = findJob(cl, 'ID', jobid);
>> if isempty(job)
>> % Job not found, report an error and exit
>> error("No job with id %d found", jobid);
>> end
>>
>> if job.State ~= "finished"
>> % Job not finished, warn user and exit
>> warning("Job with id %d not finished, state=%s", jobid, job.State);
>> exit
>> end
>>
>> % Output job state and diary
>> job.State
>> diary(job)
>>
>> % Get and save its return value
>> return_value = fetchOutputs(job);
>> save(savefile, 'return_value');
>> exit
After this snippet saves the return value of the function
called in the main job, you could use the MATLAB load
command to load the variables saved to matlab-job117.out
for further processing.
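Note that because the file name does not end in `.mat`, you must tell `load` to treat it as a MAT-file; a sketch:

>> S = load('matlab-job117.out', '-mat');  % '-mat' needed since the extension is not .mat
>> S.return_value                          % the value(s) returned by the job's function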
As shown above, MATLAB makes it possible to use MPS to submit jobs to the HPC cluster from the MATLAB command prompt, and such jobs can leverage MPI to do distributed-memory parallelism (which allows a single MATLAB job to span multiple compute nodes, thereby bringing more power to bear than a single node can). But how can batch users avail themselves of this? The answer is unfortunately a bit convoluted: if you do not wish to start an interactive job just to launch the MATLAB jobs, the alternative is to submit a small MATLAB job via sbatch which in turn submits the real, production job from within MATLAB. In practice, this adds only a small amount of overhead, as the wrapper job submitted by sbatch is small and short-lived (so it can be submitted to the debug partition) and has a fairly simple structure. You could use a job script like:
#!/bin/bash
#SBATCH -p debug
#SBATCH -n 1
#SBATCH -t 10
module load matlab
cd ~/my-matlab-stuff
MatlabCmd=$( cat <<EOF
cl = parcluster('Zaratan HPC');
cl.AdditionalProperties.WallTime = '8:00:00';
job = cl.batch(@myFunction, 2, {1, 7.2, 3.5});
% print the MATLAB job ID (no semicolon, so it appears in the Slurm output)
job.ID
exit
EOF
)
# Note: no -nojvm flag here; the Parallel Computing Toolbox requires the JVM
matlab -nodisplay -nosplash -r "$MatlabCmd"
exit
You can submit this job script with sbatch, and because it uses the debug partition it should start quickly even if the cluster is fairly loaded. Once this short wrapper job starts, it starts MATLAB just long enough to submit the production job, then exits. You should then see the real production MATLAB job in the queue. You can grab the MATLAB job ID from the Slurm output file of the wrapper job, and use it to check the job's status and retrieve the results, either in an interactive MATLAB session or with a script like the earlier snippet that checks the status of the job and, if finished, saves the results to a file for later processing.
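For example, assuming the job script above is saved as `submit-matlab-job.sh` (a hypothetical name; the Slurm job number shown is illustrative):

$ sbatch submit-matlab-job.sh
Submitted batch job 4021337
$ cat slurm-4021337.out    # after the wrapper finishes; the MATLAB job ID is printed here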
The campus MATLAB license includes a fair number of licensed toolboxes. However, there is also a large number of free, community-provided toolboxes --- far too many for the Division of Information Technology to install them all. For the most part, any individual toolkit/toolbox/package add-on is used by at most a handful of people, so it is more efficient for users to install these themselves.
This is relatively simple to do in more recent MATLAB versions: from your main MATLAB screen, click on the "Add-Ons" drop-down and select "Get Add-ons". This might take a little while to open due to the large number of add-ons available, but once open there are several ways to look for add-ons. If you know which add-on you want, the search bar at the top right is probably the easiest way to find it. Find the add-on you desire and click on it.
Once the window for the particular add-on opens, there should be a button labeled "Install" in the upper right. Click on that, and the add-on should be installed into the appropriate location in your home directory.
You will likely need an account with Mathworks/Matlab in order to download the add-ons. You can create such an account at https://www.mathworks.com/mwaccount/register; it is advised that you register with your "@umd.edu" email address to get the full benefits of your association with the University.
For your convenience, we provide some links to some additional resources about MATLAB which you might find useful.
These are free tutorials and other documentation from Mathworks, the company which produces MATLAB.
Documentation and tutorials from Mathworks on general MATLAB usage, etc.:
Documentation and tutorials from Mathworks on the general subject of parallelizing workloads in MATLAB and the Parallel Computing Toolkit:
Documentation and tutorials from Mathworks on using MATLAB Parallel Server to submit jobs from MATLAB to an HPC cluster:
MathWorks also has a significant amount of web documentation on Parallel Server/MDCS available at https://www.mathworks.com/help/mdce/.
In the fall of 2014, we had a tutorial on the MATLAB Distributed Computing Server (MDCS), as the product now called MATLAB Parallel Server (MPS) was then known, led by an instructor from MathWorks.
The documentation from that is provided below --- although it is a little dated and there have been some minor changes since then, the basic concepts might still be useful.
Documentation and tutorials from Mathworks on integrating MATLAB
Parallel Server (MPS) with the Slurm HPC scheduler. This is mainly
for those who wish to better understand what the `configCluster`
command is doing, and/or who wish to do some advanced customization
of the Cluster Profiles: