MATLAB

MATLAB jobs work well as serial (single-threaded) jobs. However, if your code uses MATLAB’s Parallel Computing Toolbox (e.g., parfor) or MATLAB’s implicitly multithreaded built-in functions (which are backed by multithreaded BLAS libraries), you can script your jobs to run over multiple CPU cores.
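
Built-in operations such as matrix multiplication and eig use this implicit multithreading automatically. As a quick sanity check inside a job, a small sketch like the following (using MATLAB’s built-in maxNumCompThreads function) shows how many computational threads MATLAB is actually using:

% Query (and optionally cap) the number of computational threads used for
% implicitly multithreaded built-ins such as BLAS-backed matrix operations.
nThreads = maxNumCompThreads;
fprintf('MATLAB is using %d computational threads\n', nThreads);
% maxNumCompThreads(4);   % uncomment to limit implicit multithreading to 4 threads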

At present, multi-node MATLAB jobs are not possible, so your Slurm script should always use #SBATCH --nodes=1.

Here we take an example from the MathWorks website which uses multiple cores in a parfor loop.

for_loop.m

poolobj = parpool;   % start a parallel pool with the default number of workers
fprintf('Number of workers: %g\n', poolobj.NumWorkers);

tic
n = 200;       % number of independent iterations
A = 500;       % size of each random matrix
a = zeros(n);
parfor i = 1:n
    % each iteration finds the largest eigenvalue magnitude of a random
    % A-by-A matrix; iterations are distributed across the pool workers
    a(i) = max(abs(eig(rand(A))));
end
toc
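
A bare parpool call lets MATLAB choose the pool size from its default profile, which may not match what Slurm actually allocated. A minimal sketch of sizing the pool to the job allocation, assuming --cpus-per-task is set so that Slurm exports SLURM_CPUS_PER_TASK, is:

% Size the pool to the Slurm allocation instead of the profile default.
% SLURM_CPUS_PER_TASK is set by Slurm when --cpus-per-task is requested.
ncpus = str2double(getenv('SLURM_CPUS_PER_TASK'));
poolobj = parpool(ncpus);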

We then use the following Slurm script to run the above MATLAB code via the scheduler.

#!/bin/bash
#
#SBATCH --job-name="Matlab" 	    # a name for your job
#SBATCH --partition=peregrine-cpu	# partition to which job should be submitted
#SBATCH --qos=cpu_debug				# qos type
#SBATCH --nodes=1                	# node count
#SBATCH --ntasks=1               	# total number of tasks across all nodes
#SBATCH --cpus-per-task=4        	# cpu-cores per task 
#SBATCH --mem-per-cpu=4G         	# memory per cpu-core

module purge
module load matlab

matlab -nodisplay -nosplash -r for_loop
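
As an alternative on newer MATLAB releases (R2019a and later), the -batch option runs the script noninteractively and exits automatically, returning a nonzero exit status if the script errors:

matlab -batch for_loop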

Save the script as matlab.sh and submit it with:

sbatch matlab.sh
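
While the job is queued or running, you can check its status with Slurm's squeue command, for example:

squeue -u $USER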

The output goes to a file slurm-######.out, named after the job ID.
It should look like this:

                            < M A T L A B (R) >
                  Copyright 1984-2022 The MathWorks, Inc.
                  R2022a (9.12.0.1884302) 64-bit (glnxa64)
                             February 16, 2022

 
To get started, type doc.
For product information, visit www.mathworks.com.
 
Starting parallel pool (parpool) using the 'local' profile ...
Connected to the parallel pool (number of workers: 4).
Number of workers: 4
Elapsed time is 9.388137 seconds.

Notice the time taken to finish the task, reported in the last line. You can change the value of cpus-per-task and see how this time changes!
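
For quick experiments you do not even have to edit the script: options passed to sbatch on the command line override the corresponding #SBATCH directives, so a sketch like the following requests 8 cores with the same matlab.sh:

sbatch --cpus-per-task=8 matlab.sh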