In this example we’ll use the NumPy library to demonstrate a multi-threaded Python job.
Save the following code as numpy-demo.py:
import os
from time import perf_counter

import mkl
import numpy as np

# Read the core count allocated by SLURM and cap MKL's thread pool to match
num_threads = int(os.environ['SLURM_CPUS_PER_TASK'])
mkl.set_num_threads(num_threads)

N = 2000
num_runs = 5

np.random.seed(42)
x = np.random.randn(N, N).astype(np.float64)

# Time several SVD runs and report the fastest one to reduce noise
times = []
for _ in range(num_runs):
    t0 = perf_counter()
    u, s, vh = np.linalg.svd(x)
    elapsed_time = perf_counter() - t0
    times.append(elapsed_time)

print("execution time: ", min(times))
print("threads: ", num_threads)
Now save the following SLURM script as numpy-demo.sh:
#!/bin/bash
#
#SBATCH --job-name="NumPy Demo" # a name for your job
#SBATCH --partition=peregrine-cpu # partition to which job should be submitted
#SBATCH --qos=cpu_debug # qos type
#SBATCH --nodes=1 # node count
#SBATCH --ntasks=1 # total number of tasks across all nodes
#SBATCH --cpus-per-task=8 # cpu-cores per task
#SBATCH --mem-per-cpu=1G # memory per cpu-core
#SBATCH --time=00:01:00 # total run time limit (HH:MM:SS)
#
module purge
module load python/anaconda
srun python numpy-demo.py
Then submit the job with:
sbatch numpy-demo.sh
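While the job is queued or running, you can check its state with SLURM's squeue command:
squeue -u $USER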
The output should be in a file named slurm-####.out, where #### is the job ID, and should look like:
execution time: 1.5503690890036523
threads: 8
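To see how the run time scales with the thread count, you can resubmit with a different core count; options passed to sbatch on the command line override the matching #SBATCH directives in the script, for example:
sbatch --cpus-per-task=4 numpy-demo.sh
The script reads SLURM_CPUS_PER_TASK at run time, so the reported thread count will follow whatever you request.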