Serial Jobs

Serial jobs use only a single CPU-core.

First, let us write a simple Python program.

# This program prints Hello, World!

print('Hello, World!')

Save it as hello.py.
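Before submitting a job, you can verify that the program runs correctly, assuming a Python 3 interpreter is available where you are working (keep such test runs small):

python3 hello.py

which should print

Hello, World!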

Now we will write a SLURM script to run our serial Python code as a job:

#!/bin/bash
#SBATCH --job-name="Hello World"   # a name for your job
#SBATCH --partition=peregrine-cpu  # partition to which the job is submitted
#SBATCH --qos=cpu_debug            # quality of service (QOS)
#SBATCH --nodes=1                  # node count
#SBATCH --ntasks=1                 # total number of tasks across all nodes
#SBATCH --cpus-per-task=1          # cpu-cores per task
#SBATCH --mem-per-cpu=2G           # memory per cpu-core
#SBATCH --time=00:01:00            # total run time limit (HH:MM:SS)


module purge                   # start with a clean module environment
module load python/anaconda    # load the Anaconda Python module
srun python3 hello.py          # run the serial Python program

Save it as helloworld-python.sh and submit it using the command

sbatch helloworld-python.sh
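If the submission is accepted, sbatch replies with the ID assigned to your job, in the form

Submitted batch job <jobid>

You can then check the state of your pending and running jobs with

squeue -u $USER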

The output will be saved in a file named slurm-####.out, where #### is the job ID, and should look like

Hello, World!
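Once the job has finished, you can print the output file to the terminal, replacing #### with your actual job ID:

cat slurm-####.out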