In this example, we will use a simple OpenMP C++ program and run it via SLURM. Here is the C++ code we will be using:
#include <iostream>
#include <omp.h>

int main(int argc, char* argv[]) {
    using namespace std;
    #pragma omp parallel
    {
        int id = omp_get_thread_num();
        int numthrds = omp_get_num_threads();
        cout << "Hello from thread " << id << " of " << numthrds << endl;
    }
    return 0;
}
Save the code as omp.cpp. Now compile it into a binary named omp using g++:
g++ -fopenmp -o omp omp.cpp
We will now run the binary omp using the following SLURM script:
#!/bin/bash
#
#SBATCH --job-name="Hello World OMP" # a name for your job
#SBATCH --partition=peregrine-cpu # partition to which job should be submitted
#SBATCH --qos=cpu_debug # qos type
#SBATCH --nodes=1 # node count
#SBATCH --ntasks=1 # total number of tasks across all nodes
#SBATCH --cpus-per-task=8 # cpu-cores per task
#SBATCH --mem-per-cpu=2G # memory per cpu-core
#SBATCH --time=00:01:00 # total run time limit (HH:MM:SS)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # one OpenMP thread per allocated core
module purge                                  # start from a clean module environment
./omp
Note that we set cpus-per-task to 8, so the job is allocated 8 CPU cores, and OMP_NUM_THREADS is set to the same value so OpenMP starts one thread per core.
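The link between the two settings can be sketched outside a real job. Inside a job, SLURM exports SLURM_CPUS_PER_TASK itself with the value given to --cpus-per-task; here we set it by hand to mimic that:

```shell
# Mimic what SLURM does inside a job: it exports SLURM_CPUS_PER_TASK
# with the value passed to --cpus-per-task (8 in our script).
SLURM_CPUS_PER_TASK=8
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # the line from the script above
echo "OpenMP will start $OMP_NUM_THREADS threads"
```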
Save the script as omp.sh and submit the job by running:
sbatch omp.sh
The output should go to a file named slurm-####.out, where #### is the job ID, and should look something like:
Hello from thread Hello from thread Hello from thread 6 of 8Hello from thread Hello from thread
0 of 8
4 of 8
53 of 8
Hello from thread 1 of 8
Hello from thread 2 of 8
Hello from thread 7 of 8
of 8