3. Slurm

Slurm, which stands for “Simple Linux Utility for Resource Management,” is an open-source cluster management and job scheduling system. Its primary purpose is to efficiently manage and schedule the allocation of computing resources in high-performance computing (HPC) and cluster computing environments.

It is widely used in HPC and cluster computing environments to optimize resource utilization and manage the complex scheduling requirements of large-scale computing clusters. It provides a flexible and extensible framework for cluster management and job scheduling, making it an essential tool for researchers, engineers, and system administrators in such environments.

3.1. Slurm Commands

Check SLURM Version

To verify that SLURM is installed and running, you can check the version with:

sacct --version
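
Other Slurm client commands accept the same flag; for example, sinfo should report a matching version string:

sinfo --version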

Viewing Cluster Information

This command provides an overview of the cluster’s current state, including node availability, partitions, and node status.

sinfo

Node States

The --states=<flag> option in Slurm’s sinfo command allows you to filter nodes based on their state.

  • all: Displays nodes in all states (default if --states is not specified).

  • idle: Shows nodes that are currently available for running jobs.

  • alloc: Displays nodes that are currently allocated to jobs.

  • drain: Lists nodes that are marked for maintenance or have a drain state due to issues.

  • fail: Shows nodes in a failed state.

  • completing: Lists nodes that are finishing the execution of a job.

  • mix: Displays nodes in a mixed state, meaning some of the node’s CPUs are allocated to jobs while others remain idle.

  • down: Lists nodes that are marked as down, which may be due to hardware or network issues.

  • unkn: Shows nodes in an unknown state, typically because Slurm cannot determine their status.
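
For example, to list only idle nodes, or nodes that are down or draining in a particular partition (the partition name below is a placeholder), you could run:

sinfo --states=idle
sinfo --states=down,drain -p <partition_name>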

Allocating Resources

Request to allocate resources and create an interactive job session on a compute node:

salloc [OPTIONS]

After a node has been allocated, you have to log in to it:

ssh -X <username>@ara-login01.rz.uni-jena.de
salloc -p <partition>
ssh <allocated_node_name>
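
When you are done, leave the compute node and then exit the shell that salloc started; exiting that shell releases the allocation:

exit    # leave the compute node
exit    # leave the salloc shell and release the allocation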

Options

  • -n, --ntasks=<number>: Specifies the number of tasks (processes) you want to run. This option is particularly useful for parallel applications such as MPI programs.

  • --cpus-per-task=<number>: Defines the number of CPU cores or threads per task.

  • -p, --partition=<partition_name>: Specifies the cluster partition or queue where you want to allocate resources. Different partitions may have varying resource configurations.

  • --time=<time>: Sets the maximum time for which the allocated resources will be available. You can specify the time in various formats, such as minutes, hours, or days. For example, --time=1:00:00 allocates resources for 1 hour.

  • -N, --nodes=<number>: Defines the number of compute nodes you want to allocate. This option is useful when you need to distribute your tasks across multiple nodes.

  • --mem=<memory>: Specifies the amount of real memory required per node. You can specify memory in various units (e.g., MB, GB). For example, --mem=4G allocates 4 GB of memory on each allocated node; use --mem-per-cpu if you need to request memory per CPU instead.

  • --output=<output_file>: Redirects the standard output of the interactive session to the specified file.

  • --error=<error_file>: Redirects the standard error output of the interactive session to the specified file.

  • --mail-user=<your_email@example.com>: Specifies the email address to which job notification emails are sent.

  • --mail-type=<option1,option2,...>: Specifies which job events trigger an email notification. Valid options include:

    • NONE: No email notifications will be sent. This is the default if you don’t specify --mail-type.

    • BEGIN: An email will be sent when the job starts (begins execution).

    • END: An email will be sent when the job completes successfully (reaches its natural end).

    • FAIL: An email will be sent if the job fails to complete (exits with an error).

    • ALL: Triggers email notifications for all of the above events (BEGIN, END, FAIL) as well as REQUEUE.

These options can be combined to tailor the resource allocation according to your specific needs when requesting an interactive session with salloc. Remember that the availability of some options may depend on your cluster’s configuration and policies.
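
As a sketch, a combined interactive request might look like the following; the partition name, job name, and e-mail address are placeholders to replace with your own values:

salloc -p <partition_name> -N 1 -n 2 --cpus-per-task=2 --mem=4G --time=1:00:00 \
       --job-name=<job_name> --mail-user=<your_email@example.com> --mail-type=END,FAIL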

Monitoring Job Status

To check the status of all submitted jobs, use squeue. It displays information about running, pending, and completing jobs. When you only want to see your own jobs, you can add the -u <username> flag or, in some cluster configurations, the --me flag:

squeue
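
For example, to restrict the listing to your own jobs:

squeue -u <username>
squeue --me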

To view your submitted job accounting information, such as start and end times, CPU usage, and more:

sacct
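
For a more selective view of a single job’s accounting data, you can pass a job ID and a field list; the fields below are just one possible choice:

sacct -j <job_id> --format=JobID,JobName,Partition,Elapsed,State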

In order to obtain detailed information about a specific job:

scontrol show job <job_id>

To check the node(s) allocated to your job:

scontrol show hostnames $SLURM_NODELIST

Canceling Jobs

Cancel a job that you’ve submitted by specifying its job ID:

scancel <job_id>
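
scancel also accepts filters; for example, to cancel all of your own jobs at once (use with care):

scancel -u <username>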

Task

  1. Log in to the ARA cluster.

  2. Create a .log file and record the last names of team members. Append the outcome and the command used for each of the following tasks.

  3. Check the SLURM version.

  4. List the idle nodes.

  5. List all jobs in the queue.

  6. Allocate resources from either the s_hadoop or s_standard partition with the following specifications: 10 minutes of runtime, 2 MB of memory, 2 nodes, 3 tasks, 2 CPUs per task, and a job name of your choice.

  7. List all jobs in the queue which are specifically related to you.

  8. Cancel the previously allocated job.

  9. List all jobs in the queue again, but only those that are related to you.

3.2. Module Loading

Modules are tools for modifying your shell environment. They enable you to load and unload software packages, libraries, and set environment variables as needed.

View Available Modules

  • module avail: List all available modules that can be loaded.

  • module avail <keyword>: Filter modules based on a keyword.

Load Modules

  • module load <module_name>: Load a specific module.

  • module load <module_name>/<version>: Load a module with a specific version.

Unload Modules

  • module unload <module_name>: Unload a loaded module.

List Loaded Modules

  • module list: Display the list of currently loaded modules.
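
A typical workflow combines these commands; the module name and version below are purely illustrative and will differ on your cluster:

module avail gcc          # see which GCC versions are available
module load gcc/10.2.0    # load one specific version (example version)
module list               # confirm that it is loaded
module unload gcc         # unload it when no longer needed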

3.3. SLURM Script

Here’s a basic example of a SLURM script that you would submit using sbatch:

#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --output=myjob.out
#SBATCH --error=myjob.err
#SBATCH --partition=<partition_name>
#SBATCH --nodes=<number_of_nodes>
#SBATCH --ntasks=<number_of_tasks>
#SBATCH --time=<time_limit>
#SBATCH --cpus-per-task=<cpus_per_task>

# Load any necessary modules (if needed)
# module load module_name

# Enter your executable commands here
# Execute the compiled program
echo "Hello, SLURM job!"

This file should be stored with a .sh extension. Assuming your job script is saved in a file named myjob.sh, open a terminal and run the following command:

sbatch myjob.sh

This command instructs SLURM to submit the job script myjob.sh for execution. SLURM processes the script and places the job in the queue, where it waits for available resources. When the job reaches the front of the queue, it is allocated the resources specified in your job script and begins execution. You can monitor the job’s status using commands like squeue or sacct.

The options you defined in your job script (#SBATCH lines) will be used to configure the job’s properties, such as the job name, output file, error file, partition, number of nodes, number of tasks, and the time limit. The actual commands that perform the work you want to execute should be included at the end of your job script, following the # Enter your executable commands here comment, like echo "Hello, SLURM job!" in the previous example.
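
Putting this together, a typical submit-and-monitor sequence might look like the following; the job ID printed by sbatch will differ on each submission:

sbatch myjob.sh           # prints something like: Submitted batch job <job_id>
squeue -u <username>      # watch the job while it is pending or running
sacct -j <job_id>         # inspect accounting data after it has finished
cat myjob.out             # view the job's standard output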

Task

  1. Log in to the Ara cluster.

  2. Get more information about the number of CPUs per node in the s_hadoop or s_standard partition.

  3. Write a C++ program that calculates the Fibonacci sequence using recursion, up to 100.

  4. Create a SLURM job script to submit the job to the SLURM job scheduler. Submit the job to the s_hadoop or s_standard partition, request 1 node with all of its CPUs and 1 task, and set a time limit of 10 minutes. Choose a specific job name, error file name, and output file name. Load any necessary modules (if needed).

  5. Print the time consumed by running this job to the output file.

  6. Monitor the status of your submitted job.

  7. Once the job is completed, review the output and error files to ensure that your C++ program ran successfully.