
Job Arrays

Slurm job arrays provide a simple way to submit a large number of independent processing jobs. For example, Slurm job arrays can be useful for applying the same or similar computation to a collection of data sets.

The following examples demonstrate the use of Slurm job arrays.

EXAMPLE1
#!/bin/sh
#SBATCH --job-name="array_job1"
# spawn three tasks, with $SLURM_ARRAY_TASK_ID set to 1, 2 and 3
#SBATCH --array=1-3
#SBATCH --ntasks=1
#SBATCH --mem=2000

echo "This is job #"$SLURM_ARRAY_TASK_ID" of the array "$SLURM_ARRAY_JOB_ID >> results.txt

exit 0
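Assuming the script above has been saved as array_job1.sh (the file name here is arbitrary), the whole array is submitted with a single sbatch call and its tasks can be monitored with squeue:

sbatch array_job1.sh
squeue -u $USER

In the squeue output each task of the array typically appears with an identifier of the form <array job ID>_<task ID>, e.g. 1250_1.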

The file results.txt might then contain the following (the tasks run independently, so the order of the lines is not fixed):

This is job #2 of the array 1250
This is job #1 of the array 1250
This is job #3 of the array 1250

In the following example, 5 single-core jobs are spawned and submitted. In each job the same single-core command is executed, but with a different input file. The input files are named file1, file2, and so on.

EXAMPLE2
#!/bin/sh
#SBATCH --job-name="array_job2"
#SBATCH --array=1-5
#SBATCH --ntasks=1
#SBATCH --mem=2000

single_core_command "file"$SLURM_ARRAY_TASK_ID
exit 0
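Here single_core_command is only a placeholder for whatever serial program is to be run. A minimal sketch of that line, assuming a hypothetical program my_analysis that reads one input file and writes one result file per task:

# my_analysis is a hypothetical program used only for illustration;
# task N reads fileN and writes resultN.out
./my_analysis "file${SLURM_ARRAY_TASK_ID}" > "result${SLURM_ARRAY_TASK_ID}.out"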

The third example demonstrates how the output files of the individual tasks can be named, and how the task ID range can be given with a step so that only selected values of $SLURM_ARRAY_TASK_ID are used (compare with Example 1).

EXAMPLE3
#!/bin/sh
#SBATCH --job-name="array_job3"
# task IDs 1, 3, 5, 7 (a range with a step of 2)
#SBATCH --array=1-7:2
# %A expands to the array job ID, %a to the task ID
#SBATCH --output=outfile%A_%a
#SBATCH --ntasks=1
#SBATCH --mem=2000

echo $SLURM_ARRAY_TASK_ID
exit 0

For an array job with ID 500, the output files will be outfile500_1, outfile500_3, outfile500_5, and outfile500_7. Their contents will be 1, 3, 5, and 7, respectively.
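For reference, the same set of tasks can also be requested as an explicit, comma-separated list of task IDs; the following directive is equivalent to the --array=1-7:2 line used above:

# explicit list of task IDs, equivalent to the range 1-7 with a step of 2
#SBATCH --array=1,3,5,7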

Users are advised to be careful with the use of job arrays. A limit of at most 160 tasks per job array is enforced.
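Besides keeping the total number of tasks within this limit, the % separator of the --array option can be used to cap how many tasks run at the same time; a sketch with illustrative numbers:

# 160 tasks in total, but never more than 20 of them running concurrently
#SBATCH --array=1-160%20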

For more details on the use of job arrays, users can consult the Slurm documentation.