Job Arrays

Job Array Introduction

Making a new copy of a script and submitting it separately for every input data file is time consuming. An alternative is to create a job array using the -t option in your qsub submission script. The -t option queues many copies of the same script at once, and you can use the $PBS_ARRAYID environment variable to differentiate between the different jobs in the array. The resources you specify in the qsub submission script are allocated to each job in the array individually, every time the script is called.

In this tutorial, we will be using three files:

array.sh
job.pl
job.conf

Let's say you want to run 16 jobs. Instead of submitting 16 different jobs, you can submit one job that uses the '-t' parameter together with the PBS_ARRAYID variable. You can read more about the '-t' parameter at http://docs.adaptivecomputing.com/torque/4-1-4/Content/topics/commands/qsub.htm

#PBS -t 0-15

The -t parameter sets the range of the PBS_ARRAYID variable. So setting it to

#PBS -t 0-4

will cause qsub to run the script 5 times, each time setting PBS_ARRAYID to a value from 0 to 4. As a result, the command

perl job.pl $PBS_ARRAYID

expands to:

perl job.pl 0 
perl job.pl 1
perl job.pl 2
perl job.pl 3
perl job.pl 4

array.sh (Example submission script)

This submission script changes to the directory the job was submitted from, submits 16 jobs, and reserves 2 processors and 1000 MB of RAM for each job.

It redirects stderr and stdout into one file, and emails the job owner on completion or abort.

For each job, it passes the value of PBS_ARRAYID to the job.pl script, which in this case ranges from 0 to 15.

#!/bin/bash
# ----------------QSUB Parameters----------------- #
#PBS -q default
#PBS -l nodes=1:ppn=2,mem=1000mb
#PBS -M youremail@illinois.edu
#PBS -m abe
#PBS -N array_of_perl_jobs
#PBS -t 0-15
#PBS -j oe
# ----------------Load Modules-------------------- #
module load perl/5.16.1
# ----------------Your Commands------------------- #
cd $PBS_O_WORKDIR
perl job.pl $PBS_ARRAYID
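
Once all three files are in place, the whole array is submitted with a single qsub call. Below is a minimal sketch of submitting and monitoring the array; the job ID shown is made up for illustration, and the exact ID format depends on your Torque version:

qsub array.sh    # returns one array job ID, e.g. 12345[].hostname
qstat -t         # -t expands the array so each member (12345[0], 12345[1], ...) is listed
qdel "12345[]"   # deletes the entire array if something goes wrong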

job.pl (Example Perl script)

#!/usr/bin/env perl
# This script echoes the job array element that has been passed in
# and looks up the matching experiment name in job.conf.

use strict;
use warnings;

my $argument = shift @ARGV;         # the PBS_ARRAYID value, starting at 0
my $experimentID = $argument + 1;   # job.conf lines are numbered from 1
my $experimentName = `head -n $experimentID job.conf | tail -n 1`;
chomp $experimentName;

print "This is job number $argument\n";
print "About to perform experimentID: $experimentID experimentName: $experimentName\n";

Effectively using the Job Array

You will need an additional script or configuration file to use PBS_ARRAYID effectively; otherwise you are simply passing an integer into your tool, which may not have much meaning on its own. Below is an example of a configuration file that specifies an experiment to run for job.pl. As the PBS_ARRAYID variable increments, the script is instructed to perform its action on the next experiment.
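
If your tool can take the experiment name directly, the same lookup can be done inside the submission script itself, with no Perl wrapper. A minimal sketch, assuming job.conf sits in the submission directory; 'your_tool' is a placeholder for the real command:

cd $PBS_O_WORKDIR
# job.conf lines are numbered from 1, PBS_ARRAYID starts at 0
EXPERIMENT=$(sed -n "$(($PBS_ARRAYID + 1))p" job.conf)
your_tool "$EXPERIMENT"   # placeholder command, not a real program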

Default vs Highthroughput Queue

The default queue only allows you to submit 80 jobs at a time, but those jobs have no walltime limit.

This queue is most appropriate for: long-running jobs, or job arrays of 80 members or fewer.

The high throughput queue allows you to submit 500 jobs at a time, but those jobs have a walltime limit.

This queue is most appropriate for: large numbers of jobs that each finish well within the walltime limit.
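
The queue is selected with the -q directive in the submission script. A minimal sketch; 'highthroughput' is an assumed queue name here, so run qstat -q to list the queue names actually configured on the cluster:

# Default queue: up to 80 jobs at a time, no walltime limit
#PBS -q default

# High throughput queue: up to 500 jobs at a time, walltime-limited
#PBS -q highthroughput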

Submitting Jobs Effectively

We frequently encounter users having difficulty submitting jobs in the right way to the right queue. Here are some of the common scenarios and our suggested resolutions.


Scenario A:

You have 600 jobs/experiments that run relatively quickly.

Solution: Use the high throughput queue and submit in chunks of up to 500 jobs; because each job finishes quickly, the walltime limit is not a problem.

Scenario B:

You have 600 jobs/experiments that take many hours to run.

Solution: You will not be able to use the high throughput queue due to the walltime limit. You will need to submit to the default queue and iterate over the jobs in chunks of 80.
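
One way to iterate in chunks is to pass a sub-range of the array to qsub's command-line -t option, which generally takes precedence over a #PBS -t line in the script, and to launch each chunk after the previous one drains. A minimal sketch for Scenario B:

# 600 jobs on the default queue, 80 at a time; each qsub call
# should wait until the previous chunk has finished.
qsub -t 0-79    array.sh
qsub -t 80-159  array.sh
qsub -t 160-239 array.sh
# ... continue through the final chunk:
qsub -t 560-599 array.sh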

job.conf (example configuration file)

experimentA
experimentB
experimentC
experimentD
experimentE
experimentF
experimentG
experimentH
experimentI
experimentJ
experimentK
experimentL
experimentM
experimentN
experimentO
experimentP
experimentQ
experimentR
experimentS
experimentT
experimentU
experimentV
experimentW
experimentX
experimentY
experimentZ