Biocluster
Quick Links
- Main Site - http://biocluster.igb.illinois.edu
- Galaxy Interface - https://galaxy.igb.illinois.edu
- WebBlast Interface - https://biocluster.igb.illinois.edu/webblast/
- Cluster Accounting - https://biocluster.igb.illinois.edu/accounting/
- Cluster Monitoring - http://biocluster.igb.illinois.edu/ganglia/
Cluster Specifications
Default Queue
- 25 Nodes
- Dell PowerEdge 1950
- 8 2.83GHz E5540 Intel Xeon CPUs per Node
- 16 Gigabytes of RAM per Node
Computation Queue
- 11 Nodes
- Dell R410 Servers
- 8 2.4GHz E5530 Intel Xeon CPUs per Node
- 24 Gigabytes of RAM per Node
Large Memory Queue
- 2 Nodes
- Node 1 - Dell R900
- 16 2.4GHz E7440 Intel Xeon CPUs
- 256 Gigabytes of RAM
- Node 2 - Dell R910
- 24 2.0GHz E7540 Intel Xeon CPUs
- 1024 Gigabytes (1TB) of RAM
Usage Cost
Usage is charged by the second. The CPU cost and memory cost are compared, and the larger of the two is what is billed.
Queue Name | CPU Cost ($ per CPU per day) | Memory Cost ($ per GB per day) |
default | $1.37 | $0.46 |
classroom | $1.00 | $0.50 |
largememory | $11.42 | $0.27 |
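Because the larger of the two figures is billed, you can estimate a job's cost directly from the table. For example, a one-day job on the default queue reserving 4 CPUs and 20 GB of RAM would be billed max(4 × $1.37, 20 × $0.46) = max($5.48, $9.20) = $9.20.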
How To Get Biocluster/Galaxy Access
- Please fill out the form at http://www.igb.illinois.edu/content/galaxy-and-biocluster-account-form to request access to Biocluster and the Galaxy interface.
Cluster Rules
- Running jobs on the head node is strictly prohibited. Running jobs on the head node could cause the entire cluster to crash and affect everyone's jobs on the cluster. Any program found running on the head node will be stopped immediately, and your account could be locked. You can start an interactive session to log in to a node and run programs manually.
- Installing Software: Please email help@igb.illinois.edu with any software requests. Compiled software will be installed in /home/apps. If it is a standard RedHat package (rpm), it will be installed in its default location on the nodes.
- Creating or Moving over Programs: Programs you create or move to the cluster should first be tested by you outside the cluster for stability. Once your program is stable, it can be moved over to the cluster for use. Unstable programs that cause problems with the cluster can result in your account being locked. Programs should only be added by CNRG personnel and not compiled in your home directory.
- Designating Memory Usage of a Job: TORQUE allows the user to specify the amount of memory they want their program to use. The memory cap on the Computation Queue is XXX Gigabytes, and the memory cap on the Large Memory Queue is XXX Gigabytes. For example, if you would like your script temp.sh to use 128 Gigabytes of memory on the Large Memory Queue, you would execute the following command:
qsub -q largememory -l mem=128gb temp.sh
- Slots and Process Forking: For each process you create in a program you run through TORQUE, you must reserve a slot in TORQUE. Normally this is done automatically (one slot for the one process you are submitting), but if you are spawning (also called forking) other processes from the process you submitted, you must reserve one slot per process created. Normally this is done as a serial process rather than a parallel process. For example, if you would like to reserve 8 or more processing slots for a qsub script called temp.sh in the computation queue, you would execute the following command:
qsub -l nodes=1:ppn=8 -q computation temp.sh
If you fork/spawn more than 8 processes but run them serially so that no more than 8 are active at a time, you only need to request 8 slots. Failure to comply with this policy will result in your account being locked.
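As a sketch of a compliant forking job (my_worker.sh and the script name fork_example.sh are hypothetical placeholders), the following script reserves 8 slots, forks 8 worker processes, and waits for all of them to finish:
#!/bin/bash
#PBS -S /bin/bash
#PBS -l nodes=1:ppn=8
#PBS -q computation
# Fork one worker per reserved slot; my_worker.sh is a placeholder program.
for i in 1 2 3 4 5 6 7 8; do
    ./my_worker.sh $i &
done
# Wait for every forked worker to finish before the job exits.
wait
You would submit it with qsub fork_example.sh; since the queue is set inside the script, no -q flag is needed on the command line.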
How To Log Into The Cluster
- You will need to use an SSH client to connect.
On Windows
- You can download PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
- Install PuTTY and run it. In the Host Name input box, enter biocluster.igb.illinois.edu
- Hit Open and log in using your IGB account credentials.
On Mac OS X
- Simply open the terminal under Go >> Utilities >> Terminal
- Type in ssh username@biocluster.igb.illinois.edu where username is your NetID.
- Hit the Enter key and type in your IGB password.
How To Submit A Cluster Job
- The cluster runs the TORQUE queuing and resource management program.
- All jobs are submitted to TORQUE, which distributes them automatically to the nodes.
Create a Job Script
- You must first create a TORQUE job script file in order to tell TORQUE how and what to execute on the nodes.
- Type the following into a text editor and save the file as test.sh (see Linux text editing):
#!/bin/bash
#PBS -j oe
#PBS -S /bin/bash
sleep 20
echo "Test Script"
- You just created a simple PBS TORQUE Job Script.
- Line by line explanation
- #!/bin/bash - tells Linux this is a bash script and it should use the bash interpreter to execute it.
- #PBS - lines are PBS parameters; for explanations of these, see the TORQUE PBS Parameter Explanations section below.
- sleep 20 - sleep for 20 seconds (only used to simulate processing time for this example).
- echo "Test Script" - output some text when the job completes (simulates output for this example).
- For example, if you would like to run a blast job, you may simply replace the last two lines with the following:
module load blast
blastall -p blastn -d nt -i input.fasta -e 10 -o output.result -v 10 -b 5 -a 5
- Note: the module commands are explained under the Environment Modules section.
TORQUE PBS Parameter Explanations
- These are just a few parameter options; for more, type man qsub while logged into the cluster.
- #PBS -d /tmp/working_dir tells Torque to run the script from the /tmp/working_dir directory. This defaults to your home directory (/share/home/username). This could be problematic if the path on the head node is different than the path on the slave node.
- #PBS -j oe parameter tells Torque to join the errors and output streams together into one file. This file will be created in the working directory and will be named in this case test.sh.o# where # is the job number assigned by Torque.
- #PBS -S /bin/bash parameter tells Torque that the program will be using bash for its interpreter. Required.
- #PBS -N testJob3 parameter tells Torque to name the job testJob3.
- #PBS -M username@igb.illinois.edu parameter tells Torque to send an e-mail to username@igb.illinois.edu when the job is done.
- #PBS -m abe parameter tells Torque to send an e-mail to the address defined with the -M parameter above when a job is aborted, begins, or ends.
- #PBS -l nodes=1:ppn=5 parameter tells Torque to reserve 5 processors on a node for this job. This is only allowed if your program can actually use multiple processors natively; otherwise this is considered cluster abuse. (In the blast example above, blastall has the parameter -a 5, which tells blast to run using 5 processors; in that case telling Torque to reserve 5 processors is justified!)
- #PBS -l mem=1024mb parameter tells Torque to reserve 1024 Megabytes of RAM for the job.
- #PBS -q classroom parameter tells Torque to run the job on the classroom queue.
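Putting several of these parameters together, a complete job script might look like the following sketch (the job name and e-mail address are placeholders; the blast command is the example from above):
#!/bin/bash
#PBS -S /bin/bash
#PBS -j oe
#PBS -N testJob3
#PBS -M username@igb.illinois.edu
#PBS -m abe
#PBS -l nodes=1:ppn=5
#PBS -l mem=1024mb
#PBS -q classroom
# Load the blast environment, then run blast with 5 processors to match ppn=5.
module load blast
blastall -p blastn -d nt -i input.fasta -e 10 -o output.result -v 10 -b 5 -a 5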
Submit Serial Job
- To submit a serial job you will use the qsub program. For example, to submit the test.sh TORQUE job you would type:
qsub test.sh
- You may also define the TORQUE parameters from the section above as qsub parameters instead of defining them in the script file. Example:
qsub -j oe -S /bin/bash test.sh
Submit Parallel Job
- To submit the parallel job you will use the qsub program.
- For more information please refer to this page Submitting TORQUE Jobs
- To distribute the jobs evenly across the reserved nodes you will have to use orte_rr instead of the orte parallel environment. orte_rr distributes the MPI jobs to each node in round-robin order, while the default orte fills all slots on a node before moving to the next one.
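As a hedged sketch of an MPI submission under TORQUE (my_mpi_program and mpi_job.sh are hypothetical placeholders, and the exact mpirun options depend on which MPI implementation is installed), you would reserve slots across nodes and launch the ranks with mpirun:
qsub -l nodes=2:ppn=4 -q computation mpi_job.sh
where mpi_job.sh contains something like:
#!/bin/bash
#PBS -S /bin/bash
# Launch 8 ranks (2 nodes x 4 processors); $PBS_NODEFILE lists the reserved nodes.
mpirun -np 8 -machinefile $PBS_NODEFILE ./my_mpi_program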
Start An Interactive Login Session On A Compute Node
- Use interactive qsub if you would like to run a job interactively, such as running a quick perl script or a quick test on your data (see the example after this list).
qsub -I
- This will automatically reserve you a slot on one of the compute nodes and will start a terminal session on it.
- Closing your terminal window will also kill any processes running in your interactive qsub session; therefore it is better to submit large jobs via non-interactive qsub.
- To run an application with a user interface run
qsub -I -X
- For this to work you will need to set up an X server on your computer: Xserver Setup
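For example, once qsub -I gives you a prompt on a compute node, you can run a quick script directly (the script name here is a hypothetical placeholder):
perl my_quick_script.pl
When you are finished, type exit to end the session and release the reserved slot.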
View/Delete Submitted Jobs
List Queues
qstat -q
List All Jobs on Cluster With Nodes
qstat -a -n
Viewing Job Status
- To get a simple view of your current running jobs you may type:
qstat
- This command brings up a list of your currently running jobs (a sample listing is shown at the end of this section).
- The first number represents the job's ID number.
- Jobs may have different status flags:
- R = job is currently running
- W = job is waiting to be submitted (this may take a few seconds even when there are slots available so be patient)
- Eqw = There was an error running the job.
- S = Job is suspended (job overused the resources subscribed to it in the qsub command)
- For a more detailed view, type:
qstat -f
- This will return a list of all nodes, their slot availability, and your current jobs.
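For reference, a plain qstat listing looks roughly like the following (the job ID, name, and user shown are illustrative):
Job ID            Name      User      Time Use  S  Queue
----------------  --------  --------  --------  -  -------
5523.biocluster   test.sh   username  00:00:05  R  default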
Deleting Jobs
- Note: You can only delete jobs which are owned by you.
- To delete a job by job-ID number:
- You will need to use qdel, for example to delete a job with ID number 5523 you would type:
qdel 5523
Troubleshooting job errors
- To view job errors in case the job status shows Eqw or any other error in the status column, use qstat -j. For example, if job 23451 failed you would type:
qstat -j 23451
Applications
Compute node paths
- Installed programs folder path:
/home/apps/
Environment Modules
- To automatically load the proper environment for some programs you may use the module command
- To list all available environments run module avail (please e-mail help@igb.illinois.edu for special requests):
bash-4.1$ module avail
- The current list of available environments is posted at http://biocluster.igb.illinois.edu/apps.txt
- To load a particular environment, for example qiime/1.5.0, simply run this command:
module load qiime/1.5.0
- If you would like to simply load the latest version, run the command without the /1.5.0 version suffix:
module load qiime
- To view which environments you have loaded simply run module list:
bash-4.1$ module list
Currently Loaded Modulefiles:
  1) qiime
- When submitting a job using a qsub script, you will have to add the module load qiime/1.5.0 line before running qiime in the script (see the example script after this list).
- To unload a module simply run module unload:
module unload qiime
- To unload all modules:
module purge
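As a sketch of the point above about qsub scripts (the qiime command and its input files are hypothetical placeholders), a job script that uses an environment module might look like this:
#!/bin/bash
#PBS -S /bin/bash
#PBS -j oe
# Load the qiime environment before calling any qiime tools.
module load qiime/1.5.0
# The command below is a placeholder; substitute your actual qiime invocation.
pick_otus.py -i seqs.fna -o otus/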
Transferring data files
Transferring from personal machine
- In order to transfer files to the cluster from a personal Desktop/Laptop you may use WinSCP the same way you would use it to transfer files to the File Server.
Transferring from file server (Very Fast)
- To transfer files to the cluster from the file-server you will need to first set up Xserver on your machine. Please follow this guide to do so: Xserver Setup.
- Once Xserver is set up on your personal machine you will need to SSH into the cluster using PuTTY as mentioned above.
- Then start gFTP by typing in the terminal:
gftp
- This will launch a graphical interface for gFTP on your computer.
- Enter the following into the gFTP user interface:
- Host: file-server.igb.illinois.edu
- Port: leave this box blank
- User: Your IGB username
- Pass: Your IGB password
- Select SSH2 from the drop down menu
- Hit enter and you should now be connected to the file-server from the cluster.
- You may now select files and folders from your home directories and click the arrows pointing in each direction to transfer files both ways.
- Please move files back to the file-server or your personal machine once you are done working with them on the cluster in order to allow others to use the space on the cluster for their jobs.
- Note: You may also use the standard command line tool "sftp" to transfer files if you do not want to use gFTP.
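For example, from a terminal on the cluster you could connect with sftp and move files in either direction (the file names are placeholders):
sftp username@file-server.igb.illinois.edu
get my_data.fasta
put results.tar.gz
exit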
Disk Usage
- Currently there are no disk usage specifications.
- If you have special interests or recommendations, please let us know at help@igb.illinois.edu