1. Introduction

On large-scale computers, many users must share available resources. Because of this, you cannot just log on to one of these systems, upload your programs, and start running them. Essentially, your programs (called batch jobs) have to "get in line" and wait their turn. And, there is more than one of these lines (called queues) from which to choose. Some queues have a higher priority than others (like the express checkout at the grocery store). The queues available to you are determined by the projects that you are involved with.

The jobs in the queues are managed and controlled by a batch queuing system; without one, users could overload systems, resulting in tremendous performance degradation. The queuing system will run your job as soon as it can while still honoring the following:

  • Meeting your resource requests
  • Not overloading systems
  • Running higher priority jobs first
  • Maximizing overall throughput

At AFRL, we use the PBS Professional queuing system. The PBS module should be loaded automatically for you at login, allowing you access to the PBS commands.

2. Anatomy of a Batch Script

A batch script is simply a small text file that can be created with a text editor such as vi or notepad. You may create your own from scratch or start with one of the sample batch scripts available in $SAMPLES_HOME. Although the specifics of a batch script will differ slightly from system to system, a basic set of components is always required, and a few others are simply good practice. The basic components of a batch script must appear in the following order:

  • Specify Your Shell
  • Required PBS Directives
  • The Execution Block

Important: Not all applications on Linux systems can read DOS-formatted text files. PBS does not handle ^M characters well, nor do some compilers. To avoid complications, please remember to convert all DOS-formatted ASCII text files with the dos2unix utility before use on any HPC system. Users are also cautioned against relying on ASCII transfer mode to strip these characters, as some file transfer tools do not perform this function.

2.1. Specify Your Shell

First of all, remember that your batch script is a script. It's a good idea to specify which shell your script is written in. Unless you specify otherwise, PBS will use your default login shell to run your script. To tell PBS which shell to use, start your script with a line similar to the following, where shell is one of bash, sh, ksh, csh, tcsh, or zsh:

#!/bin/shell
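
For example, a script written for bash would begin with:

#!/bin/bash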

2.2. Required PBS Directives

The next block of your script will tell PBS about the resources that your job needs by including PBS directives. These directives are actually a special form of comment, beginning with "#PBS". As you might suspect, the # character tells the shell to ignore the line, but PBS reads these directives and uses them to set various values. Important: All PBS directives MUST come before the first line of executable code in your script, otherwise they will be ignored.

Every script must include directives for the following:

  • The number of nodes and processes per node you are requesting
  • The maximum amount of time your job should run
  • Which queue you want your job to run in
  • Your Project ID (defaults to $ACCOUNT on AFRL DSRC systems)

PBS also provides additional optional directives. These are discussed in Optional PBS Directives, below.
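
For example, the required directives for a hypothetical 2-node job might look like the following (each directive is explained in the subsections below; the values shown are placeholders):

#PBS -l select=2:ncpus=128:mpiprocs=128
#PBS -l walltime=01:00:00
#PBS -q debug
#PBS -A Project_ID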

2.2.1. Number of Nodes and Processes Per Node

Before PBS can schedule your job, it needs to know how many nodes you want and how many processes you want to run on each of those nodes. In general, you would specify one process per core, but you might want more or fewer processes depending on the programming model you are using. See Example Scripts (below) for alternate use cases.

Both the number of nodes and processes per node are specified using the same directive as follows, where N1 is the number of nodes you are requesting and N2 is the number of processes per node:

#PBS -l select=N1:ncpus=NUM:mpiprocs=N2

NUM refers to the number of physical cores needed on each node and must be an integer between 1 and 128. An exception to this rule is the transfer queue, which uses the directive below:

#PBS -l select=1:ncpus=1

To request Warhawk visualization (1 GPU) nodes:

#PBS -l select=N1:ncpus=128:mpiprocs=N2:ngpus=1

To request Warhawk machine-learning automation [MLA] (2 GPU) nodes:

#PBS -l select=N1:ncpus=128:mpiprocs=N2:ngpus=2

To request Warhawk large-memory nodes:

#PBS -l select=N1:ncpus=128:mpiprocs=N2:bigmem=1

2.2.2. How Long to Run

Next, PBS needs to know how long your job will run. For this, you will have to make an estimate. There are three things to keep in mind.

  1. Your estimate is a limit. If your job hasn't completed within your estimate, it will be terminated.
  2. Your estimate will affect how long your job waits in the queue. In general, shorter jobs will run before longer jobs.
  3. Each queue has a maximum time limit. You cannot request more time than the queue allows.

To specify how long your job will run, include the following directive:

#PBS -l walltime=HHH:MM:SS
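
For example, to request a maximum run time of 12 hours and 30 minutes:

#PBS -l walltime=12:30:00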

2.2.3. Which Queue to Run In

Now, PBS needs to know which queue you want your job to run in. Your options here are determined by your project. Most users only have access to the debug, standard, and background queues. Other queues exist, but access to these queues is restricted to projects that have been granted special privileges due to urgency or importance, and they will not be discussed here. As their names suggest, the standard and debug queues should be used for normal day-to-day and debugging jobs. The background queue, however, is a bit special because although it has the lowest priority, jobs that run in this queue are not charged against your project allocation. Users may choose to run in the background queue for several reasons:

  1. You don't care how long it takes for your job to begin running.
  2. You are trying to conserve your allocation.
  3. You have used up your allocation.

To see the list of queues available on the system, use the show_queues command. To specify the queue you want your job to run in, include the following directive:

#PBS -q queue_name

Note: on the AFRL DSRC systems, the job submission process may change the chosen queue depending on the project ID.

2.2.4. Your Project ID

PBS now needs to know which project ID to charge for your job. You can use the show_usage command to find the projects that are available to you and their associated project IDs. In the show_usage output, project IDs appear in the column labeled "Subproject." Note: Users with access to multiple projects should remember that the project they specify may limit their choice of queues.

To specify the Project ID for your job, include the following directive:

#PBS -A Project_ID

Note: on the AFRL DSRC systems, the job submission process will assign the value $ACCOUNT for the project ID if not specified.

2.3. The Execution Block

Once the PBS directives have been supplied, the execution block may begin. This is the section of your script that contains the actual work to be done. A well-written execution block will generally contain the following stages (a brief skeleton follows the list):

  • Environment Setup - This might include setting environment variables, loading modules, creating directories, copying files, initializing data, etc. As the last step in this stage, you will generally cd to the directory that you want your script to execute in. Otherwise, your script would execute by default in your home directory. Most users use "cd $PBS_O_WORKDIR" to run the batch script from the directory where they type qsub to submit the job.
  • Compilation - You may need to compile your application if you don't already have a pre-compiled executable available.
  • Launching - Your application is launched with the mpiexec command for both Cray MPICH and HPE SGI MPT codes (see the example scripts below). For small, short-duration non-PBS interactive debugging sessions, use the mpirun command on login nodes.
  • Clean up - This usually includes archiving your results and removing temporary files and directories.
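
A minimal sketch of these stages is shown below (module, executable, and directory names are placeholders, and the process count must match your select directive; a complete, working example appears in Section 4):

# Environment Setup: load modules and move to the run directory
module load my_module                  # placeholder module name
cd $PBS_O_WORKDIR                      # run from the directory where qsub was typed
#
# Compilation (only if a pre-compiled executable is not available)
# make my_prog.exe
#
# Launching
mpiexec -n 256 ./my_prog.exe > my_prog.out
#
# Clean up: archive results and remove temporary files
cp my_prog.out ${ARCHIVE_HOME}/
rm -f *.tmp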

3. Submitting Your Job

Once your batch script is complete, you will need to submit it to PBS for execution using the qsub command. For example, if you have saved your script into a text file named run.pbs, you would type "qsub run.pbs".

Occasionally you may want to supply one or more directives directly on the qsub command line. Directives supplied in this way override the same directives if they are already included in your script. The syntax to supply directives on the command line is the same as within a script except that #PBS is not used. For example:

qsub -l walltime=HHH:MM:SS run.pbs

4. Simple Batch Script Example

The batch script below contains all of the required directives and common script components discussed above. This example starts 256 processes. Each Warhawk node has 128 cores, so 256 processes require 2 nodes. The job is submitted to the standard queue to run for at most 12 hours.

#!/bin/bash
## Required PBS Directives --------------------------------------
#PBS -A Project_ID
#PBS -q standard
#PBS -l select=2:ncpus=128:mpiprocs=128
#PBS -l walltime=12:00:00
#PBS -j oe
#
## Execution Block -----------------------------------------------
# Environment Setup 
JOBID=`echo ${PBS_JOBID} | cut -d '.' -f 1`
# change directory to job-specific directory within scratch
# directory in /p/work1 
cd ${JOBDIR}
#
# Launching  ----------------------------------------------------
# copy executable from $HOME and submit it
cp ${HOME}/my_prog.exe .
mpiexec -n 256 ./my_prog.exe > my_prog.out
#
# Clean up  -----------------------------------------------------
# archive your results
# Using the "here document" syntax, create a job script
# for archiving your data.
cd ${JOBDIR}
cat > archive_job.$$ <<END
#!/bin/bash
#PBS -l walltime=12:00:00
#PBS -q transfer
#PBS -A Project_ID
#PBS -l select=1:ncpus=1
#PBS -j oe
#PBS -S /bin/bash
cd ${JOBDIR}
mkdir ${ARCHIVE_HOME}/${JOBID}
cp -r ${JOBDIR} ${ARCHIVE_HOME}
ls -l ${ARCHIVE_HOME}/${JOBID}
END
# Submit the archive job script.
qsub archive_job.$$
rm archive_job.$$

5. Job Management Commands

The table below contains commands for managing your jobs in PBS.

Job Management Commands
Command       Description
cqstat        List running and pending jobs, including estimated start times**
pbsnodes      Display host status of all PBS batch nodes
qdel          Delete a job
qhist         Display a detailed history of a specific job
qhold         Place a job on hold
qpeek         Lets you peek at the stdout and stderr of your running job
qrls          Release a job from hold
qstat         Check the status of a job
qstat -q      Display the status of all PBS queues
qsub          Submit a job
qview         A more user-friendly version of qstat
show_queues   A more user-friendly version of "qstat -q"
tracejob      Display job accounting data from a completed job

**Estimated start times are only available for a small number of jobs and are subject to frequent change as new jobs are queued.
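
For example (the job ID 123456 is a placeholder):

qstat -u $USER     # list your own jobs
qdel 123456        # delete job 123456
qpeek 123456       # peek at the stdout/stderr of running job 123456
tracejob 123456    # accounting data for completed job 123456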

6. Optional PBS Directives

In addition to the required directives mentioned above, PBS has many other directives, but most users will only use a few of them. Some of the more useful optional directives are listed below.

6.1. Job Identification Directives

Job identification directives allow you to identify characteristics of your jobs. These directives are voluntary but strongly encouraged. The following table contains some useful job identification directives.

Job Identification Directives
Directive        Options            Description
-l application   application_name   Identify the application being used.
-N               job_name           Name your job.

6.1.1. Application Name

The "-l application" directive allows you to identify the application being used by your job. This helps the program to accurately assess application usage and to ensure that adequate software licenses and appropriate software are purchased. To use this directive, add a line in the following form to your batch script:

#PBS -l application=application_name
Or, to your qsub command:
qsub -l application=application_name

A list of application names for use with this directive can be found in $SAMPLES_HOME/Application_Name/application_names on each HPC system.

6.1.2. Job Name

The "-N" directive allows you to designate a name for your job. In addition to being easier to remember than a numeric job ID, the PBS environment variable, $PBS_JOBNAME, inherits this value and can be used instead of the job ID to create job-specific output directories. To use this directive, add a line in the following form to your batch script:

#PBS -N job_20
Or, to your qsub command:
qsub -N job_20...
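
For example, a script submitted with this directive can use $PBS_JOBNAME to create a job-specific output directory (a sketch; the directory layout is illustrative):

#PBS -N job_20
# later, in the execution block
mkdir -p ${WORKDIR}/${PBS_JOBNAME}
cd ${WORKDIR}/${PBS_JOBNAME}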

6.2. Job Environment Directives

Job environment directives allow you to control the environment in which your script will operate. The following table contains a few useful job environment directives.

Job Environment Directives
Directive   Options         Description
-I                          Request an interactive batch shell.
-V                          Export all environment variables to the job.
-v          variable_list   Export specific environment variables to the job.

6.2.1. Interactive Batch Shell

The "-I" directive allows you to request an interactive batch shell. Within that shell, you can perform normal Unix commands, including launching parallel jobs. To use "-I", append it to the end of your qsub request. For example, the qsub command below requests 2 nodes (total of 256 cores) for 1 hour.

qsub -A Project_ID -q debug -l select=2:ncpus=128:mpiprocs=128 -l walltime=01:00:00 -I
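
Once the interactive job starts, you are placed in a shell on the assigned resources and can run commands directly. For example, assuming an executable named my_prog.exe (a placeholder name) in your submission directory:

cd $PBS_O_WORKDIR
mpiexec -n 256 ./my_prog.exe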

6.2.2. Export All Variables

The "-V" directive tells PBS to export all of the environment variables from your login environment into your batch environment. To use this directive, add a line in the following form to your batch script:

#PBS -V
Or, to your qsub command:
qsub -V ...

6.2.3. Export Specific Variables

The "-v" directive tells PBS to export specific environment variables from your login environment into your batch environment. To use this directive, add a line in one of the following forms to your batch script:

#PBS -v my_variable
Or, to your qsub command:
qsub -v my_variable

Using either of these methods, multiple comma-separated variables can be included. It is also possible to set values for variables exported in this way, as follows:

qsub -v my_variable=my_value, ...
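
For example, exporting two hypothetical variables with values set at submission time:

qsub -v INPUT_FILE=run1.dat,NSTEPS=1000 run.pbs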

6.3. Reporting Directives

Reporting directives allow you to control what happens to standard output and standard error messages generated by your script. They also allow you to request e-mail notifications when your job begins and ends.

6.3.1. Redirecting Stdout and Stderr

By default, messages written to stdout and stderr are captured for you in files named x.ojob_id and x.ejob_id, respectively, where x is either the name of the script or the name specified with the "-N" directive, and job_id is the ID of the job. If you want to change this behavior, the "-o" and "-e" directives allow you to redirect stdout and stderr messages to different named files. The "-j" directive allows you to combine stdout and stderr into the same file.

Redirection Directives
Directive   Options     Description
-e          File name   Define the standard error file.
-o          File name   Define the standard output file.
-j          oe          Merge stderr and stdout into stdout.
-j          eo          Merge stderr and stdout into stderr.
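
For example, to send stdout and stderr to named files (the file names are illustrative):

#PBS -o my_job.out
#PBS -e my_job.err

Or, to merge both streams into the stdout file:

#PBS -j oe
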
6.3.2. Setting up E-mail Alerts

Many users want to be notified when their jobs begin and end. The "-m" directive makes this possible. If you use this directive, you will also need to supply the "-M" directive with one or more e-mail addresses to be used.

E-mail Directives
Directive   Options              Description
-m          b                    Send e-mail when the job begins.
-m          e                    Send e-mail when the job ends.
-M          E-mail address(es)   Set e-mail address(es) to be used.

For example:

#PBS -m be
#PBS -M joesmith@mail.mil,joe.smith@us.army.mil

6.4. Job Dependency Directives

Job dependency directives allow you to specify dependencies that your job may have on other jobs. This allows users to control the order in which jobs run. These directives will generally take the following form:

#PBS -W depend=dependency_expression

where dependency_expression is a comma-delimited list of one or more dependencies, and each dependency is of the form:

type:jobids

where type is one of the directives listed below, and jobids is a colon-delimited list of one or more job IDs that your job is dependent upon.

Job Dependency Directives
Directive     Description
after         Execute this job after listed jobs have begun.
afterok       Execute this job after listed jobs have terminated without error.
afternotok    Execute this job after listed jobs have terminated with an error.
afterany      Execute this job after listed jobs have terminated for any reason.
before        Listed jobs may be run after this job begins execution.
beforeok      Listed jobs may be run after this job terminates without error.
beforenotok   Listed jobs may be run after this job terminates with an error.
beforeany     Listed jobs may be run after this job terminates for any reason.

For example, run a job after completion (success or failure) of job ID 1234:

#PBS -W depend=afterany:1234

Or, run a job after successful completion of job ID 1234:

#PBS -W depend=afterok:1234
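
Dependencies can also be supplied at submission time. For example, the following sketch captures the first job's ID and submits a second job that runs only after the first completes successfully (the script names are placeholders):

JOB1=$(qsub pre_process.pbs)
qsub -W depend=afterok:${JOB1} main_run.pbs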

For more information about job dependencies, see the qsub man page.

7. Environment Variables

7.1. PBS Environment Variables

While there are many PBS environment variables, you only need to know a few important ones to get started using PBS. The table below lists the most important PBS environment variables and how you might generally use them.

Frequently Used PBS Environment Variables
PBS Variable     Description
$PBS_JOBID       Job identifier assigned to the job or job array by the batch system.
$PBS_O_WORKDIR   The absolute path of the directory where qsub was executed.
$PBS_JOBNAME     The job name supplied by the user.

The following additional PBS variables may be useful to some users.

Other PBS Environment Variables
PBS Variable       Description
$PBS_ARRAY_INDEX   Index number of a subjob in a job array.
$PBS_ENVIRONMENT   Indicates the job type: PBS_BATCH or PBS_INTERACTIVE.
$PBS_NODEFILE      Name of the file containing the list of vnodes assigned to the job.
$PBS_O_HOST        Host name on which the qsub command was executed.
$PBS_O_PATH        Value of PATH from the submission environment.
$PBS_O_SHELL       Value of SHELL from the submission environment.
$PBS_QUEUE         The name of the queue from which the job is executed.
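
As a brief illustration, these variables are often used inside a batch script as follows (a sketch only):

cd $PBS_O_WORKDIR                          # the directory where qsub was executed
echo "Job ${PBS_JOBID} (${PBS_JOBNAME}) is running in queue ${PBS_QUEUE}"
NP=$(wc -l < ${PBS_NODEFILE})              # number of entries in the assigned node file
echo "Node file contains ${NP} entries"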

7.2. Other Important Environment Variables

In addition to the PBS environment variables, the table below lists a few other variables which are not specifically associated with PBS. These variables are not generally required but may be important depending on your job.

Other Important Environment Variables
Variable              Description
$OMP_NUM_THREADS      The number of OpenMP threads per node (set to 1 by default on AFRL DSRC systems).
$BC_CORES_PER_NODE    The number of cores per node for the compute node on which a job is running.
$BC_MEM_PER_NODE      The approximate maximum user-accessible memory per node (in integer MBytes) for the compute node on which a job is running.
$BC_MPI_TASKS_ALLOC   The number of MPI tasks allocated for a job.
$BC_NODE_ALLOC        The number of nodes allocated for a job.
$WORKDIR              The user's directory within /p/work1.
$JOBDIR               Job-specific directory within $WORKDIR; immune from the scrubber until the job exits.
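
For example, these variables can be used to avoid hard-coding core and task counts in the launch line (a minimal sketch; my_prog.exe is a placeholder):

export OMP_NUM_THREADS=${BC_CORES_PER_NODE}
mpiexec -n ${BC_MPI_TASKS_ALLOC} ./my_prog.exe > my_prog.out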

8. Example Scripts

All of the script examples shown below contain a "Cleanup" section which demonstrates how to automatically archive your data using the transfer queue and clean up your $WORKDIR after your job completes. Using this method helps to avoid data loss and ensures that your allocation is not charged for idle cores while performing file transfer operations.

8.1. MPI Script

The following script is for a 256-core MPI job running for 20 hours in the standard queue. To run a 256-core job, we need 2 nodes with 128 cores each. The appropriate PrgEnv-* modulefile must be loaded for the proper run-time environment (modulefile PrgEnv-intel in this example).

#!/bin/bash
## Required Directives ------------------------------------
#PBS -l select=2:ncpus=128:mpiprocs=128
#PBS -l walltime=20:00:00
#PBS -q standard
#PBS -A Project_ID
#
## Optional Directives ------------------------------------
#PBS -N testjob
#PBS -j oe
#PBS -M my_email@mail.mil
#PBS -m be
#
## Execution Block ----------------------------------------
# Environment Setup
JOBID=`echo ${PBS_JOBID} | cut -d '.' -f 1`
#
# Changes to Intel Programming Environment
module swap PrgEnv-cray PrgEnv-intel
#
# change directory to job-specific directory within scratch
# directory in /p/work1 
cd ${JOBDIR}
#
# stage input data from $HOME
cp ${HOME}/my_data_dir/*.dat .
#
# copy the executable from $HOME
cp ${HOME}/my_prog.exe .
#
## Launching ----------------------------------------------
mpiexec -n 256 ./my_prog.exe > my_prog.out
#
## Cleanup ------------------------------------------------
# archive your results
# Using the "here document" syntax, create a job script
# for archiving your data.
cd ${JOBDIR}
cat > archive_job.$$ <<END
#!/bin/bash
#PBS -l walltime=12:00:00
#PBS -q transfer
#PBS -A Project_ID
#PBS -l select=1:ncpus=1
#PBS -j oe
#PBS -S /bin/bash
cd ${JOBDIR}
mkdir ${ARCHIVE_HOME}/${JOBID}
cp -r ${JOBDIR} ${ARCHIVE_HOME}
ls -l ${ARCHIVE_HOME}/${JOBID}
END
#
# Submit the archive job script.
qsub archive_job.$$
rm archive_job.$$

8.2. MPI Script for HPE SGI MPT

HPE SGI MPT is an alternative MPI implementation to Cray MPICH. The launching of a job using HPE SGI MPT is shown in the following script. The parallel executable must have been compiled and linked in the HPE SGI MPT environment. The appropriate compiler modulefile must be loaded for the proper run-time environment (modulefile intel in this example).

#!/bin/bash
## Required Directives ------------------------------------
#PBS -l select=2:ncpus=128:mpiprocs=128
#PBS -l walltime=20:00:00
#PBS -q standard
#PBS -A Project_ID
#
## Optional Directives ------------------------------------
#PBS -N testjob
#PBS -j oe
#PBS -M my_email@mail.mil
#PBS -m be
#
## Execution Block ----------------------------------------
# Environment Setup
JOBID=`echo ${PBS_JOBID} | cut -d '.' -f 1`
#
# Change Environment from Cray MPICH to HPE SGI MPT
module unload PrgEnv-cray
module load mpt
module load intel # Loads modulefile for Intel run-time environment
module swap cray-pals cray-pals # Resolves to appropriate mpiexec binary
#
# change directory to job-specific directory within scratch
# directory in /p/work1 
cd ${JOBDIR}
#
# stage input data from $HOME
cp ${HOME}/my_data_dir/*.dat .
#
# copy the executable from $HOME
cp ${HOME}/my_prog.exe .
#
## Launching ----------------------------------------------
mpiexec -n 256 ./my_prog.exe > my_prog.out
#
## Cleanup ------------------------------------------------
# archive your results
# Using the "here document" syntax, create a job script
# for archiving your data.
cd ${JOBDIR}
cat > archive_job.$$ <<END
#!/bin/bash
#PBS -l walltime=12:00:00
#PBS -q transfer
#PBS -A Project_ID
#PBS -l select=1:ncpus=1
#PBS -j oe
#PBS -S /bin/bash
cd ${JOBDIR}
mkdir ${ARCHIVE_HOME}/${JOBID}
cp -r ${JOBDIR} ${ARCHIVE_HOME}
ls -l ${ARCHIVE_HOME}/${JOBID}
END
#
# Submit the archive job script.
qsub archive_job.$$
rm archive_job.$$

8.3. MPI Script (accessing more memory per process)

By default, an MPI job runs one process per core, with all processes sharing the available memory on the node. If you need more memory per process, then your job needs to run fewer MPI processes per node.

The following script requests 4 nodes (512 cores) but launches only one MPI process per node. This starts 4 MPI processes, each with access to approximately 182 GBytes of memory. The job runs for 20 hours in the standard queue. The appropriate PrgEnv-* modulefile must be loaded for the proper run-time environment (modulefile PrgEnv-intel in this example).

The mpiexec option "-ppn NUM" (or the aprun option "-N NUM") specifies that NUM processes are launched on each compute node, overriding the mpiprocs value in the job request.

#!/bin/bash
## Required Directives ------------------------------------
#PBS -l select=4:ncpus=128:mpiprocs=1
#PBS -l walltime=20:00:00
#PBS -q standard
#PBS -A Project_ID
#
## Optional Directives ------------------------------------
#PBS -N testjob
#PBS -j oe
#PBS -m be
#
## Execution Block ----------------------------------------
# Environment Setup
JOBID=`echo ${PBS_JOBID} | cut -d '.' -f 1`
#
# Changes to Intel Programming Environment
module swap PrgEnv-cray PrgEnv-intel
#
# change directory to job-specific directory within scratch
# directory in /p/work1 
cd ${JOBDIR}
# stage input data from $HOME
cp ${HOME}/my_data_dir/*.dat .
#
# copy the executable from $HOME
cp ${HOME}/my_prog.exe .
#
## Launching ----------------------------------------------
mpiexec -n 4 ./my_prog.exe > my_prog.out
#
## Cleanup ------------------------------------------------
# archive your results
# Using the "here document" syntax, create a job script
# for archiving your data.
cd ${JOBDIR}
cat > archive_job.$$ <<END
#!/bin/bash
#PBS -l walltime=12:00:00
#PBS -q transfer
#PBS -A Project_ID
#PBS -l select=1:ncpus=1
#PBS -j oe
#PBS -S /bin/bash
cd ${JOBDIR}
mkdir ${ARCHIVE_HOME}/${JOBID}
cp -r ${JOBDIR} ${ARCHIVE_HOME}
ls -l ${ARCHIVE_HOME}/${JOBID}
END
#
# Submit the archive job script.
qsub archive_job.$$
rm archive_job.$$

8.4. OpenMP Script

The following script is for an OpenMP job using one thread per core on a single node and running for 20 hours in the standard queue. Note the use of the $BC_CORES_PER_NODE environment variable to set the number of threads. To start fewer than 128 threads, replace $BC_CORES_PER_NODE with a lower value.

#!/bin/bash
## Required Directives ------------------------------------
#PBS -l select=1:ncpus=128:mpiprocs=128
#PBS -l walltime=20:00:00
#PBS -q standard
#PBS -A Project_ID
#
## Optional Directives ------------------------------------
#PBS -N testjob
#PBS -j oe
#PBS -m be
#
## Execution Block ----------------------------------------
# Environment Setup
JOBID=`echo ${PBS_JOBID} | cut -d '.' -f 1`
#
# Changes to Intel Programming Environment
module swap PrgEnv-cray PrgEnv-intel
#

# change directory to job-specific directory within scratch
# directory in /p/work1 
cd ${JOBDIR}
#
# stage input data from $HOME
cp ${HOME}/my_data_dir/*.dat .
#
# copy the executable from $HOME
cp ${HOME}/my_prog.exe .
#
## Launching ----------------------------------------------
export OMP_NUM_THREADS=${BC_CORES_PER_NODE}
./my_prog.exe > my_prog.out
#
## Cleanup ------------------------------------------------
# archive your results
# Using the "here document" syntax, create a job script
# for archiving your data.
cd ${JOBDIR}
cat > archive_job.$$ <<END
#!/bin/bash
#PBS -l walltime=12:00:00
#PBS -q transfer
#PBS -A Project_ID
#PBS -l select=1:ncpus=1
#PBS -j oe
#PBS -S /bin/bash
cd ${JOBDIR}
mkdir ${ARCHIVE_HOME}/${JOBID}
cp -r ${JOBDIR} ${ARCHIVE_HOME}
ls -l ${ARCHIVE_HOME}/${JOBID}
END
#
# Submit the archive job script.
qsub archive_job.$$
rm archive_job.$$

8.5. Hybrid MPI/OpenMP Script

The following script uses 2 nodes (256 cores), placing one MPI process on each node with 128 OpenMP threads per process. Note the use of the $BC_CORES_PER_NODE environment variable to set the thread count. The number of MPI processes per node is set by the mpiprocs value in the select statement, and the total process count given to mpiexec (2 in this case) is the number of nodes times the processes per node. The appropriate PrgEnv-* modulefile must be loaded for the proper run-time environment (modulefile PrgEnv-intel in this example).

#!/bin/bash
## Required Directives ------------------------------------
#PBS -l select=2:ncpus=128:mpiprocs=1
#PBS -l walltime=20:00:00
#PBS -q standard
#PBS -A Project_ID
#	
## Optional Directives ------------------------------------
#PBS -N testjob
#PBS -j oe
#PBS -m be
#
## Execution Block ----------------------------------------
# Environment Setup
JOBID=`echo ${PBS_JOBID} | cut -d '.' -f 1`
#
# Changes to Intel Programming Environment
module swap PrgEnv-cray PrgEnv-intel
#
# change directory to job-specific directory within scratch
# directory in /p/work1 
cd ${JOBDIR}
#
# stage input data from $HOME
cp ${HOME}/my_data_dir/*.dat .
#
# copy the executable from $HOME
cp ${HOME}/my_prog.exe .
#
## Launching ----------------------------------------------
export OMP_NUM_THREADS=${BC_CORES_PER_NODE}
mpiexec -n 2 ./my_prog.exe > my_prog.out
#
## Cleanup ------------------------------------------------
# archive your results
# Using the "here document" syntax, create a job script
# for archiving your data.
cd ${JOBDIR}
cat > archive_job.$$ <<END
#!/bin/bash
#PBS -l walltime=12:00:00
#PBS -q transfer
#PBS -A Project_ID
#PBS -l select=1:ncpus=1
#PBS -j oe
#PBS -S /bin/bash
#
cd ${JOBDIR}
mkdir ${ARCHIVE_HOME}/${JOBID}
cp -r ${JOBDIR} ${ARCHIVE_HOME}
ls -l ${ARCHIVE_HOME}/${JOBID}
END
# Submit the archive job script.
qsub archive_job.$$
rm archive_job.$$