1. Introduction

1.1. Document Scope and Assumptions

This document provides an overview and introduction to the use of the Aspen Systems Intel cluster (Talon) located at the AFRL DSRC, along with a description of the specific computing environment on Talon. The intent of this guide is to provide information that enables the average user to perform computational tasks on the system. To receive the most benefit from the information provided here, you should be proficient in the following areas:

  • Use of the UNIX operating system
  • Use of an editor (e.g., vi or emacs)
  • Remote usage of computer systems via network or modem access
  • A selected programming language and its related tools and libraries

1.2. Policies to Review

Users are expected to be aware of the following policies for working on Talon.

1.3. Obtaining Accounts

Authorized DoD and contractor personnel may request an account on Talon by submitting a proposal to the AFRL DSRC via email to sp-proposal@helpdesk.hpc.mil. The proposal should include the following information:

  • HPC experience and level of required support
  • Project suitability for a Shared Memory system
  • Project contribution to the DoD mission and/or HPC technical advancement
  • Proposed workload

Direct any questions regarding this non-allocated system to sp-proposal@helpdesk.hpc.mil.

1.4. Requesting Assistance

The HPC Help Desk is available to help users with unclassified problems, issues, or questions. Analysts are on duty 8:00 a.m. - 8:00 p.m. Eastern, Monday - Friday (excluding Federal holidays).

For more detailed contact information, please see our Contact Page.

2. System Configuration

2.1. System Summary

Talon is an Aspen Systems Intel system. The project and compute nodes are populated with Intel x86 processors. Talon uses FDR InfiniBand as its high-speed network for MPI messages and I/O traffic. Talon uses the Panasas file system to manage the parallel file system that targets its storage arrays. Talon has two project nodes (i.e., login or transfer nodes) that share memory only on the node. Each project node has two 8-core processors (16 cores) with its own Red Hat operating system, sharing 128 GB of 1333-MHz DDR3 memory, with no user-accessible swap space. Talon has 30 compute nodes that share memory only on the node; memory is not shared across the nodes. Each compute node has two 8-core processors (16 cores) with its own Red Hat operating system, sharing 128 GB of DDR3 memory, with no user-accessible swap space. Talon is rated at 13.5 peak TFLOPS and has 131.84 TB (formatted) of disk storage.

Talon is intended to be used as a project and application development and experimentation system. Access to and use of Talon's project nodes and other resources are assigned to specific project users to run code, scripts, databases, and user interfaces that drive the processing submitted to Talon or other DSRC HPC batch systems. Job executions that require large amounts of system resources should be sent to the compute nodes or other HPC systems by batch job submission. Talon nodes can also be reconfigured to experiment with unique operating systems, software, or hardware that is not readily available on standard HPCMP systems. All assigned projects essentially share the system resources but may have dedicated resources for specific purposes or events as necessary to a project's needs. Usage and conflicts are monitored by the DSRC, and any conflicts or priorities are arbitrated by DSRC management.

Node Configuration
                      | Login                   | Project (dedicated)     | Standard                | GPU                     | Phi
Total Nodes           | 1                       | 5                       | 6                       | 1                       | 1
Processor             | Intel E5-2640v3 Haswell | Intel E5-2640v3 Haswell | Intel E5-2670v3 Haswell | Intel E5-2670v3 Haswell | Intel E5-2670v3 Haswell
Processor Speed       | 2.6 GHz                 | 2.6 GHz                 | 2.6 GHz                 | 2.3 GHz                 | 2.3 GHz
Sockets / Node        | 2                       | 2                       | 2                       | 2                       | 2
Cores / Node          | 16                      | 16                      | 24                      | 24                      | 24
Total CPU Cores       | 16                      | 80                      | 144                     | 24                      | 24
Useable Memory / Node | 125 GB                  | 125 GB                  | 125 GB                  | 125 GB                  | 125 GB
Accelerators / Node   | 1                       | None                    | None                    | 2                       | 2
Accelerator           | NVIDIA Grid K1 PCIe     | n/a                     | n/a                     | NVIDIA K80 PCIe         | Intel 7120P
Memory / Accelerator  | 16 GB                   | n/a                     | n/a                     | 48 GB                   | 32 GB
Storage on Node       | None                    | None                    | None                    | None                    | None
Interconnect          | 4x FDR InfiniBand       | 4x FDR InfiniBand       | 4x FDR InfiniBand       | 4x FDR InfiniBand       | 4x FDR InfiniBand

File Systems on Talon
Path                  | Formatted Capacity                        | File System Type | Storage Type | User Quota | Minimum File Retention
/home ($HOME)         | 131.84 TB (shared across both partitions) | Panasas          | HDD          | None       | None
/workspace ($WORKDIR) | 131.84 TB (shared across both partitions) | Panasas          | HDD          | None       | None
/p/cwfs ($CENTER)     | 3.3 PB                                    | GPFS             | HDD          | 100 TB     | 120 Days

2.2. Processor

Talon uses 2.6-GHz Intel Haswell (E5-2640v3) processors on its project and compute nodes. There are 2 processors per node, each with 8 cores, for a total of 16 cores per node. In addition, these processors have 20 MB of shared cache.

2.3. Memory

Talon uses both shared and distributed memory models. Memory is shared among all the cores on a node, but is not shared among the nodes across the cluster.

Each project node contains 128 GB of main memory. All memory and cores on a project node are shared among all users who are logged in and the processes they run on that node.

Each compute node contains 125 GB of user-accessible shared memory. When running under the batch scheduling system, a job has exclusive access to the full 125 GB of compute node memory while it executes.

2.4. Operating System

The operating system on Talon is Red Hat Enterprise Linux.

2.5. Peak Performance

Talon is rated at 13.5 peak TFLOPS.

3. Accessing the System

3.1. Kerberos

A Kerberos client kit must be installed on your desktop to enable you to get a Kerberos ticket. Kerberos is a network authentication tool that provides secure communication by using secret cryptographic keys. Only users with a valid HPCMP Kerberos authentication can gain access to Talon. More information about installing Kerberos clients on your desktop can be found at HPC Centers: Kerberos & Authentication.
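
Once the client kit is installed, a ticket is typically obtained and verified with the standard Kerberos utilities. The following is a minimal sketch; the realm name shown is an assumption and should be confirmed in the HPC Centers Kerberos documentation:

% kinit user@HPCMP.HPC.MIL   # request a Kerberos ticket (realm name is an assumption; prompts for your credentials)
% klist                      # verify the ticket was granted and view its expiration time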

3.2. Logging In

3.2.1. Kerberized SSH

Once you have a valid Kerberos ticket, log in to Talon using Kerberized SSH, as in the following example:

% ssh user@talon01.afrl.hpc.mil

3.3. File Transfers

File transfers to DSRC systems (except transfers to the local archive system) must be performed using Kerberized versions of the following tools: scp, ftp, sftp, and mpscp.
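
For example, a Kerberized scp copy of a local file to your home directory on Talon might look like the following sketch (the file name is a placeholder):

% scp results.tar user@talon01.afrl.hpc.mil:~/   # copy results.tar to your $HOME directory on Talon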

4. User Environment

4.1. Shells

The following shells are available on Talon: csh, bash, ksh, tcsh, zsh, and sh.

4.2. Environment Variables

A number of environment variables are provided by default on all HPCMP high performance computing (HPC) systems. We encourage you to use these variables in your scripts where possible. Doing so will help to simplify your scripts and reduce portability issues if you ever need to run those scripts on other systems.

4.2.1. Login Environment Variables
Common Environment Variables
Variable Description
$ARCHIVE_HOME Your directory on the archive server.
$ARCHIVE_HOST The host name of the archive server.
$BC_HOST The generic (not node specific) name of the system.
$CC The currently selected C compiler. This variable is automatically updated when a new compiler environment is loaded.
$CENTER Your directory on the Center-Wide File System (CWFS).
$COST_HOME This variable contains the path to the base directory of the default installation of the Common Open Source Tools (COST) installed on a particular compute platform. (See BC policy FY13-01 for COST details.)
$CSI_HOME The directory containing the following list of heavily used application packages: ABAQUS, Accelrys, ANSYS, CFD++, Cobalt, EnSight, Fluent, GASP, Gaussian, LS-DYNA, MATLAB, and TotalView, formerly known as the Consolidated Software Initiative (CSI) list. Other application software may also be installed here by our staff.
$CXX The currently selected C++ compiler. This variable is automatically updated when a new compiler environment is loaded.
$DAAC_HOME The directory containing DAAC-supported visualization tools: ParaView, VisIt, and EnSight.
$F77 The currently selected Fortran 77 compiler. This variable is automatically updated when a new compiler environment is loaded.
$F90 The currently selected Fortran 90 compiler. This variable is automatically updated when a new compiler environment is loaded.
$HOME Your home directory on the system.
$JAVA_HOME The directory containing the default installation of JAVA.
$KRB5_HOME The directory containing the Kerberos utilities.
$PET_HOME The directory containing the tools formerly installed and maintained by the PET staff. This variable is deprecated and will be removed from the system in the future. Certain tools will be migrated to $COST_HOME, as appropriate.
$PROJECTS_HOME A common directory where group-owned and supported applications and codes may be maintained for use by members of a group. Any project may request a group directory under $PROJECTS_HOME.
$SAMPLES_HOME The Sample Code Repository. This is a collection of sample scripts and codes provided and maintained by our staff to help users learn to write their own scripts. There are a number of ready-to-use scripts for a variety of applications.
$WORKDIR Your work directory on the local temporary file system (i.e., local high-speed disk).
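
As an illustration, a script can use these variables instead of hard-coded paths so that it ports cleanly between HPCMP systems. The following sketch assumes hypothetical input, output, and executable names:

#!/bin/bash
# Sketch only: stage input from $HOME, run in the high-speed work file system,
# and copy results to the Center-Wide File System. File and program names are hypothetical.
cd $WORKDIR
cp $HOME/project/input.dat .
./my_solver input.dat > output.log
cp output.log $CENTER/project/
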
4.2.2. Batch-Only Environment Variables

In addition to the variables listed above, the following variables are automatically set only in your batch environment. That is, your batch scripts will be able to see them when they run. These variables are supplied for your convenience and are intended for use inside your batch scripts.

Batch-Only Environment Variables
Variable Description
$BC_CORES_PER_NODE The number of cores per node for the compute node on which a job is running.
$BC_MEM_PER_NODE The approximate maximum user-accessible memory per node (in integer MB) for the compute node on which a job is running.
$BC_MPI_TASKS_ALLOC The number of MPI tasks allocated for a job.
$BC_NODE_ALLOC The number of nodes allocated for a job.
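
For example, a batch script can size its MPI launch from these variables rather than hard-coding node or core counts. This is a sketch that assumes an mpirun-style launcher and a hypothetical executable:

echo "Allocated $BC_NODE_ALLOC nodes with $BC_CORES_PER_NODE cores each"
mpirun -np $BC_MPI_TASKS_ALLOC ./my_mpi_app   # launch one MPI task per allocated core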

5. Program Development

5.1. Message Passing Interface (MPI)

MPI establishes a practical, portable, efficient, and flexible standard for message passing that makes use of the most attractive features of a number of existing message-passing systems, rather than selecting one of them and adopting it as the standard. See "man mpi" for additional information.

A copy of the MPI 2.2 Standard, in PDF format, can be found on the MPI Forum website at https://www.mpi-forum.org/docs/.
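
As a sketch of typical usage (the wrapper and launcher names assume a conventional MPI installation; consult "man mpi" or the available modules on Talon for the exact commands), an MPI code is compiled and run as follows:

% mpicc -o hello_mpi hello_mpi.c   # compile a C MPI program with the MPI compiler wrapper
% mpirun -np 16 ./hello_mpi        # run 16 MPI tasks (one per core on a 16-core node)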

6. Batch Scheduling

6.1. Scheduler

The Maui/TORQUE scheduling system is currently running on Talon. It schedules jobs, manages resources and job queues, and can be accessed through the interactive batch environment or by submitting a batch request. Maui/TORQUE is able to manage both single-processor and multiprocessor jobs.
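
The sketch below shows a minimal TORQUE batch script and its submission with qsub. The resource requests, queue name, and executable are illustrative only; check "man qsub" on Talon for the exact options:

#!/bin/bash
#PBS -N example_job        # job name
#PBS -l nodes=1:ppn=16     # request one node with 16 cores
#PBS -l walltime=04:00:00  # maximum wall clock time
#PBS -q standard           # queue name (see Section 6.2)
cd $WORKDIR                # run from the high-speed work file system
mpirun -np $BC_MPI_TASKS_ALLOC ./my_mpi_app > run.log

% qsub example_job.pbs     # submit the script to the scheduler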

6.2. Queue Information

The following table describes the Maui/TORQUE queues available on Talon:

Queue Descriptions and Limits on Talon (queues listed from highest to lowest priority)
Queue Name | Max Wall Clock Time | Max Cores Per Job | Description
debug      | 1 Hour              | N/A               | User testing
urgent     | N/A                 | N/A               | Jobs belonging to DoD HPCMP Urgent Projects
frontier   | 168 Hours           | N/A               | Jobs belonging to DoD HPCMP Frontier Projects
high       | N/A                 | N/A               | Jobs belonging to DoD HPCMP High Priority Projects
standard   | 96 Hours            | N/A               | Standard jobs
background | 24 Hours            | N/A               | Unrestricted access - no allocation charge
transfer   | 24 Hours            | 1                 | Data transfer jobs

6.3. Interactive Logins

When you log in to Talon, you will be running in an interactive shell on a project node. The project nodes provide login access for Talon and support such activities as compiling, editing, and general interactive use by all users. Users and projects are assigned a specific project node to support login and development as well as to host long-running processes, services, or scripts for the project application. The preferred method for running resource-intensive executions is an interactive batch session.
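
As a sketch, an interactive batch session on a compute node is typically requested with qsub -I; the resource and queue options below are illustrative:

% qsub -I -l nodes=1:ppn=16 -l walltime=01:00:00 -q debug   # open an interactive shell on a compute node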

6.4. Advance Reservations

The Advance Reservation Service (ARS) is not available on Talon; all projects share the assigned or shared batch queues. The Talon compute cluster or a group of compute nodes can be reserved for specific scheduled project events, but such reservations must be requested in advance from the Talon System Manager and coordinated with other users/projects.