The Owl’s Nest User Guide

The Owl’s Nest is a Linux cluster for high-performance computing at Temple University. The cluster is funded from multiple sources, including an NSF grant, startup funds from several Temple faculty, the Institute for Computational Molecular Science, the College of Science and Technology, and Computer Services. The cluster is hosted and operated collaboratively by staff of Computer Services and the College of Science and Technology. The initial version of the Owl’s Nest cluster was procured and installed in the winter of 2010/2011 and has been gradually expanded since then as additional funds and hardware became available.

The cluster consists of multiple sub-sections, each with a different hardware configuration, that all share (networked) home, (global) scratch, and application directories as well as the login servers. Its primary use is for small to moderate-size message-passing parallel applications, but it is also suitable for multi-threaded and, to a lesser degree, serial calculations. A small part of the cluster is dedicated to GPU computing using Nvidia Tesla and AMD FirePro 3D GPUs. Access to individual compute nodes or groups of compute nodes is managed through a batch system that uses Torque as the resource manager and Maui as the batch scheduler.
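As an illustration of how jobs reach the compute nodes through the batch system, a minimal Torque job script might look like the sketch below. The resource requests, job name, and program name are placeholder assumptions and are not taken from this guide; the cluster provides its own example scripts (see the note on /opt/examples/jobscripts below).

    #!/bin/bash
    # Request two nodes with eight processor cores each (illustrative values)
    #PBS -l nodes=2:ppn=8
    # Limit the total run time to one hour
    #PBS -l walltime=01:00:00
    # Name the job and merge its standard output and error streams
    #PBS -N example_job
    #PBS -j oe

    # Torque starts the script in the home directory; change to the
    # directory from which the job was submitted
    cd $PBS_O_WORKDIR

    # Launch a message-passing program on all 16 requested cores
    # (my_mpi_app is a placeholder executable name)
    mpirun -np 16 ./my_mpi_app

Such a script would be submitted with qsub and monitored with qstat; section 5, Running Applications Using the Batch System, covers the available queues and node types in detail.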

Access to the HPC cluster is in principle available to all members of the Temple community and, with restrictions, to collaborators. Accounts on the HPC cluster can be requested by sending an e-mail to hpc@temple.edu that states the planned usage and resource needs.

Note that copying and pasting code sections from the browser window may not copy the text as displayed, and may introduce non-ASCII characters. Job script files for the examples can be found under the directory /opt/examples/jobscripts. File names are specified above each example.
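To avoid such copy-and-paste problems, one option is to start from the provided files directly. The sketch below copies an example into the current directory and scans it for non-ASCII bytes; the file name example.sh is a placeholder, so list the directory for the actual names.

    # List the provided example job scripts
    ls /opt/examples/jobscripts

    # Copy one of them into the current directory
    # (example.sh is a placeholder name)
    cp /opt/examples/jobscripts/example.sh .

    # Report any lines containing bytes outside the printable ASCII range,
    # which is what pasting from a browser can introduce
    LC_ALL=C grep -n '[^[:print:][:space:]]' example.sh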

Contents

  1. HPC Training
  2. Specifications and Access
  3. Working in the Owl’s Nest Environment
  4. Application Development
  5. Running Applications Using the Batch System