As the centralized resource for high performance scientific computing in the Center for Information Technology, the Helix Systems currently comprise two computer systems: Helix and Biowulf. The features and capabilities of each system are described in further detail below. Multiple systems allow different high performance computer architectures to be matched to the applications and programming styles for which they are best suited. A multiple-system model also makes it possible to deploy new systems with the latest and most cost-effective technology much sooner than would otherwise be feasible. The systems are integrated in many respects, so a user on one system may actually be using the computational resources of another; most user disk space is common to all machines. All of the Helix Systems offer a full development environment for users writing their own code in C, C++, and Fortran. By obtaining a Helix Systems account, users have access to each of the computing systems.
| | Helix | Biowulf |
|---|---|---|
| Type of System | SunFire X4600-M2 | PC/Linux Cluster |
| Architecture | Dual-Core AMD Opteron™ Processor | Parallel computer with distributed memory; gigabit ethernet, Myrinet, and Infiniband interconnects |
| Number of Processors | 16 x 3.0 GHz | |
Helix is the hostname of the primary login machine of the Helix Systems. Its main role is to provide a full suite of third-party scientific applications, as well as mail hosting for some of the other systems.
Helix is intended for interactive, computationally intensive jobs that run for less than one hour.
Biowulf Cluster (biowulf.nih.gov)
The 15,500+ processor parallel computer, named Biowulf, represents the latest trend in high performance computing: a distributed-memory system composed entirely of commodity components such as Intel and AMD processors and fast ethernet interconnects. The result is supercomputing performance at a fraction of the cost. To enable access to Biowulf, Helix users must submit a short description of their project (http://biowulf.nih.gov/account_request.html).
Helixdrive is a system that lets users access their Helix filesystems from their desktops. It uses Samba to export users' /data, /home, and /scratch directories as Windows shares. More information is available under "Mount Helix Systems Directories To Desktop."
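As an illustration of how a Samba-exported share like those above might be attached from a Linux desktop, the following is a hedged sketch; the server name `helixdrive.nih.gov`, the share name, the mount point, and the domain are all assumptions, not documented values — consult "Mount Helix Systems Directories To Desktop" for the actual parameters.

```shell
# Hypothetical example: mount a Samba-exported /data share over CIFS.
# Server name, share name, mount point, and domain are placeholders.
sudo mkdir -p /mnt/helixdata
sudo mount -t cifs //helixdrive.nih.gov/username /mnt/helixdata \
    -o username=username,domain=NIH
```

On Windows, the equivalent operation is mapping the share as a network drive (e.g. with `net use`), since Samba presents the directories as ordinary Windows shares.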
In addition to the computational systems described above, the Helix staff maintains a number of Network Appliance FAS960 Filers and two Network Appliance FAS3050 Filers. The filers provide Helix Systems users with access to high performance NFS RAID file systems over a dedicated high speed network. Researchers can seamlessly move between computational platforms without having to transfer their data or maintain multiple copies. A total of 20 terabytes of online storage is accessible in a high availability configuration that includes redundant components and clustered failover.
Additionally, hierarchical storage management software running on a Network Appliance R200 system and two R100 systems provides 70 terabytes of nearline and backup storage.