Helix Systems Policies
High-Performance Computing at the NIH
Accounts on the NIH Helix systems are for the use of researchers in NIH intramural research programs.
NIH Volunteers can maintain Helix/Biowulf accounts for the duration of their NIH status, but will have access to fewer Helix system resources than NIH employees and contractors.
The NIH Helix systems are for appropriate government use only.
System resources are for the work-related use of authorized users only.
Account sharing among multiple users is strictly prohibited. By NIH Access Control policy, a separate Helix account must be set up for each user.
E-Mail and Internet Use
Under its Appropriate Use of E-Mail and Internet Services policy, the NIH Office of Information Resources Management considers chain letters, joke messages, and advertisements to be inappropriate activities subject to disciplinary action.
Helix users are forbidden from transmitting or storing any Personally Identifiable Information (e.g. patient data containing names or social security numbers) or sensitive data anywhere on the Helix systems, including their home and data directories.
Users are responsible for reading the system messages and announcements. These will appear as messages during login, and will also be sent to all Helix users by email. [Archive of Helix & Biowulf messages]
Please do not access files or directories belonging to another user without explicit permission.
To improve system security and availability, a monthly maintenance cycle has been instituted. This cycle will generally involve a reboot of both Helix and the Biowulf login node (not the entire cluster). The reboots are scheduled at 7 am on the first Monday of every month, or the following Tuesday if that Monday is a holiday. Downtime during a reboot will typically be 10-15 minutes.
Scheduled maintenance that requires a longer downtime and emergency maintenance will be announced separately. Every effort will be made to minimize disruptions.
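For reference, the "first Monday of the month" reboot date can be computed with the Python standard library. This is only a sketch of the scheduling rule as stated above; it does not apply the holiday shift to Tuesday, since the federal holiday calendar is not encoded here:

```python
import calendar
from datetime import date

def first_monday(year: int, month: int) -> date:
    """Return the first Monday of the given month.

    Note: does NOT apply the holiday rule (the reboot moves to the
    following Tuesday when this Monday is a federal holiday).
    """
    # monthcalendar() yields weeks as 7-item lists, Monday first;
    # a 0 entry means that day falls outside this month.
    for week in calendar.monthcalendar(year, month):
        if week[calendar.MONDAY]:
            return date(year, month, week[calendar.MONDAY])

print(first_monday(2024, 9))  # -> 2024-09-02
```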
Helix is a single shared system with 64 cores (128 threads) and 1 TB of memory. It is intended for interactive use and for relatively short jobs. All compute-intensive jobs should be performed on the Biowulf cluster, which is intended for large-scale computing.
Length of job: Since Helix is rebooted once a month, a process can run for a maximum of 30 days.
Number of cores: Each user should use a maximum of 8 simultaneous cores (16 threads). Users may exceed 8 cores, and their processes will be allowed to continue as long as the system is not busy. If, however, a user's processes on more than 8 cores are impacting the system, we will ask the user to reduce the number of processes.
Memory: a maximum of 100 GB per user. If you expect to need more than this, please contact the Helix staff. Large-memory processes will be allowed to continue as far as possible, but if one is impacting the system, we may need to contact the user and terminate it.
I/O-intensive jobs: The I/O load of a program may be difficult to determine in advance, and users are sometimes unaware that their program generates a massive I/O load. It is also not always clear why the I/O load of a particular process is high enough to swamp the system. We monitor loads on Helix continuously, and in some cases we may have to contact the user or kill a process to prevent the system from hanging.
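As a rough self-check against the limits above, you can inspect your own processes' CPU, memory, and cumulative I/O on a Linux system. This is a sketch using the standard library only: the memory figure comes from `ps`-style fields via the `resource` module for the current process, and the I/O counters come from `/proc/<pid>/io`, which may not be readable on every system. The PID used here (the current process) is just a stand-in for whichever job you want to check:

```python
import os
import resource

# Peak resident memory of the current process, in kilobytes on Linux
# (ru_maxrss units are platform-dependent; KB on Linux, bytes on macOS).
usage = resource.getrusage(resource.RUSAGE_SELF)
print(f"peak resident memory: {usage.ru_maxrss} (KB on Linux)")

# Cumulative bytes actually read from / written to storage by a process.
# We inspect our own PID as a demo; substitute the PID of your job.
pid = os.getpid()
io_path = f"/proc/{pid}/io"
if os.access(io_path, os.R_OK):
    with open(io_path) as f:
        for line in f:
            if line.startswith(("read_bytes", "write_bytes")):
                print(line.strip())
else:
    print("per-process I/O accounting not available on this system")
```

A job whose `read_bytes`/`write_bytes` counters are growing by gigabytes per minute is a likely candidate for the I/O problems described above.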
Please contact the Helix staff (firstname.lastname@example.org, or 301-496-4825) if you have questions about the appropriateness of your job for a particular platform, or need more information about how to run your job.