News & Announcements
Important Change to NIH Biowulf -- "e2666" properties changed (Biowulf)
Date: 12 September 2011 08:09:39
From: steven fellini (sfellini@NIH.GOV)
A configuration change to the 8-core, 2.666 GHz, 24 GB nodes allows these nodes to support 16 threads of execution. Previously these nodes were selected with the c8 property. Effective immediately, two changes are being made to the PBS batch properties used to select these nodes:

-- The c8 property is replaced by the c16 property, reflecting the fact that these nodes can now support up to 16 processes or threads of execution. The swarm command will automatically place 16 processes on these nodes (see below for performance implications). Jobs allocating c16 nodes will only be "charged" for 8 cores against the per-user core maximum, effectively doubling your core limit when running on c16 nodes.

-- The m2 (gigabytes of memory per core) property for these nodes is replaced by m1, reflecting the change in the core-to-memory ratio.

The c8 property will remain valid for 60 days; after that, specifying it will result in an error. The g24 and g72 (gigabytes of memory per node) properties for these nodes are unchanged.

Performance

The Biowulf staff have run numerous application benchmarks on the e2666 nodes using 16 processes/threads of execution, with the following _generalized_ results:

-- Multi-threaded programs such as `cufflinks' will generally see improved performance when run with 12 or 16 threads rather than 8.

-- Running more than 8 single-threaded processes, such as `R' jobs, can yield higher throughput at the cost of somewhat longer per-job walltime.
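The new properties are selected the same way as the old ones, only the names change. A minimal sketch of what job submission might look like under the new properties (the script name, command-file name, and exact qsub resource syntax here are illustrative assumptions, not taken from the announcement):

```shell
# Request one of these nodes with the new c16 property
# (g24 still selects the 24 GB-per-node variant):
qsub -l nodes=1:c16:g24 myjob.sh

# The old form -- still accepted for 60 days, then an error:
qsub -l nodes=1:c8:g24 myjob.sh

# swarm will now place 16 processes per c16 node automatically;
# with the new m1 ratio, each process should assume roughly
# 1 GB of memory. commands.txt is a hypothetical command file.
swarm -f commands.txt
```

Jobs whose individual processes need more than ~1 GB of memory should continue to select nodes by total memory (g24 or g72), since those properties are unchanged.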