NJIT is in the process of standardizing on groups ("clusters") of commodity computers as the vehicle for providing high performance computing services for researchers. These computers (nodes) can act independently, or in parallel, to handle large-scale, usually floating-point intensive, computationally demanding tasks.
Clusters consist of a master (head) node and slave nodes. The master node provides various management and software distribution functions; the slave nodes perform the computations.
Clusters are especially effective when a computational task can be divided into sub-tasks that can be performed independently and simultaneously (parallelization). Parallel computing requires fast communication between processes running on the slave nodes; this communication is provided by an implementation of the Message Passing Interface (MPI) standard.
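To illustrate the message-passing model, the following is a minimal C sketch in which worker processes each send a value to a master process. It assumes an installed MPI implementation (e.g., compiled with mpicc and launched with mpirun); it is an illustration, not NJIT-specific code.

```c
/* Minimal MPI sketch: each worker sends its rank to the master (rank 0).
   Requires an MPI implementation; compile with mpicc, run with mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    if (rank != 0) {
        /* workers: send our rank to the master */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        /* master: collect one message from every worker */
        int msg;
        for (int i = 1; i < size; i++) {
            MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("master received a message from rank %d\n", msg);
        }
    }
    MPI_Finalize();
    return 0;
}
```

Run with, for example, `mpirun -np 4 ./a.out`; each of the three workers sends its rank, and the master prints one line per worker.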
The user interacts with the cluster via scheduling and resource management software. This software can perform these same functions for groups of clusters (or other computational elements).
Scheduling and allocation of resources are determined by policies that are dependent on various factors, including group and individual ownership of nodes in a cluster.
There are currently three clusters in operation at NJIT: cappl.njit.edu, hydra.njit.edu, and kong.njit.edu. All are managed by IST University Computing Systems.
All of the standard Unix/Linux compilers, programs and utilities are available on all clusters. In addition, various special-purpose software is available.
Requests for additional software for use on clusters should be sent to email@example.com. Free software can usually be installed within 14 days of the request. Requests for commercial software should identify the funding source for that software.
Access to Clusters
Access to two of the clusters, cappl.njit.edu and hydra.njit.edu, is determined by the groups that have funded those machines (cappl: Electrical and Computer Engineering; hydra: Department of Mathematical Sciences).
Researchers wishing to access either cappl or hydra should contact the following:
cappl: Dr. Jie Hu, firstname.lastname@example.org
hydra: Dr. Michael Siegel, email@example.com
Researchers wishing to access kong.njit.edu should send a request to firstname.lastname@example.org to initiate the access process.
Software Available on Clusters
| Software | cappl | hydra | kong | Description |
| Sun Grid Engine | Y | Y | Y | Batch scheduler |
| MPI | Y | Y | Y | Implementation of the Message Passing Interface (a standard for message passing in distributed-memory parallel applications) |
| C, C++, Fortran | Y | Y | Y | Programming languages & compilers |
| Modules | Y | Y | Y | Cluster user tools |
| ACML | - | Y | Y | AMD Core Math Library |
| ATLAS | - | Y | Y | Automatically Tuned Linear Algebra Software |
| ScaLAPACK | - | Y | Y | Library of high-performance linear algebra routines for distributed-memory message-passing MIMD computers |
| Goto BLAS | - | Y | Y | High-performance BLAS (linear algebra) implementation |
| NAMD/CHARM | - | Y | Y | Molecular dynamics simulation |
| SUNDIALS | - | Y | Y | Suite of Nonlinear and Differential/Algebraic equation Solvers |
| Portland Group Compilers | - | Y | Y | Compilers for C, C++, and Fortran |
| IMSL | - | Y | - | Libraries of numerical analysis routines callable from widely used programming languages |
| Fluent | - | - | Y | Computational fluid dynamics |
| Gaussian | - | - | Y | Computational chemistry software |
| Sybyl | - | - | Y | Molecular modeling software for chemistry & chemical engineering |
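Jobs on these clusters are submitted through the Sun Grid Engine batch scheduler. A minimal submission script might look like the following. This is a sketch only: the job name, the parallel environment name "mpich", the slot count, and the program name are assumptions; consult the cluster's local documentation for the actual queue and parallel environment names.

```shell
#!/bin/bash
#$ -N my_mpi_job      # job name (illustrative)
#$ -cwd               # run the job from the current working directory
#$ -j y               # merge stdout and stderr into one output file
#$ -pe mpich 8        # request 8 slots in a hypothetical "mpich" parallel environment

# Launch the MPI program on the slots SGE allocated;
# $NSLOTS is set by SGE to the number of granted slots.
mpirun -np $NSLOTS ./my_program
```

The script would be submitted with `qsub`, and job status checked with `qstat`.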