High Performance Computing on Stheno

The Stheno.njit.edu high performance computing (HPC) cluster is managed nearly identically to the Kong cluster, so most of Kong's documentation also applies to Stheno.

The Stheno cluster is physically distinct from the Kong cluster. Each cluster has its own headnode, which hosts user disk space accessible from that cluster's compute nodes. Both clusters have access to AFS disk space.

Stheno uses an InfiniBand interconnect between the headnode and compute nodes, whereas Kong uses slower Gigabit Ethernet for the same purpose. Older Stheno nodes (numbered below 15) have 32 Gbit/s QDR InfiniBand; newer nodes have 54.54 Gbit/s FDR InfiniBand. The full hardware specifications of the cluster appear in the HPC Machine Specifications table.

Whereas Kong is open to all NJIT researchers, Stheno is funded by the Department of Mathematical Sciences (DMS) and is open only to researchers associated with DMS. DMS faculty can obtain access by emailing Academic & Research Computing Systems (ARCS) at arcs@njit.edu; please send the request from your official @NJIT.EDU email address. DMS students, postdocs, and other researchers must have their DMS faculty advisor request access on their behalf.


Last Updated: March 22, 2016