High Performance Computing Machine Specifications

High Performance Computing at NJIT

Table: High Performance Computing at NJIT

Table last modified: 1-Sep-2015 12:03
HPC Machine Specifications
| Expansion [1] | Kong-2 | Kong-3 | Kong-4 | Kong-5 | Kong-6 | Kong-7 | Kong-8 | Kong.njit.edu total | Phi.njit.edu | Gorgon.njit.edu | Stheno-1 | Stheno-2 | Stheno-3 | Stheno-4 | Stheno-5 | Stheno.njit.edu total | NJIT HPC grand total |
| Tartan designation | Tartan-1 | Tartan-7 | Tartan-8 | Tartan-9 | Tartan-11 | Tartan-12 | Tartan-15 | | Tartan-4 | Tartan-3 | Tartan-5 | Tartan-6 | Tartan-10 | Tartan-13 | Tartan-14 | | |
| Manufacturer | Sun | Sun | Sun | IBM | IBM | Supermicro | Sun | | VMware [5] | Microway | Microway | IBM | IBM | IBM | | | |
| Model | SB/X6220 | SB/X6220 | SB/X8420 | iDataPlex dx360 M4 | iDataPlex dx360 M4 | SS2016 | X4600 | | VMware [5] | NumberSmasher-4X | | iDataPlex dx360 M4 | iDataPlex dx360 M4 | iDataPlex dx360 M4 | | | |
| Nodes | 11 | 12 | 18 | 2 | 2 | 314 | 1 | 360 | 1 | 1 | 8 | 8 | 13 | 2 | 1 | 32 | 394 |
• PROCESSORS •
| CPUs per node | 2 | 2 | 4 | 2 | 2 | 2 | 8 | | 1 | 4 | 2 | 2 | 2 | 2 | 2 | | |
| Cores per CPU | 2 | 2 | 2 | 6 | 10 | 4 | 4 | | 4 | 8 | 6 | 6 | 6 | 6 | 10 | | |
| Cores per node | 4 | 4 | 8 | 12 | 20 | 8 | 32 | | 4 | 32 | 12 | 12 | 12 | 12 | 20 | | |
| Total CPU cores | 44 | 48 | 144 | 24 | 40 | 2512 | 32 | 2844 | 4 | 32 | 96 | 96 | 156 | 24 | 20 | 392 | 3272 |
| Processor model [4] | AMD Opteron 2218 | AMD Opteron 2218 | AMD Opteron 8220 | Intel Xeon E5-2630 (Sandy Bridge) | Intel Xeon E5-2660 v2 | Intel Xeon L5520 | AMD Opteron 8384 | | AMD Opteron 8384 | AMD Opteron 6134 | Intel Xeon E5649 (Westmere) | Intel Xeon E5-2630 (Sandy Bridge) | Intel Xeon E5-2630 (Sandy Bridge) | Intel Xeon E5-2630 (Sandy Bridge) | Intel Xeon E5-2660 v2 | | |
| Processor speed, GHz | 2.6 | 2.6 | 2.8 | 2.3 | 2.2 | 2.27 | 2.69 | | 2.7 | 2.3 | 2.53 | 2.3 | 2.3 | 2.3 | 2.2 | | |
• MEMORY •
| RAM per node, GB | 16 | 64 | 64 | 128 | 128 | 64 | 128 | | 32 | 64 | 96 | 128 | 128 | 128 | 128 | | |
| RAM per CPU, GB | 8 | 32 | 16 | 64 | 64 | 32 | 16 | | 32 | 16 | 48 | 64 | 64 | 64 | 64 | | |
| RAM per core, GB | 4 | 16 | 8 | 10.67 | 6.4 | 8 | 4 | | 8 | 2 | 8 | 10.67 | 10.67 | 10.67 | 6.4 | | |
| Total RAM, GB | 176 | 768 | 1152 | 256 | 256 | 20096 | 128 | 22832 | 32 | 64 | 768 | 1024 | 1664 | 256 | 128 | 3840 | 26768 |
• CO-PROCESSORS •
| GPU model | | | | | | | | Nvidia K20X | | | | | | | | Nvidia K20, Nvidia K20m | |
| GPUs | | | | | | | | 4 | | | | | | | | 6 (4 K20, 2 K20m) | 10 |
| Cores per GPU | | | | | | | | 2688 | | | | | | | | 2496 (K20), 2668 (K20m) | |
| Total GPU cores | | | | | | | | 10752 | | | | | | | | 15320 (9984 K20, 5336 K20m) | 26072 |
| RAM per GPU, GB | | | | | | | | 6 | | | | | | | | 5 (K20), 6 (K20m) | |
| Total GPU RAM, GB | | | | | | | | 24 | | | | | | | | 32 (20 K20, 12 K20m) | 56 |
• STORAGE •
| Local disk per node, GB [6] | 140 | 0 | 70 | 500 | 500 | 1000 | 146 | | | | 117 | 117 | 500 | 500 | 500 | | |
| Total local disk, GB | 1540 | 0 | 1260 | 1000 | 1000 | 314000 | 146 | 318946 | | | 936 | 936 | 6500 | 1000 | 500 | 9872 | 328818 |
| Shared scratch [7] | | | | | | | | /bscratch, 5 TB; /nscratch, 1.3 TB | /scratch, 151 GB | /scratch, 938 GB | | | | | | /gscratch, 361 GB | |
| NFS /home, GB | | | | | | | | 2437 | | | | | | | | 2728 | |
| Node interconnect | | | | | | | | GigE, 10GbE | | | | | | | | InfiniBand QDR, InfiniBand FDR | |
• SOFTWARE •
| Scheduler | | | | | | | | SunGridEngine 6.2 | | | | | | | | SunGridEngine 6.2 | |
| Cluster mgmt | | | | | | | | Warewulf | | | | | | | | Warewulf | |
| Operating System | | | | | | | | SL 6.2 | SL 5.5 | SL 5.5 | | | | | | SL 5.5 | |
| Kernel Release | 11725398504313711725 |
• RATINGS •
| Max GFLOPS [9] | 429 | 468 | 1512 | 207 | 330 | 21383.4 | 322.8 | 24652.2 | 40.5 | 276 | 910.8 | 828 | 1345.5 | 207 | 165 | 3456.3 | 28425 |
| CPU Mark, per CPU [11] | 1574 | 1574 | 3651 | 19106 | 13659 | 4357 | 7051 | | 6814 | 6814 | 8118 | 19106 | 19106 | 19106 | 13659 | | |
| CPU Mark, per node | 3148 | 3148 | 14604 | 38212 | 27318 | 8714 | 56408 | | 6814 | 27256 | 16236 | 38212 | 38212 | 38212 | 27318 | | |
| CPU Mark, sum of CPUs*nodes | 34628 | 37776 | 262872 | 76424 | 54636 | 2736196 | 56408 | 3258940 | 6814 | 27256 | 129888 | 305696 | 496756 | 76424 | 27318 | 906194 | |
| Max GPU GFLOPS [10] | | | | | | | | 3950 | | | | | | | | 3520 (K20), 3950 (K20m) | |
| Total GPU GFLOPS | | | | | | | | 15800 | | | | | | | | 14080 (K20), 7900 (K20m) | |
• POWER •
| Watts per node | | | | | | 300 | 1975 | | | | | | | | | | |
| Total Watts | | | | | | 94200 | 1975 | | | | | | | | | | |
| MFLOPS per Watt | | | | | | | | | | | | | | | | | |
• ETC •
| General-access | No [2] | Yes | Mostly [3] | No [12] | Yes | Yes | Yes | | Yes | No [1] | | | | | | No [1] | |
| Head node AFS client | | | | | | | | Yes | Yes | Yes | | | | | | Yes | |
| Compute nodes AFS client | | | | | | | | Yes | Yes | Yes | | | | | | Yes | |
| In-service date | Nov 2008 | Sep 2013 | Apr 2013 | Aug 2013 | Oct 2013 | Mar 2015 | Aug 2015 | | Oct 2010 | Aug 2010 | Nov 2011 | Sep 2012 | Aug 2013 | May 2015 | Jun 2015 | | |
| Node numbers | 112-122 | 141-146 | 123-140 | 147-150 | 151, 152 | 0-111, 200-401 | 153 | | | | 0-7 | 8-15 | 16-27 | 30-31 | 32 | | |

Notes (last modified: 1-Sep-2015 12:03):
[1]  Access to Stheno and Gorgon is restricted to Department of Mathematics use.
[2]  Access to Kong-2 is reserved for Dr. J. Bozzelli and designees.
[3]  A small number of Kong nodes are reserved by specific faculty.
[4]  All active systems are 64-bit.
[5]  Phi is a virtual machine running on VMware, provisioned as shown here; the actual hardware is irrelevant.
[6]  A small portion of compute nodes' local disk is used for AFS cache and swap; the remainder is /scratch available to users.
[7]  Shared scratch is writable by all nodes via NFS (bscratch) or Gluster (gscratch), or is locally mounted on one-node systems (Phi, Gorgon).
[8]  Core counts do not include hyperthreading.
[9]  Most GFLOPS values are estimated as cores * clock * FLOPs/cycle; 3.75 FLOPs/cycle is conservatively assumed instead of the typical 4.0 (a worked example follows these notes).
[10]  Peak single-precision floating-point performance per the manufacturer's specifications.
[11]  PassMark CPU Mark from http://cpubenchmark.net/
[12]  Access to Kong-5 is reserved for Dr. C. Dias and designees.
[13]  This document is also available as ODS and CSV at http://web.njit.edu/all_topics/HPC/specs/specs.ods and http://web.njit.edu/all_topics/HPC/specs/specs.csv.
[14]  Cappel.njit.edu was a 16-node, 32-bit cluster put in service in Feb 2006. It was taken out of service in Jan 2010 and its users/services migrated to reserved nodes on Kong.
[15]  Hydra.njit.edu was a 76-node, 960 GFLOP cluster put in service in Jun 2006. It was taken out of service in Sep 2013 and its users/services migrated to Stheno.
[16]  Kong-1 was a 112-node, 800 GFLOP cluster put in service in Apr 2007. It was taken out of service in Jan 2015 when Kong-7 was deployed. If you require the old specifications, please see https://web.njit.edu/all_topics/HPC/specs/index.2014.php.
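As a quick cross-check of notes [9] and [11], the short Python sketch below recomputes a few of the derived Kong-7 figures from that column's raw values in the table above. The variable names are illustrative only; all numbers are taken directly from the table.

```python
# Recompute derived Kong-7 figures from its raw specs in the table:
# 314 nodes, 2 CPUs per node, 4 cores per CPU, 2.27 GHz, CPU Mark 4357 per CPU.
nodes = 314
cpus_per_node = 2
cores_per_cpu = 4
clock_ghz = 2.27           # Intel Xeon L5520
flops_per_cycle = 3.75     # conservative value from note [9]; 4.0 is typical
cpu_mark_per_cpu = 4357    # PassMark CPU Mark, note [11]

total_cores = nodes * cpus_per_node * cores_per_cpu
max_gflops = total_cores * clock_ghz * flops_per_cycle
cpu_mark_sum = cpu_mark_per_cpu * cpus_per_node * nodes

print(total_cores)           # 2512    -> "Total CPU cores"
print(round(max_gflops, 1))  # 21383.4 -> "Max GFLOPS"
print(cpu_mark_sum)          # 2736196 -> "CPU Mark, sum of CPUs*nodes"
```

The same arithmetic reproduces the other per-expansion columns and the cluster totals.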
Last Updated: August 24, 2017