High Performance Computing Machine Specifications

High Performance Computing at NJIT


Table last modified: 11-Jul-2018 14:12
HPC Machine Specifications
Kong.njit.edu

Expansion[1]: Kong-5 | Kong-6 | Kong-7 | Kong-8 | Kong-9 | Kong-10 | Kong-11 | Kong-12 | Kong-13 | Cluster Total
Tartan designation[2]: Tartan-9 | Tartan-11 | Tartan-12 | Tartan-15 | Tartan-16 | Tartan-17 | Tartan-18 | Tartan-19 | Tartan-20 | —
Manufacturer: IBM | IBM | Supermicro | Sun | Dell | Microway | Microway | Microway | Microway | —
Model: iDataPlex dx360 M4 | iDataPlex dx360 M4 | SS2016 | X4600 | PowerEdge R630 | NumberSmasher | NumberSmasher-4X | NumberSmasher DualXeon Twin Server | — | —
Nodes: 2 | 2 | 314 | 1 | 3 | 5 | 12 | 2 | 4 | 345
• PROCESSORS •
CPUs per node: 2 | 2 | 2 | 8 | 2 | 2 | 2 | 2 | 2 | —
Cores per CPU: 6 | 10 | 4 | 4 | 10 | 10 | 10 | 10 | 10 | —
Cores per node: 12 | 20 | 8 | 32 | 20 | 20 | 20 | 20 | 20 | —
Total CPU cores: 24 | 40 | 2512 | 32 | 60 | 100 | 240 | 40 | 80 | 3128
Processor model[4]: Intel Xeon E5-2630 | Intel Xeon E5-2660 v2 | Intel Xeon L5520 | AMD Opteron 8384 | Intel Xeon E5-2660 v3 | Intel Xeon E5-2630 v4 | Intel Xeon E5-2630 v4 | Intel Xeon E5-2630 v4 | Intel Xeon E5-2630 v4 | —
Processor µarch: Sandy Bridge | Ivy Bridge | Nehalem | K10 Shanghai | Haswell | Broadwell | Broadwell | Broadwell | Broadwell | —
Processor launch: 2012 Q1 | 2013 Q3 | 2009 Q1 | 2008 Q4 | 2014 Q3 | 2016 Q1 | 2016 Q1 | 2016 Q1 | 2016 Q1 | —
Processor speed, GHz: — | — | — | — | — | — | — | — | — | —
• MEMORY •
RAM per node, GB: 128 | 128 | 64 | 128 | 128 | 256 | 256 | 256 | 256 | —
RAM per CPU, GB: 64 | 64 | 32 | 16 | 64 | 128 | 128 | 128 | 128 | —
RAM per core, GB: 10.67 | 6.4 | 8 | 4 | 6.4 | 12.8 | 12.8 | 12.8 | 12.8 | —
Total RAM, GB: 256 | 256 | 20096 | 128 | 384 | 1280 | 3072 | 512 | 1024 | 27008
• CO-PROCESSORS • (three GPU groups; the source does not identify which expansions host them)
GPU model: Nvidia K20X | Nvidia Tesla P100 16GB “Pascal” | Nvidia Tesla P100 16GB “Pascal”
Cores per GPU: 2688 | 3584 | 3584
Total GPU cores: 10752 | 35840 | 14336 | cluster total 60928
RAM per GPU, GB: 6 | 16 | 16
Total GPU RAM, GB: 24 | 160 | 64 | cluster total 248
Max GPU GFLOPS[10]: 3950 | — | —
Total GPU GFLOPS: 15800 | — | —
• STORAGE •
Local disk per node, GB[6]: 500 | 500 | 1000 | 146 | 1024 | 1024 | 1024 | 1024 | 1024 | —
Total local disk, GB: 1000 | 1000 | 314000 | 146 | 3072 | 5120 | 12288 | 2048 | 4096 | 342770
Shared scratch[7]: 3374; 3374; /nscratch, 151 GB
NFS /home, GB: 8261
Node interconnect: 10GbE | 10GbE | GigE | GigE | 10GbE | 10GbE | InfiniBand FDR | 10GbE | 10GbE | —
• RATINGS •
Max GFLOPS[9]: 207 | 330 | 21383.4 | 322.8 | 585 | 825 | 1980 | 330 | 660 | 26623.2
CPU Mark, per CPU[11]: 19106 | 13659 | 4357 | 7051 | — | — | — | — | — | —
CPU Mark, per node: 38212 | 27318 | 8714 | 56408 | 22733 | 18807 | 18807 | 18807 | 18807 | —
CPU Mark, per node totaled: 76424 | 54636 | 2736196 | 56408 | 68199 | 94035 | 225684 | 37614 | 75228 | 3424424
• POWER •
Watts per node: — | — | 300 | 1975 | 1500 | 1600 | 1620 | 1620 | 1000 | —
Total Watts: — | — | 94200 | 1975 | 4500 | 8000 | 19440 | 3240 | 4000 | —
MFLOPS per Watt: —
• ETC •
Access model: Reserved[12] | Public | Public | Public | Reserved[12] | Partly[13] | Reserved[14] | Reserved[13] | Reserved[16] | —
Head node AFS client: Yes
Compute nodes AFS client: Yes
In-service date: Aug 2013 | Oct 2013 | Mar 2015 | Aug 2015 | Nov 2016 | Aug 2017 | Sep 2017 | Sep 2017 | May 2018 | —
Node numbers: 147-150 | 151, 152 | 100-111, 200-401, 500-599 | 153 | 402-404 | 412-416 | 417-428 | 429-430 | 431-434 | —

Phi.njit.edu and Gorgon.njit.edu

(values: Phi | Gorgon)
Tartan designation[2]: Tartan-4 | Tartan-3
Manufacturer: VMware[5] | Microway
Model: VMware[5] | NumberSmasher-4X
Nodes: 1 | 1
CPUs per node: 2 | 4
Cores per CPU: 8 | 8
Cores per node: 16 | 32
Total CPU cores: 16 | 32
Processor model[4]: Intel Xeon E5-2680 | AMD Opteron
Processor µarch: Sandy Bridge | K10 Maranello
Processor launch: 2012 Q1 | 2010 Q1
Processor speed, GHz: — | —
RAM per node, GB: 64 | 64
RAM per CPU, GB: 32 | 16
RAM per core, GB: 4 | 2
Total RAM, GB: 64 | 64
Shared scratch[7]: /scratch, 938 GB | /gscratch, 361 GB
NFS /home, GB: 8261 | —
Max GFLOPS[9]: 162 | 276
CPU Mark, per CPU[11]: 6814 | 6814
CPU Mark, per node: 13628 | 27256
CPU Mark, per node totaled: 13628 | 27256
Access model: Public | Reserved[1]
Head node AFS client: Yes | Yes
Compute nodes AFS client: Yes | Yes
In-service date: Oct 2010; upgraded Sep 2017[15] | Aug 2010

Stheno.njit.edu

Expansion[1]: Stheno-1 | Stheno-2 | Stheno-3 | Stheno-4 | Stheno-5 | Cluster Total
Tartan designation[2]: Tartan-5 | Tartan-6 | Tartan-10 | Tartan-13 | Tartan-14 | —
Manufacturer: Microway | IBM | IBM | IBM | — | —
Model: — | iDataPlex dx360 M4 | iDataPlex dx360 M4 | iDataPlex dx360 M4 | — | —
Nodes: 8 | 8 | 13 | 2 | 1 | 32
CPUs per node: 2 | 2 | 2 | 2 | 2 | —
Cores per CPU: 6 | 6 | 6 | 6 | 10 | —
Cores per node: 12 | 12 | 12 | 12 | 20 | —
Total CPU cores: 96 | 96 | 156 | 24 | 20 | 392
Processor model[4]: Intel Xeon E5649 | Intel Xeon E5-2630 | Intel Xeon E5-2630 | Intel Xeon E5-2630 | Intel Xeon E5-2660 v2 | —
Processor µarch: Westmere | Sandy Bridge | Sandy Bridge | Sandy Bridge | Ivy Bridge | —
Processor launch: 2011 Q1 | 2012 Q1 | 2012 Q1 | 2012 Q1 | 2013 Q3 | —
Processor speed, GHz: — | — | — | — | — | —
RAM per node, GB: 96 | 128 | 128 | 128 | 128 | —
RAM per CPU, GB: 48 | 64 | 64 | 64 | 64 | —
RAM per core, GB: 8 | 10.67 | 10.67 | 10.67 | 6.4 | —
Total RAM, GB: 768 | 1024 | 1664 | 256 | 128 | 3840
GPU model: — | — | — | Nvidia K20 | Nvidia K20m | —
Cores per GPU: — | — | — | 2496 | 2668 | —
Total GPU cores: — | — | — | 9984 | 5336 | 15320
RAM per GPU, GB: — | — | — | 5 | 6 | —
Total GPU RAM, GB: — | — | — | 20 | 12 | 32
Max GPU GFLOPS[10]: — | — | — | 3520 | 3950 | —
Total GPU GFLOPS: — | — | — | 14080 | 7900 | —
Local disk per node, GB[6]: 117 | 117 | 500 | 500 | 500 | —
Total local disk, GB: 936 | 936 | 6500 | 1000 | 500 | 9872
NFS /home, GB: 2728
Node interconnect: InfiniBand QDR | InfiniBand FDR | InfiniBand FDR | InfiniBand FDR | — | —
Max GFLOPS[9]: 910.8 | 828 | 1345.5 | 207 | 165 | 3456.3
CPU Mark, per CPU[11]: 8118 | 19106 | 19106 | 19106 | 13659 | —
CPU Mark, per node: 16236 | 38212 | 38212 | 38212 | 27318 | —
CPU Mark, per node totaled: 129888 | 305696 | 496756 | 76424 | 27318 | 1036082
Access model: Reserved[1]
In-service date: Nov 2011 | Sep 2012 | Aug 2013 | May 2015 | Jun 2015 | —
Node numbers: 0-7 | 8-15 | 16-27 | 30-31 | 32 | —

• SOFTWARE • (all clusters)
Scheduler: SunGridEngine 6.2
Cluster mgmt: Warewulf
Operating System: SL 5.5
Kernel Release: 398504313711725

NJIT HPC (grand totals)

Nodes: 379
Total CPU cores: 3568
Total RAM, GB: 30976
Total GPU cores: 76248
Total GPU RAM, GB: 280
Total local disk, GB: 352642
Max GFLOPS[9]: 30517.5
CPU Mark, per node totaled: 4501390

Notes (last modified: 11-Jul-2018 14:12):
[1]  Access to Stheno and Gorgon is restricted to Department of Mathematics use.
[2]  See https://ist.njit.edu/tartan-high-performance-computing-initiative
[3]  A small number of Kong nodes are reserved by specific faculty.
[4]  All active systems are 64-bit.
[5]  Phi is a virtual machine running on VMware, provisioned as shown here; the actual hardware is irrelevant.
[6]  A small portion of compute nodes' local disk is used for AFS cache and swap; the remainder is /scratch available to users.
[7]  Shared scratch is writable by all nodes via NFS (/nscratch) or locally mounted for one-node systems (Phi, Gorgon).
[8]  Core counts do not include hyperthreading
[9]  Most GFLOPS figures are estimated as cores*clock*(FLOPs/cycle), with 3.75 FLOPs/cycle conservatively assumed instead of the typical 4.0.
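Note [9]'s estimate can be reproduced directly; a minimal sketch in Python (the helper name is ours), using Kong-5's 24 cores and the Intel Xeon E5-2630's 2.3 GHz nominal clock:

```python
def peak_gflops(total_cores: int, clock_ghz: float, flops_per_cycle: float = 3.75) -> float:
    """Estimate peak GFLOPS as cores * clock * FLOPs-per-cycle, per note [9]."""
    return total_cores * clock_ghz * flops_per_cycle

# Kong-5: 2 nodes x 12 cores = 24 cores at 2.3 GHz.
print(round(peak_gflops(24, 2.3), 1))   # 207.0, matching the table's Max GFLOPS
```

The same arithmetic reproduces the other rows, e.g. Stheno-3's 156 cores at 2.3 GHz give 1345.5 GFLOPS.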
[10]  Peak single precision floating point performance as per manufacturer's specifications
[11]  PassMark CPU Mark from http://cpubenchmark.net/ or https://www.cpubenchmark.net/multi_cpu.html
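Where the source populates all three CPU Mark rows, they are related by straightforward multiplication: per-node is the per-CPU score times the socket count, and the "per node totaled" row is the per-node score times the node count. A small sketch (Python; helper names are ours) checked against Kong-5's figures:

```python
def cpu_mark_per_node(per_cpu: int, cpus_per_node: int) -> int:
    # Per-node CPU Mark: per-CPU score times number of sockets.
    return per_cpu * cpus_per_node

def cpu_mark_totaled(per_node: int, nodes: int) -> int:
    # "Per node totaled" row: per-node score times node count.
    return per_node * nodes

# Kong-5: 19106 per CPU, 2 CPUs per node, 2 nodes.
per_node = cpu_mark_per_node(19106, 2)
print(per_node, cpu_mark_totaled(per_node, 2))  # 38212 76424
```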
[12]  Access to Kong-5 and Kong-9 is reserved for Dr. C. Dias and designees.
[13]  Access to Kong-10 is reserved for Data Sciences faculty and students; contact ARCS@NJIT.EDU for additional information.
[14]  Access to Kong-11 is reserved for Dr. G.Gor and designees.
[15]  Phi was upgraded; it originally had one 4-core CPU and 32 GB RAM.
[16]  Access to Kong-13 is reserved for Dr. E. Nowadnick and designees.
[17]  This document is also available as ODS and CSV at http://web.njit.edu/all_topics/HPC/specs/specs.ods and http://web.njit.edu/all_topics/HPC/specs/specs.csv
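Since note [17] also publishes these specifications as CSV, the data can be post-processed with standard tooling. A sketch using Python's csv module; the inline sample and its column names are illustrative assumptions, not the verified layout of specs.csv:

```python
import csv
import io

# Illustrative sample only; the real specs.csv field names may differ.
sample = io.StringIO("expansion,nodes\nKong-5,2\nKong-6,2\nKong-7,314\n")

# Sum a numeric column across rows, e.g. node counts per expansion.
total_nodes = sum(int(row["nodes"]) for row in csv.DictReader(sample))
print(total_nodes)  # 318
```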
[18]  Cappel.njit.edu was a 16-node, 32-bit cluster put in service in Feb 2006. It was taken out of service in Jan 2010 and its users/services migrated to reserved nodes on Kong.
[19]  Hydra.njit.edu was a 76-node, 960 GFLOP cluster put in service in Jun 2006. It was taken out of service in Sep 2013 and its users/services migrated to Stheno.
[20]  Kong-1 was a 112-node, 800 GFLOP cluster put in service in Apr 2007. It was taken out of service in Jan 2015 when Kong-7 was deployed. If you require the old specifications please see https://web.njit.edu/all_topics/HPC/specs/index.2014.php.
[21]  Kong-2, Kong-3, and Kong-4 (total 41 nodes / 2409 GFLOP / installed 2008 & 2013) were taken out of service in 2017. If you require the old specifications please see https://web.njit.edu/all_topics/HPC/specs/index.2016.php.
Last Updated: August 24, 2017