High Performance Computing Machine Specifications

High Performance Computing at NJIT

Table: High Performance Computing at NJIT

Table last modified: 9-Nov-2017 18:31
HPC Machine Specifications
Kong.njit.edu | Phi.njit.edu | Gorgon.njit.edu | Stheno.njit.edu | NJIT HPC
Each row lists one value per column in the order below; "–" marks an empty cell.
Expansion [1]:  Kong-5 | Kong-6 | Kong-7 | Kong-8 | Kong-9 | Kong-10 | Kong-11 | Kong cluster total | Phi | Gorgon | Stheno-1 | Stheno-2 | Stheno-3 | Stheno-4 | Stheno-5 | Stheno cluster total | Grand totals
Tartan designation [2]:  Tartan-9 | Tartan-11 | Tartan-12 | Tartan-15 | Tartan-16 | Tartan-17 | Tartan-18 | – | Tartan-4 | Tartan-3 | Tartan-5 | Tartan-6 | Tartan-10 | Tartan-13 | Tartan-14 | – | –
Manufacturer:  IBM | IBM | Supermicro | Sun | Dell | Microway | Microway | – | VMware [5] | Microway | Microway | IBM | IBM | IBM | – | – | –
Model:  iDataPlex dx360 M4 | iDataPlex dx360 M4 | SS2016 | X4600 | PowerEdge R630 | NumberSmasher | NumberSmasher-4X | – | VMware [5] | NumberSmasher-4X | – | iDataPlex dx360 M4 | iDataPlex dx360 M4 | iDataPlex dx360 M4 | – | – | –
Nodes:  2 | 2 | 314 | 1 | 3 | 5 | 12 | 339 | 1 | 1 | 8 | 8 | 13 | 2 | 1 | 32 | 373
• PROCESSORS •
CPUs per node:  2 | 2 | 2 | 8 | 2 | 2 | 2 | – | 2 | 4 | 2 | 2 | 2 | 2 | 2 | – | –
Cores per CPU:  6 | 10 | 4 | 4 | 10 | 10 | 10 | – | 8 | 8 | 6 | 6 | 6 | 6 | 10 | – | –
Cores per node:  12 | 20 | 8 | 32 | 20 | 20 | 20 | – | 16 | 32 | 12 | 12 | 12 | 12 | 20 | – | –
Total CPU cores:  24 | 40 | 2512 | 32 | 60 | 100 | 240 | 3008 | 16 | 32 | 96 | 96 | 156 | 24 | 20 | 392 | 3448
Processor model [4]:  Intel Xeon E5-2630 | Intel Xeon E5-2660 v2 | Intel Xeon L5520 | AMD Opteron 8384 | Intel Xeon E5-2660 v3 | Intel Xeon E5-2630 v4 | Intel Xeon E5-2630 v4 | – | Intel Xeon E5-2680 | AMD Opteron 6134 | Intel Xeon E5649 | Intel Xeon E5-2630 | Intel Xeon E5-2630 | Intel Xeon E5-2630 | Intel Xeon E5-2660 v2 | – | –
Processor µarch:  Sandy Bridge | Ivy Bridge | Nehalem | K10 Shanghai | Haswell | Broadwell | Broadwell | – | Sandy Bridge | K10 Maranello | Westmere | Sandy Bridge | Sandy Bridge | Sandy Bridge | Ivy Bridge | – | –
Processor launch:  2012 Q1 | 2013 Q3 | 2009 Q1 | 2008 Q4 | 2014 Q3 | 2016 Q1 | 2016 Q1 | – | 2012 Q1 | 2010 Q1 | 2011 Q1 | 2012 Q1 | 2012 Q1 | 2012 Q1 | 2013 Q3 | – | –
Processor speed, GHz:  2.3 | 2.2 | 2.27 | 2.69 | 2.6 | 2.2 | 2.2 | – | 2.7 | 2.3 | 2.53 | 2.3 | 2.3 | 2.3 | 2.2 | – | –
• MEMORY •
RAM per node, GB:  128 | 128 | 64 | 128 | 128 | 256 | 256 | – | 64 | 64 | 96 | 128 | 128 | 128 | 128 | – | –
RAM per CPU, GB:  64 | 64 | 32 | 16 | 64 | 128 | 128 | – | 32 | 16 | 48 | 64 | 64 | 64 | 64 | – | –
RAM per core, GB:  10.67 | 6.4 | 8 | 4 | 6.4 | 12.8 | 12.8 | – | 4 | 2 | 8 | 10.67 | 10.67 | 10.67 | 6.4 | – | –
Total RAM, GB:  256 | 256 | 20096 | 128 | 384 | 1280 | 3072 | 25472 | 64 | 64 | 768 | 1024 | 1664 | 256 | 128 | 3840 | 29440
• CO-PROCESSORS •
GPU model:  Kong: Nvidia K20X and Nvidia Tesla P100 16GB "Pascal"; Stheno: Nvidia K20 and Nvidia K20m
GPUs:  Kong: 4 (K20X) + 10 (P100) = 14; Stheno: 4 (K20) + 2 (K20m) = 6; NJIT HPC total: 20
Cores per GPU:  K20X: 2688; P100: 3584; K20: 2496; K20m: 2668
Total GPU cores:  Kong: 10752 + 35840 = 46592; Stheno: 9984 + 5336 = 15320; NJIT HPC total: 61912
RAM per GPU, GB:  K20X: 6; P100: 16; K20: 5; K20m: 6
Total GPU RAM, GB:  Kong: 24 + 160 = 184; Stheno: 20 + 12 = 32; NJIT HPC total: 216
• STORAGE •
Local disk per node, GB [6]:  500 | 500 | 1000 | 146 | 1024 | 1024 | – | – | – | – | 117 | 117 | 500 | 500 | 500 | – | –
Total local disk, GB:  1000 | 1000 | 314000 | 146 | 3072 | 5120 | – | 324338 | – | – | 936 | 936 | 6500 | 1000 | 500 | 9872 | 334210
Shared scratch [7]:  3374 | 3374 | /nscratch, 151 GB | /scratch, 938 GB | /gscratch, 361 GB
NFS /home, GB:  8261 | 8261 | 2728
Node interconnect:  10GbE | 10GbE | GigE | GigE | 10GbE | 10GbE | InfiniBand FDR | – | – | – | InfiniBand QDR | InfiniBand FDR | InfiniBand FDR | InfiniBand FDR | – | – | –
• SOFTWARE •
Scheduler:  SunGridEngine 6.2
Cluster mgmt:  Warewulf
Operating System:  SL 5.5 | SL 5.5 | SL 5.5
Kernel Release:  398504313711725
• RATINGS •
Max GFLOPS [9]:  207 | 330 | 21383.4 | 322.8 | 585 | 825 | 1980 | 25633.2 | 162 | 276 | 910.8 | 828 | 1345.5 | 207 | 165 | 3456.3 | 29527.5
CPU Mark, per CPU [11]:  19106 | 13659 | 4357 | 7051 | – | – | – | – | 6814 | 6814 | 8118 | 19106 | 19106 | 19106 | 13659 | – | –
CPU Mark, per node:  38212 | 27318 | 8714 | 56408 | 22733 | 18807 | 18807 | – | 13628 | 27256 | 16236 | 38212 | 38212 | 38212 | 27318 | – | –
CPU Mark, per node totaled:  76424 | 54636 | 2736196 | 56408 | 68199 | 94035 | 225684 | 3311582 | 13628 | 27256 | 129888 | 305696 | 496756 | 76424 | 27318 | 1036082 | 4388548
Max GPU GFLOPS [10]:  K20X: 3950; K20: 3520; K20m: 3950
Total GPU GFLOPS:  K20X: 15800; K20: 14080; K20m: 7900
• POWER •
Watts per node:  – | – | 300 | 1975 | 1500 | 1600 | 1620 | – | – | – | – | – | – | – | – | – | –
Total Watts:  – | – | 94200 | 1975 | 4500 | 8000 | 19440 | – | – | – | – | – | – | – | – | – | –
MFLOPS per Watt:  –
• ETC •
Access model:  Kong-5: Reserved [12]; Kong-6: Public; Kong-7: Public; Kong-8: Public; Kong-9: Reserved [12]; Kong-10: Partly [13]; Kong-11: Reserved [14]; Phi: Public; Gorgon: Reserved [1]; Stheno: Reserved [1]
Head node AFS client:  Yes | Yes | Yes
Compute nodes AFS client:  Yes | Yes | Yes
In-service date:  Aug 2013 | Oct 2013 | Mar 2015 | Aug 2015 | Nov 2016 | Aug 2017 | Sep 2017 | – | Oct 2010; upgraded Sep 2017 [15] | Aug 2010 | Nov 2011 | Sep 2012 | Aug 2013 | May 2015 | Jun 2015 | – | –
Node numbers:  147-150 | 151, 152 | 100-111, 200-401, 500-599 | 153 | 402-404 | 412-416 | 417-428 | – | – | – | 0-7 | 8-15 | 16-27 | 30-31 | 32 | – | –

Notes (last modified 9-Nov-2017 18:31):
[1]  Access to Stheno and Gorgon is restricted to Department of Mathematics use.
[2]  See https://ist.njit.edu/tartan-high-performance-computing-initiative
[3]  A small number of Kong nodes are reserved by specific faculty.
[4]  All active systems are 64-bit.
[5]  Phi is a virtual machine running on VMware, provisioned as shown here; the actual underlying hardware is not relevant to these specifications.
[6]  A small portion of compute nodes' local disk is used for AFS cache and swap; the remainder is /scratch available to users.
[7]  Shared scratch is writable by all nodes via NFS (/nscratch), or locally mounted on the one-node systems (Phi, Gorgon).
[8]  Core counts do not include hyperthreading.
[9]  Most GFLOPS figures are estimated as cores × clock × (FLOPs/cycle), with 3.75 FLOPs/cycle conservatively assumed instead of the typical 4.0 (see the worked example following these notes).
[10]  Peak single-precision floating point performance as per the manufacturer's specifications.
[11]  PassMark CPU Mark from http://cpubenchmark.net/ or https://www.cpubenchmark.net/multi_cpu.html
[12]  Access to Kong-5 and Kong-9 is reserved for Dr. C.Dias and designees.
[13]  Access to Kong-10 is reserved for Data Sciences faculty and students, contact ARCS@NJIT.EDU for additional information.
[14]  Access to Kong-11 is reserved for Dr. G.Gor and designees.
[15]  Phi was upgraded; it originally had 1 CPU with 4 cores and 32 GB RAM.
[16]  This document is also available as ODS and CSV at http://web.njit.edu/all_topics/HPC/specs/specs.ods and http://web.njit.edu/all_topics/HPC/specs/specs.csv
[17]  Cappel.njit.edu was a 16-node, 32-bit cluster put in service in Feb 2006. It was taken out of service in Jan 2010 and its users/services migrated to reserved nodes on Kong.
[18]  Hydra.njit.edu was a 76-node, 960 GFLOP cluster put in service in Jun 2006. It was taken out of service in Sep 2013 and its users/services migrated to Stheno.
[19]  Kong-1 was a 112-node, 800 GFLOP cluster put in service in Apr 2007. It was taken out of service in Jan 2015 when Kong-7 was deployed. If you require the old specifications please see https://web.njit.edu/all_topics/HPC/specs/index.2014.php.
[20]  Kong-2, Kong-3, and Kong-4 (total 41 nodes / 2409 GFLOP / installed 2008 & 2013) were taken out of service in 2017. If you require the old specifications please see https://web.njit.edu/all_topics/HPC/specs/index.2016.php.
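
Worked example for notes [9] and [11]: the short Python sketch below recomputes the derived Kong-11 figures (total CPU cores, total RAM, Max GFLOPS, and CPU Mark per node totaled) from the per-node values listed in the Kong-11 column. It is a minimal illustration under the stated assumptions, not part of the official specification; the variable names are ours, and the 3.75 FLOPs/cycle factor is the conservative estimate from note [9].

# Recompute the derived Kong-11 figures from its per-node specifications.
# All input values are taken from the Kong-11 column of the table above;
# flops_per_cycle is the conservative assumption from note [9].
nodes = 12                   # Kong-11 node count
cpus_per_node = 2            # Intel Xeon E5-2630 v4 sockets per node
cores_per_cpu = 10
clock_ghz = 2.2
ram_per_node_gb = 256
cpu_mark_per_node = 18807    # PassMark CPU Mark per node (note [11])
flops_per_cycle = 3.75       # conservative, instead of the typical 4.0

cores_per_node = cpus_per_node * cores_per_cpu           # 20
total_cores = nodes * cores_per_node                     # 240
total_ram_gb = nodes * ram_per_node_gb                   # 3072
max_gflops = total_cores * clock_ghz * flops_per_cycle   # 1980.0
cpu_mark_totaled = nodes * cpu_mark_per_node             # 225684

print("Total CPU cores:           ", total_cores)
print("Total RAM, GB:             ", total_ram_gb)
print("Max GFLOPS (note [9]):     ", max_gflops)
print("CPU Mark, per node totaled:", cpu_mark_totaled)

The printed values (240 cores, 3072 GB, 1980.0 GFLOPS, CPU Mark 225684) match the Kong-11 column of the table.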
Last Updated: August 24, 2017