IST Support for Faculty Research

Areas of Support

NJIT provides support for researchers in the ways described below.

Read more about the IST division and the services offered to faculty.


High Performance Computing (HPC)

The hardware computational infrastructure at NJIT comprises clusters and shared-memory machines, some of which are public-access and others of which are restricted to users authorized by the departments that purchased the machines. Researchers can purchase dedicated nodes on the public-access cluster, kong.njit.edu.

Summary of HPC resources as of July 2015:

Hardware type                    Aggregate cores   Aggregate RAM (GB)
Public-access cluster                      2,852               23,024
Restricted-access cluster                    540                5,248
Restricted-access shared-memory              132                   96
Public-access GPU                         10,752                   24
Restricted-access GPU                     10,752                   24
Public-access Hadoop cluster                  32                  256

Detailed Specifications
Obtaining Access

HTCondor

HTCondor is open-source software that uses idle cycles on networked computers to run serial or parallel compute-intensive jobs. It is designed not to interfere with other jobs running on a computer, or with the experience of a user at the console of a machine on which HTCondor is enabled.

Current Implementation

  • 66 Linux computers in GITC 2315C and 2400
  • 2 test Windows computers in GITC 2305, to be expanded to approximately 60 computers in GITC 2305 and GITC 2302 by the end of this year

All researchers can have access to the HTCondor resources.

About HTCondor
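
As a sketch, a submit description file for a batch of serial jobs might look like the following; the program and file names are placeholders, and local pool policies may require additional settings.

    # sim.sub -- a minimal HTCondor submit description file (program and
    # file names are placeholders)
    universe     = vanilla
    executable   = my_sim
    arguments    = --trials 1000
    output       = my_sim.$(Process).out
    error        = my_sim.$(Process).err
    log          = my_sim.log
    request_cpus = 1
    queue 10

Submitting the file with "condor_submit sim.sub" queues 10 independent instances of the program; HTCondor matches each instance to an idle machine in the pool.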

Big Data Resources: Hadoop Cluster

Hadoop is an open-source software framework from the Apache Software Foundation for the distributed storage and distributed processing of very large datasets (big data), designed to run on computer clusters.

Hardware

  • 2 x IBM iDataPlex DX360 M4 nodes, each with:
    • 2 x Intel Xeon E5-2680 processors, 8 cores each
    • 128GB RAM

Software

  • Project Serengeti, an open-source project that automates the deployment and management of Apache Hadoop and HBase in virtual environments such as vSphere

  • vSphere Big Data Extensions, the commercial, supported version of Project Serengeti; the NJIT Hadoop cluster runs on top of this software

All researchers can have access to the Hadoop cluster.
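
As an illustrative sketch, a typical session stages data into HDFS and runs a MapReduce job. The paths and username below are placeholders, and the name of the examples jar varies with the installed Hadoop version.

    # Stage a local file into HDFS (paths and username are placeholders)
    hadoop fs -mkdir -p /user/ucid/input
    hadoop fs -put corpus.txt /user/ucid/input
    # Run the stock word-count example shipped with Hadoop
    hadoop jar hadoop-mapreduce-examples.jar wordcount /user/ucid/input /user/ucid/output
    # Inspect the result
    hadoop fs -cat /user/ucid/output/part-r-00000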

Faculty-managed Servers

In general, IST staff manage the servers located in the IST datacenters. These staff are equipped with the professional skills needed to manage these computers competently and securely.

There are cases in which faculty researchers have legitimate reasons for partial or complete management (i.e., being granted root privileges) of servers located in IST datacenters. These servers may be physical machines, or virtual machines (VMs) that are part of the IST infrastructure.

Root privileges are enabled by use of the "sudo" program. The extent of sudo privileges is determined on a case-by-case basis, taking into account the faculty member's experience and expertise, and whether the privileges are needed for the faculty member to use the server efficiently.
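
For example, a faculty member whose only administrative need is restarting a web service might receive a sudo grant scoped to that single command; the username and command below are hypothetical.

    # /etc/sudoers.d/jdoe -- hypothetical scoped grant: jdoe may run exactly
    # one command as root, and nothing else
    jdoe ALL=(root) /usr/bin/systemctl restart httpd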

Students are not given any root privileges on servers located in the IST data centers.

Researchers' needs must be balanced against the need for network and system security, the availability of VMs with the requested configuration, and the level of IST support needed for the server.

Researchers are discouraged from purchasing servers and data storage devices and connecting them to the network in their laboratories or offices; there is a long history of problems associated with this practice. Instead, researchers are encouraged to take advantage of the Tartan Computing Initiative for their computational and data storage needs. Doing so avoids the problems that usually accompany self-provisioned and self-managed servers:

  • Inadequate HVAC, power, and networking
  • Inadequate or non-existent data and system backups
  • Inadequate physical security
  • Inadequate systems administration, or loss of the student systems administrator
  • Inaccessibility of the server and/or storage from outside the NJIT network (i.e., no access from the Internet)
  • Limited or non-existent expandability


Base researcher allocations


On-premise computational resources


Off-premise computational resources


Data Storage

Big data and many HPC applications require very large amounts of disk storage. The need for storage is growing at an increasingly rapid rate, especially in the areas of genomics, bioinformatics, and big data.

Disk storage is provided in three modes:

  • Network
    • NFS
    • AFS
  • Local scratch disk on compute nodes, used for transient intermediate calculations
  • Shared scratch network disk, used for transient intermediate calculations

Both NFS- and AFS-mounted space, which is used for applications and for input and output files, is housed in the enterprise storage system managed by IST. Researchers are provided a base allocation of disk storage. Researchers whose needs exceed the base allocation can purchase disk space in the enterprise storage system.
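
Usage against an allocation can be checked from any machine where the space is mounted; for example (the AFS path below is a placeholder):

    # AFS: report quota and current usage for a directory (path is a placeholder)
    fs listquota /afs/cad.njit.edu/u/ucid
    # NFS: report per-user quota and usage on quota-enabled filesystems
    quota -s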

Both NFS and AFS space is backed up by the enterprise backup system.

Capacities of the NFS, AFS, and scratch space, as of June 2015:

Type               Capacity (GB)
NFS-mounted                5,165
AFS-mounted               28,860
Scratch, local           328,672
Scratch, shared            7,750


Software

A wide spectrum of scientific and engineering software, compilers, parallel programming libraries, and utilities is available for the HPC hardware infrastructure.

IST Academic and Research Computing Systems (ARCS) provides software support in the following areas:

  • Installation of compilers, applications, libraries, and utilities requested by users
  • Customized scripts to aid users in their use of HPC resources
  • Assistance in debugging and optimizing code
  • Assistance in getting applications to run
  • Assistance in running parallel code
  • Assistance in working with and managing big data
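
As a sketch of a typical session, assuming the clusters use an environment-modules setup as most HPC systems do (the module and program names below are hypothetical):

    # Load a compiler and an MPI library (module names are hypothetical)
    module load gcc openmpi
    # Build an MPI program and test it with a short 4-process run
    mpicc heat_solver.c -o heat_solver
    mpirun -np 4 ./heat_solver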


Networking

Network Overview

NJIT provides a robust wired and wireless high-speed data networking environment. The typical wired building network is supported by one or more 10Gb uplinks providing 1Gb or 100Mb connectivity to the desktop. The wireless network has 1,500 wireless access points. Off-NJIT network access to faculty workstations is available through VPN services.

The network is secured with internal and external firewalls providing multiple internal security zones for the IST datacenters, residence halls, administrative areas, classrooms, academic/research areas, the wireless network, and other secure applications. Typical intranet servers and workstations are given first-level incoming firewall security and full outgoing connectivity. Secure remote access is supplemented with software- and appliance-based VPN support.

Non-enterprise network devices are not supported. For more information, contact telecom@njit.edu.

Internet Access to NJIT Servers

NJIT recognizes that academic and research applications may require flexibility and Internet access beyond normal needs. Servers and applications requiring incoming Internet connectivity are provided that connectivity, consistent with the need to maintain network security, after passing a security scan.

Server administrators self-scan their servers and applications using NJIT's Nessus Vulnerability Scanner. The scan is done remotely by a server on the NJIT network.

Upon successfully passing the scan, the administrator sends the results to IST Telecom and Coresys, along with the server name and the TCP and/or UDP ports requested to be accessible from the Internet. The server is then provisioned with the requested Internet access.
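
One way for an administrator to confirm which ports a Linux server actually listens on, before listing them in the request, is the ss utility:

    # List listening TCP/UDP sockets numerically, with the owning process
    ss -tulnp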



Git Server

ARCS manages a Git server, open to all researchers, using GitLab for source code development.
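
Access follows the standard Git workflow; the server hostname and project path in this sketch are placeholders.

    # Clone a project from the GitLab server (hostname and path are placeholders)
    git clone https://gitlab.njit.edu/mygroup/myproject.git
    cd myproject
    # ...edit files, then record and publish the changes
    git add -A
    git commit -m "Describe the change"
    git push origin master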

Sharing Data

Data often needs to be shared among research project members. Methods of data sharing vary depending on whether the data is in AFS or NFS, and on whether project members are associated with NJIT or another institution.

Sharing data
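
For data in AFS, for example, sharing within NJIT is controlled with per-directory access control lists; the path and username in this sketch are hypothetical.

    # Grant NJIT user 'asmith' read (r) and lookup (l) rights on a directory
    # (path and username are hypothetical)
    fs setacl -dir /afs/cad.njit.edu/research/projectx -acl asmith rl
    # Verify the resulting ACL
    fs listacl /afs/cad.njit.edu/research/projectx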

External Services

There may be instances in which the computational and storage resources that NJIT can provide are not adequate for researchers' needs. In such cases, researchers may need to arrange to have those resources provided by off-campus sources, such as the XSEDE science gateway, or computational and storage cloud resources provided by various vendors. For advice in this area, please contact ARCS at arcs@njit.edu.

Researchers needing computational and storage resources sometimes purchase this hardware and locate it in laboratories and offices. The systems are usually managed by students. This practice is strongly discouraged by IST: in general, such systems are not provided the proper HVAC and power environment; are not physically secure; are not competently managed, leading to security problems; are not backed up; and are subject to abandonment when the students managing them leave NJIT.

Researchers needing computational and storage resources are encouraged to consult with ARCS at arcs@njit.edu.



Consultation and Guidance

Two staff members of Academic and Research Computing Systems (ARCS) devote almost all of their time to HPC and big data support, and a third devotes some time to it. Support includes:

  • Installing specialized hardware for individual researchers
  • Configuring and/or arranging the implementation of specialized networking
  • Assistance in selection and purchase of hardware
  • Consultation on software problems
  • Providing access to MySQL, Oracle, Git, and other services in the NJIT cloud
  • Assistance in using cloud HPC resources


Education

HPC and big data are fields in which both the software and hardware landscape changes rapidly. Academic and Research Computing Systems staff strives to keep current in these technologies, and to provide means of educating users in these areas.

To aid researchers in using HPC and big data resources, and to apprise users of current technologies and practices, these services are offered:

  • 2 to 3 workshops on the use of HPC and big data resources
  • HPC and big data wiki
  • Annual meeting between researchers using HPC and big data resources and Academic and Research Computing Systems staff


Last Updated: March 3, 2017