ANU Supercomputer Facility - Annual Report 1997

Major Installed Hardware and Software

 

Fujitsu VPP300

The VPP300 is an air-cooled, CMOS-based supercomputer from Fujitsu, with 13 processing elements (PEs) providing a peak speed of almost 30 Gflops. Each PE has a uniform clock cycle time of 7 ns and contains both a Vector Unit and a Scalar Unit. The Vector Unit can achieve a peak speed of 2.2 Gflops, while the Scalar Unit can reach approximately 100 Mflops. The PEs are connected by a full crossbar network (so all PEs are equidistant from one another) with a peak bandwidth of 570 MBytes per second bi-directional and an achievable latency of about 5 microseconds. Each PE also has a Data Transfer Unit (DTU) providing direct memory access communication over the interprocessor network.
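As a rough consistency check (not taken from the report itself), the quoted figures fit together arithmetically; in the C sketch below, the results-per-cycle value is inferred from the quoted clock and vector peak rather than stated in the source:

    /* Peak-performance arithmetic for the VPP300 as described above.
     * The results-per-cycle figure is derived here, not quoted. */
    #include <stdio.h>

    int main(void)
    {
        double cycle_ns  = 7.0;              /* quoted clock cycle time  */
        double clock_ghz = 1.0 / cycle_ns;   /* ~0.143 GHz (143 MHz)     */
        double pe_gflops = 2.2;              /* quoted vector peak speed */
        int    pes       = 13;

        printf("clock rate       : %.0f MHz\n", clock_ghz * 1000.0);
        printf("results per cycle: %.1f\n", pe_gflops / clock_ghz);
        printf("aggregate peak   : %.1f Gflops\n", pes * pe_gflops);
        return 0;
    }

Run as written, this reproduces the figures in the text: a 143 MHz clock, roughly 15 to 16 vector results per cycle, and 13 x 2.2 = 28.6 Gflops, i.e. "almost 30 Gflops".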

In January 1997, the Fujitsu VPP300 was upgraded to 14 GBytes of memory, with five PEs having 2 GBytes each and the remaining eight PEs 0.5 GBytes each. The upgrade also included the enhancement of a second PE to provide additional I/O capability. In addition, the UNIX operating system was upgraded to Fujitsu's UXP/V V10L20.

The VPP300 has two very fast, large filesystems of 32 GBytes and 20 GBytes that are used for production work. Two further filesystems of 16 GBytes and 10 GBytes are used for short-term storage between production jobs. User home space is controlled by filesystem quotas, with home disk space totalling 13 GBytes.

Core system software includes:

To improve VPP system utilisation, a Sun UltraSPARC (covpp) is tightly coupled to the VPP. This includes cross-mounting of filesystems and moving common tasks, such as GNU Emacs editing, over to covpp. In 1998 a Fujitsu cross-compiler for Solaris is expected to be installed on covpp, which will further improve VPP performance and ease of use.

Fujitsu system documentation, originally accessible through the proprietary document reader OLIAS, was successfully converted to HTML and published on the ANUSF web server. The documentation was made available to all VPP users, either by IP-address access or by password for remote sites.

Silicon Graphics PowerChallenge

The ACSF PowerChallenge system at ANUSF is a CMOS symmetric multiprocessing (SMP) supercomputer with twenty 195 MHz MIPS R10000 superscalar processors, each with a 2 MByte cache. The PowerChallenge has a peak speed of 7.8 Gflops. The computer also has 2 GBytes of 8-way interleaved memory. The processors are connected to a backplane bus with a peak transfer capacity of 1.2 GBytes per second.
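The 7.8 Gflops figure follows directly from the processor count and clock, assuming the R10000 completes one floating-point add and one multiply per cycle (a standard figure for this processor, not stated in the report):

    /* PowerChallenge peak: 20 CPUs x 195 MHz x 2 flops/cycle.
     * The 2 flops/cycle (1 add + 1 multiply) is an assumption
     * about the R10000, not a number quoted in the report. */
    #include <stdio.h>

    int main(void)
    {
        double mhz             = 195.0;
        int    flops_per_cycle = 2;
        int    cpus            = 20;

        double per_cpu_mflops = mhz * flops_per_cycle;   /* 390 Mflops */
        printf("peak: %.1f Gflops\n", cpus * per_cpu_mflops / 1000.0);
        return 0;
    }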

The PowerChallenge has a 48 GByte RAID filesystem used for production work. In addition, three short-term filesystems of 25 GBytes are used for storage between production jobs. User home space is controlled by filesystem quotas, with home disk space totalling 27 GBytes.

The PowerChallenge runs SGI's 64-bit IRIX 6.2 operating system. System software includes:

High Performance Computing Laboratory

The High Performance Computing Laboratory (HPCLab) is a teaching laboratory used to introduce students to the concepts of High Performance Computing. It is managed by Facility staff and is located in the CSIT building as part of the Facility's joint Computational Science and Engineering education program with the Department of Computer Science.

The HPCLab uses Silicon Graphics hardware and is configured to closely resemble the production machines. The lab is served by a Challenge S server with a 200 MHz MIPS R4400 processor and 64 MBytes of memory. A total of 8 GBytes of disk is used for home directories and short-term scratch space.

Nine Indy workstations, each with a 133 MHz MIPS R4600 processor and 64 MBytes of memory, are used mainly for interactive work. Each workstation has a 24-bit graphics card installed. The workstations and server are connected using a 10 Mbits/second Ethernet switch, with the server attached by a Fast Ethernet downlink at 100 Mbits/second.

Since the HPCLab is used to teach students the techniques of High Performance Computing, all core system software installed in the HPCLab reflects that installed on the Facility's larger systems.

Software installed on the VPP, PowerChallenge and HPCLab

In the following table, availability of packages and libraries on the VPP is indicated by a v, on the PowerChallenge by an s, and in the HPCLab by an h.

Chemistry
    CADPAC 6      s
    ACES II       sv
    GAMESS        sv
    MOPAC 93      v
    AMPAC 2.1     v
    GAUSSIAN 94   svh
    XPLOR 3.8     s
    MOLPRO 96     s
    CCP4          s
    MNDO04        v
    SPARTAN 5     v
    DISCOVER      v

Mathematics
    IMSL 1.1      s
    NAG 17        sv
    ELLPACK       s
    LAPACK        v
    BLAS          v
    ITPACKV       s
    SSL2          v
    VECFEM        sv
    Maple         sh
    Mathematica   sh
    Matlab        sh
    BLACS         v

Graphics
    AVS-5.3       sh
    AVS-Express   sh
    PGPLOT        sv
    NCAR          sv
    GNUPlot       sh
    Houdini 1.2   sh
    IDL           h

Biological
    AMBER         sv
    X-PLOR        sv

Code Development
    CVS           sv
    emacs, xemacs sh

Engineering
    VECFEM        sv
    Strand6       sh
    Fluent        h

Environmental Sciences
    CCM3          v

Mass Data Store

The Mass Data Store System (MDSS) is composed of several components. An eight-processor Sun SPARCcenter 2000 with 450 MBytes of memory is used as the data server and is accessible to users over the network as the system store. A total of 100 GBytes of disk is used as an intermediate disk cache for the tape system. The migration of data between the disk cache and the tape system is managed by LSCI's SAM-FS migration software, running on Solaris 2.6.
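SAM-FS itself is proprietary, but the policy it automates is easy to sketch. The toy C program below illustrates a generic high/low watermark migration pass over a disk cache; the thresholds and the simulated release step are invented for this illustration and are not SAM-FS code:

    /* Generic high/low watermark cache migration, illustrating the
     * kind of policy an HSM such as SAM-FS automates. All numbers
     * here are hypothetical. */
    #include <stdio.h>

    #define HIGH_WATER 0.90   /* start releasing cached copies here */
    #define LOW_WATER  0.70   /* stop once usage falls to this      */

    static double usage = 0.93;   /* simulated fraction of cache in use */

    /* Simulated release of the least-recently-used file that already
     * has a tape copy; frees 2% of the cache in this toy model. */
    static int release_lru_archived_file(void)
    {
        usage -= 0.02;
        return 0;
    }

    int main(void)
    {
        if (usage >= HIGH_WATER)
            while (usage > LOW_WATER && release_lru_archived_file() == 0)
                ;
        printf("cache usage after migration pass: %.0f%%\n", usage * 100.0);
        return 0;
    }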

Over the past 12 months the MDSS has been significantly improved with the addition of two very fast, high-density Redwood drives and two additional Timberline drives. This has increased the MDSS capacity by a factor of 100, to a potential storage of 300 TBytes. The current configuration of the tape system installed in the StorageTek 4400 Robotic Tape Silo (6,000-tape capacity) is:

In 1998 the server is expected to be upgraded by increasing both the memory and the magnetic disk cache on the data server to meet the growing demands on the system.
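The quoted 300 TBytes is consistent with the silo's 6,000-tape capacity if Redwood media are assumed throughout; the 50 GBytes-per-cartridge figure in this sketch is an assumption of the example, not a number from the report:

    /* Sanity check on the 300 TByte potential capacity quoted above.
     * The 50 GBytes per Redwood cartridge is assumed, not quoted. */
    #include <stdio.h>

    int main(void)
    {
        int    slots       = 6000;   /* quoted silo tape capacity */
        double gb_per_tape = 50.0;   /* assumed Redwood cartridge */

        printf("potential capacity: %.0f TBytes\n",
               slots * gb_per_tape / 1000.0);
        return 0;
    }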

Further details of the MDSS and associated work are given elsewhere in this report.

Visualisation Laboratory

The Visualisation Laboratory (VizLab) contains high end graphics and video production hardware and software.

A Silicon Graphics Onyx RealityEngine2 (RE2) workstation with dual 150 MHz MIPS R4400 processors is used for high-end graphics work. This system has 256 MBytes of 2-way interleaved memory, 13 GBytes of production RAID disk, and 8 GBytes of home directory space.

To produce high-quality video, a Silicon Graphics Indigo2 High Impact workstation with a 200 MHz MIPS R4000 processor and 256 MBytes of memory is used. A Ciprico disk array (30 GBytes) provides real-time video playback, and a further 16 GBytes of RAID disk is provided for production work.

In addition to home-grown visualisation code, the following commercial packages are installed:

Further details of the VizLab and associated work are given elsewhere in this report.

High Bandwidth Networks

In January 1998 it is planned to move all production machines to a faster 100 Mbits per second FDDI connection to the ANU campus backbone, a 10-fold increase over the 10 Mbits per second connection currently in place. In particular, this will allow faster transfers of large datasets to the Mass Data Store, such as those from the MACHO project at Mount Stromlo.
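To make the 10-fold improvement concrete, the short C sketch below times a hypothetical 10 GByte transfer at both rates (the dataset size, and the decimal GByte-to-Mbit conversion ignoring protocol overhead, are assumptions of the example):

    /* Transfer time for a hypothetical 10 GByte dataset at the old
     * and new link rates, ignoring protocol overhead. */
    #include <stdio.h>

    static double seconds(double gbytes, double mbits_per_s)
    {
        return gbytes * 8.0 * 1000.0 / mbits_per_s;   /* 1 GByte = 8000 Mbits */
    }

    int main(void)
    {
        double dataset = 10.0;   /* GBytes, hypothetical */
        printf("at  10 Mbits/s: %6.0f s (~2.2 hours)\n", seconds(dataset, 10.0));
        printf("at 100 Mbits/s: %6.0f s (~13 minutes)\n", seconds(dataset, 100.0));
        return 0;
    }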

In addition, all ANUSF production machines will be connected to a fast 155 Mbits per second ATM switch to allow fast data transfers between the VPP, the Mass Data Store, the PowerChallenge and the Visualisation Laboratory. The ATM switch will also be directly connected to the ACT AARNet2 Regional Network Organisation (RNO) for faster transfers to other Australian academic sites.

Each host will be configured to allow data transfers to occur across the fastest network connection available.

