The VPP300 is an air-cooled, CMOS-based supercomputer from Fujitsu with 13 processing elements (PEs) providing a peak speed of almost 30 Gflops. Each PE has a clock cycle time of 7 ns and contains both a Vector Unit and a Scalar Unit. The Vector Unit can achieve a peak speed of 2.2 Gflops, while the Scalar Unit can reach approximately 100 Mflops. The PEs are connected by a full crossbar network, making every PE equidistant from every other, with a peak bi-directional bandwidth of 570 MBytes per second and an achievable latency of about 5 microseconds. Each PE also has a Data Transfer Unit (DTU) providing direct memory access communication to the interprocessor network.
The Fujitsu VPP300 has 14 GBytes of memory in total: 5 PEs have 2 GBytes each and the remaining 8 PEs have 0.5 GBytes each. Two of the PEs also provide I/O capability.
The VPP300 has two large, fast filesystems of 32 GBytes each that are used for production work, and two further filesystems of 18 GBytes and 16 GBytes used for short-term storage between production jobs. User home space is controlled by filesystem quotas, with home disk space totalling 34 GBytes.
Core System software includes:
- vectorizing and parallelizing Fortran90 compilers
- a vectorizing C compiler
- vectorized and parallelized SSL2 mathematical subroutine libraries
- the PVM message passing library
- the MPI message passing library
- Analyzer, a profiling tool
- NQS batch queuing system with ANUSF RASH project accounting
- VPP Workbench, a GUI providing an interface to compilers, tools (debugger, sampler, documentation) and queue submission
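Batch work under NQS is typically submitted as a shell script carrying embedded queue directives. The sketch below is illustrative only: the `# @$-` prefix is the traditional NQS embedded-directive marker, but the queue name, resource limits and program name here are hypothetical and the exact syntax varies between NQS installations.

```shell
#!/bin/sh
# Hypothetical NQS job script (directive syntax and limits are
# illustrative; consult the local NQS documentation for specifics).
# @$-q vpp            request a (hypothetical) production queue
# @$-lT 1:00:00       per-request CPU time limit
# @$-lM 512mb         per-request memory limit

# Run the user's executable (placeholder name).
./myprog
```

Such a script would be handed to the queuing system with `qsub jobscript`, and the state of the queues monitored with `qstat`.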
To improve utilisation of the VPP system, a Sun UltraSPARC (covpp) is tightly coupled to the VPP: filesystems are cross-mounted, and common tasks such as GNU Emacs editing are moved over to covpp. During the year a Fujitsu cross-compiler for Solaris was also installed on covpp, further improving both the performance of the VPP and its ease of use.
System documentation is made available to all VPP users via our web server. Access is granted either by IP address or, for remote sites, by password.
The PowerChallenge system at ANUSF is a CMOS shared-memory multiprocessing (SMP) supercomputer with twenty 195 MHz MIPS R10000 superscalar processors, each with a 2 MByte cache, giving a peak speed of 7.8 Gflops. The machine also has 2 GBytes of 8-way interleaved memory, and the processors are connected to a backplane bus with a peak transfer capacity of 1.2 GBytes per second.
The PowerChallenge has a 48 GByte RAID filesystem used for production work. In addition, three short-term filesystems of 25 GBytes are used for storage between production jobs. User home space is controlled by filesystem quotas, with home disk space totalling 27 GBytes.
The PowerChallenge runs SGI's 64-bit IRIX 6.2 operating system. System software includes:
The Alpha-Linux Cluster (named wyrd) was installed at the end of October. It consists of twelve 533 MHz Alpha LX164 nodes, each with 256 MBytes of memory and 5.3 GBytes of IDE disk, connected by an HP Fast Ethernet switch. An extra node provides compile and file-serving facilities, using SCSI disks for home directories. Each node has a local scratch partition (/scratch) on the IDE disk, useful for out-of-core processing, and a short-term data directory (/short) accessible from all nodes.
The Alpha was chosen for its price/performance in floating-point computations: although only marginally more expensive than a 400 MHz Pentium II, the Alphas perform 1.5 to 2 times faster on floating-point intensive codes. The Alpha-Linux Cluster uses the freely available Linux operating system. System software includes:
- EGCS compilers for Fortran, C and C++, tuned for the Alpha hardware
- the Compaq/Digital math library (CPML)
- the MPICH MPI message passing library
- a batch queuing system
The High Performance Computing Laboratory (HPCLab) is a teaching laboratory used to introduce students to the concepts of High Performance Computing. Until the end of 1998 it was managed by Facility staff. It is located in the CSIT building as part of the Facility's joint Computational Science and Engineering education program with the Department of Computer Science.
The HPCLab uses Silicon Graphics hardware, and the software and user environment is set up to closely resemble the configuration of the production machines. The lab is served by a Challenge S server with a 200 MHz MIPS R4400 processor and 64 MBytes of memory. A total of 12 GBytes of disk is used for home directories and for short-term scratch space.
Ten SGI Indy workstations with 133 MHz MIPS R4600 processors and 64 MBytes of memory are used mainly for interactive work. An SGI O2 with 96 MBytes of memory is also available for computationally intensive work. Each workstation has a 24-bit graphics card installed. The workstations and server are connected by a 10 Mbits/second Ethernet switch, with the server attached via a 100 Mbits/second Fast Ethernet downlink.
Since the HPCLab is used to teach students the techniques of High Performance Computing, all core system software installed in the HPCLab reflects that installed on the Facility's larger systems.
The Mass Data Storage System (MDSS) is composed of several components. An eight-processor Sun SPARCcenter 2000 with 450 MBytes of memory acts as the data server and is accessible to users over the network as the system store. A total of 120 GBytes of disk is used as an intermediate cache in front of the tape system. Migration of data between the disk cache and the tape system is managed by LSC's SAM-FS migration software, running on Solaris 2.6. In 1998 a short-term scratch area of 20 GBytes was also added.
The MDSS has a total potential storage capacity of 300 TBytes. The current configuration of the tape system installed in the StorageTek 4400 robotic tape silo (6,000-tape capacity) is:
In 1999 the server is expected to be replaced, the tape system upgraded with a larger number of drives, and the magnetic disk cache on the data server significantly increased, to meet the growing demands on the system.
Software installed on the VPP, PC and HPCLab
Full details of software on ANUSF machines can be found on the web at http://anusf.anu.edu.au/software/.