Major Installed Hardware and Software

 

Fujitsu VP2200

The VP2200, which had a peak speed of 1.25 Gflops, 512 Mbytes of memory and 1 Gbyte of secondary memory, was decommissioned in July. It had been in operation since 1991.

 

Fujitsu VPP300

The initial installation of the Fujitsu VPP300 began in May. The system uses CMOS technology, is air-cooled, and occupies around 10 per cent of the floor space of the VP2200. The final configuration in 1996 had 13 processors providing a peak speed of almost 30 Gflops (about 2.3 Gflops per processor) and 6.5 Gbytes of memory. In early 1997 it is expected to be upgraded to 14 Gbytes of memory and additional I/O capacity.

Each of the processing elements (PEs) has a uniform clock cycle time of 7 ns (about 143 MHz) and consists of:

a scalar unit (SU)

a vector unit (VU)

memory (MSU)

A full crossbar network connects all processors, making them equidistant from one another, with a peak bi-directional bandwidth of 570 Mbytes/s and an achievable latency of about 5 microseconds.
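Taken together, these figures fit the usual linear communication-cost model, t(n) = latency + n/bandwidth. The short C sketch below applies that model to the quoted crossbar figures to estimate point-to-point transfer times; the model itself and the sample message sizes are illustrative assumptions, not vendor benchmarks.

    #include <stdio.h>

    /* Illustrative linear cost model t(n) = latency + n/bandwidth,
       using the crossbar figures quoted above; an assumption for
       illustration, not a measured benchmark. */
    #define LATENCY_S    5.0e-6     /* about 5 microseconds */
    #define BANDWIDTH_BS 570.0e6    /* 570 Mbytes/s peak */

    static double transfer_time(double bytes)
    {
        return LATENCY_S + bytes / BANDWIDTH_BS;
    }

    int main(void)
    {
        double sizes[4];
        int i;
        sizes[0] = 1.0e3; sizes[1] = 1.0e5;
        sizes[2] = 1.0e6; sizes[3] = 1.0e7;
        for (i = 0; i < 4; i++)
            printf("%10.0f bytes: %8.1f microseconds\n",
                   sizes[i], transfer_time(sizes[i]) * 1.0e6);
        return 0;
    }

On this model a 1 Mbyte message takes roughly 5 + 1754, or about 1760 microseconds, so large transfers are bandwidth-dominated, while kilobyte-sized messages are dominated by the 5 microsecond latency.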

The system has 100 Gbytes of user disk, including 48 Gbytes of RAID disk array used mainly for scratch space during production work. Network access is through TCP/IP connections to the campus Ethernet network. There is an FDDI connection to the mass data storage system, the Silicon Graphics Power Challenge and the Visualization Laboratory.

The VPP300 provides a modern programming environment based on the latest standards in languages and parallel libraries, as well as useful development tools. Both the emacs and microemacs editors are provided, along with other GNU tools. System software includes:

 

A Sun UltraSPARC is available as a file server and development platform, with the VPP Workbench software providing a development environment similar to that on the VPP300. Much of the documentation is either on-line on the Facility's Web pages or accessible via the Olias on-line documentation viewing system. Solaris versions of the compilers, debugger, sampler, source analyser tool and on-line Fortran 90 books are available. This software is also available for installation on users' local Sun workstations.

 

Silicon Graphics Power Challenge

The ACSF Power Challenge system at ANU has twenty R10000 processors, each with a 2 Mbyte cache. The peak speed of the system is 8 Gflops (400 Mflops per processor). There are 2 Gbytes of shared memory and 70 Gbytes of disk array.
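Because the memory is shared, all twenty processors address the same 2 Gbytes directly, so parallelism on this machine is typically expressed at loop level with threads rather than with message passing. The C sketch below illustrates that style with a POSIX-threads reduction over a shared array; it is a generic illustration only (on the Power Challenge itself one would more commonly use the SGI compiler's parallelisation support), and the thread count and problem size are arbitrary.

    #include <stdio.h>
    #include <pthread.h>

    #define N        1000000   /* arbitrary problem size */
    #define NTHREADS 4         /* illustrative; the machine has 20 CPUs */

    static double data[N];            /* shared: every thread sees one copy */
    static double partial[NTHREADS];  /* one slot per thread, so no locking */

    static void *sum_slice(void *arg)
    {
        int t = *(int *)arg;
        int lo = t * (N / NTHREADS);
        int hi = (t == NTHREADS - 1) ? N : lo + N / NTHREADS;
        double s = 0.0;
        int i;
        for (i = lo; i < hi; i++)
            s += data[i];
        partial[t] = s;
        return NULL;
    }

    int main(void)
    {
        pthread_t th[NTHREADS];
        int id[NTHREADS];
        double total = 0.0;
        int i;

        for (i = 0; i < N; i++)
            data[i] = 1.0;
        for (i = 0; i < NTHREADS; i++) {
            id[i] = i;
            pthread_create(&th[i], NULL, sum_slice, &id[i]);
        }
        for (i = 0; i < NTHREADS; i++) {
            pthread_join(th[i], NULL);
            total += partial[i];
        }
        printf("sum = %.0f\n", total);
        return 0;
    }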

System software includes:

 

 

Software

Although package usage is not high at ANU, since many researchers are developing their own codes, the packaged software base is gradually growing and now includes the following. Availability of a package or library on the VPP is indicated by v, and on the SGI-PC by s. Parentheses indicate packages which are planned to be installed.
Chemistry:
CADPAC s, ACES II v, GAMESS (v), MOPAC 93 (v), AMPAC 2.1 (v), GAUSSIAN 94 sv, CCP4 s, MNDO94 (v), SPARTAN v, DISCOVER v, MOLPRO 96 s(v)

Mathematics:
IMSL 1.1 s, NAG 15 sv, ELLPACK (s), LAPACK (v), BLAS v, ITPACKV s, SSL2 v, Maple s, Mathematica s, Matlab s

Graphics:
PGPLOT sv, NCAR sv, HDF s, netCDF v, AVS s

Biological Sciences:
AMBER sv, X-PLOR sv

Parallel Tools:
MPI sv, PVM sv, HPF s(v), BLACS (v), SCALAPACK (v)

Engineering:
Strand6 v, Abaqus (v)

Environmental Sciences:
CCM3 v
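MPI and PVM, listed under Parallel Tools above, provide the message-passing interface on both the VPP and the SGI-PC. As a minimal illustration of the MPI style (standard MPI-1 calls only; the program is generic rather than specific to either system):

    #include <stdio.h>
    #include <mpi.h>

    /* Each process reports its rank; the ranks are then summed
       with a collective reduction on process 0. */
    int main(int argc, char **argv)
    {
        int rank, size, sum;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("process %d of %d\n", rank, size);

        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum of ranks = %d (expect %d)\n",
                   sum, size * (size - 1) / 2);

        MPI_Finalize();
        return 0;
    }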

 

Massive Data Storage System

The robotic cartridge-tape data silo supplied by StorageTek was in its third year of use in 1996. The ACS4400 data silo initially had a capacity of approximately 2 Tbytes and is equipped with four 36-track tape drives, plus two more on loan from StorageTek. These are expected to be upgraded with two helical-scan 'Redwood' drives, with speeds in excess of 10 Mbytes per second and tape capacities that will take the total possible storage into the 300 Tbyte range. Unfortunately, delays in the supply of upgraded file migration software prevented delivery of the Redwood system in 1996, which regrettably required the Facility to continue to dampen demand for further growth in usage of the system.

Two Sun computers act as file servers to the data silo. There are FDDI network connections from these Suns to the VPP and SGI-PowerChallenge and other systems.

The Facility has been actively seeking alternative file migration software by studying the market and exchanging experiences with other customers. A potential candidate was evaluated, and it is hoped that this can replace the original software in early 1997 and allow us to take delivery of the Redwood drives. StorageTek is working closely with the University in pursuing our original objectives: it has provided a Sun SPARCstation 20 to replace the old Sun 4/690 file server, helping to reduce the bottlenecks in the current migration software, and has continued to loan the University a 40 Gbyte RAID disk array.

Despite the difficulties and a deliberate policy of dampening demand, system usage grew considerably, with the system accommodating 1.7 million files and 724 Gbytes of data. This represents growth of 13% in the number of files and 21% in the amount of data. In addition to users of the supercomputer systems (and some specific projects from other ITS machines), 91 users were supported in special projects on the mass storage system.

 

Connection Machine

In 1996 the Facility continued to support users of the Connection Machine, although access was no longer governed by the Supercomputer Time Allocation Committee after March. The CM-5, which was installed in June 1992 with 32 nodes, has a peak speed of 4 Gflops and 1 Gbyte of memory. At the ACSF board meeting in December it was agreed to relocate the system to Adelaide, with a view to building a larger system which could run for an extended time. The larger system will be based on the original systems operated by each of the ACSF partners. This is expected to be formalised in early 1997.