ANU VPP300 Configuration

PE labels and uses

The thirteen processing elements (PEs) on the ANU VPP300 are divided into:
primary PE (PPE):
  • known as PE0, vpp or vpp00 (or vpp.anu.edu.au on the internet)
  • the PE users log on to
  • operates primarily in time-shared (TS) mode
  • runs batch jobs at lower priority but no parallel jobs
  • has disks attached

secondary PEs (SPEs):
  • known as PE1 to PE11 (or vpp01 to vpp11)
  • operate primarily in batch (BT) mode, no interactive access
  • can participate in parallel jobs
  • no disks attached

IPL master PE (IMPE):
  • known as PE12 (or vpp12)
  • operates primarily in batch (BT) mode, no interactive access
  • can participate in parallel jobs
  • has disk attached
  • also considered an SPE

From the operating system's point of view, the processors are divided into two IPL groups. Within an IPL group, only one PE has disks and can perform IO and certain system operations; all other members of the group mount their filesystems via this IO PE. On the ANU VPP300, we have:
  • IPL group 0 (IPL0) consisting of PE0 to PE6 with PE0 providing IO and system services to other PEs

  • IPL group 1 (IPL1) consisting of PE7 to PE12 with PE12 providing IO and system services to other PEs
Note that from the user's point of view, all filesystems are "globally visible" (accessible from any PE with no special operations).
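
For example, the PE on which a process is running can be checked with standard UNIX commands. This is a minimal sketch only; the node names are those listed above:

      # interactively, after logging in to vpp.anu.edu.au:
      uname -n              # prints the name of the login PE, e.g. vpp00

      # the same command inside an NQS batch job reports whichever
      # secondary PE the job was scheduled on (vpp01 - vpp12), but
      # $HOME and all other filesystems look the same from every PE:
      uname -n
      ls $HOME              # same files, regardless of the PE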

Execution modes

Processes may execute on VPP processors in one of several modes. In order of decreasing CPU priority, these are:

System (SYS) mode:
  • A limited number of system processes run at the highest priority
Synchronous Parallel (SP) mode:
  • Fixed percentage of CPU with large synchronized timeslices
  • Allows parallel jobs to execute efficiently while sharing with a mixture of BT jobs and TS processes
Simplex mode:
  • dedicated use of CPU
  • special case of SP mode with 100% of CPU
SP mode is a property of a batch job -- the NQS queue the job is running in must allow SP-mode jobs (see the example script below)

Time Sharing (TS) mode:
  • Normal UNIX scheduling mode for scalar processes
Batch (BT) mode:
  • scheduling mode for normal batch jobs
  • fairly large timeslices (0.1 seconds)
TS and BT modes can be given relative percentages of the available CPU on a per-PE basis
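
To make the mode distinction concrete, the sketch below shows the general shape of an NQS script for a parallel job that needs SP mode. The queue name and time limit are placeholders only; the real queue names, resource limits and parallel-job options are given in the VPP Userguide:

      #  sp_job  --  sketch of an NQS request for an SP-mode parallel job
      #@$-q  pqueue          # hypothetical queue that allows SP-mode jobs
      #@$-lT 1:00:00         # per-request CPU time limit (placeholder value)

      cd $HOME/work
      ./parallel.exe         # vector executable runs across the allocated PEs

The request is submitted from PE0 with "qsub sp_job"; as noted above, the queue it is submitted to must allow SP-mode jobs.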

Memory

Memory on each PE is segmented into:

scalar:
  • where normal Unix processes are resident
  • where binaries compiled with cc execute
  • this memory is virtual and swappable
  • the ANU VPP300 has been configured with as little scalar memory as is reasonable (note that the kernel, system functions, Unix commands, etc. all use scalar memory)
  • do not run large scalar processes in batch jobs - you may crash the system

vector:
  • where vector processes (those compiled with frt or vcc) are resident under TS mode
  • not virtual or swappable (??)
  • the ANU VPP has a small amount of vector memory on PE0 for development work

PM:
  • where vector processes under the batch system (BT, SP or simplex mode) run
  • memory is not virtual and can only be swapped in units of whole jobs (takes minutes)
  • scalar processes can be forced into PM memory using the vector command. For example, using the following line in an NQS script
         vector frt big.f -o big.exe 
    will cause PM memory to be used.
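
Putting this together, a complete batch request that compiles and runs a vector program entirely in PM memory might look like the sketch below (the queue name is a placeholder; vector and frt are the commands described above):

      #@$-q  bqueue                   # hypothetical batch queue name

      cd $HOME/work
      vector frt big.f -o big.exe     # compile step forced into PM memory
      ./big.exe                       # vector executable runs in PM under the batch system

Without the vector wrapper, the frt compilation (itself a scalar process) would run in the PE's scalar memory instead.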

ANU VPP300 processor configuration

The current configuration of the PEs on the ANU VPP300 is:

                      CPU split              Memory
          PE           TS   BT     scalar   vector     PM
      _____________________________________________________

       PE0 (PPE)       90   10      256MB    128MB    128MB  

       PE1 - PE6       10   90       96MB      -      416MB

       PE7 - PE11      10   90      128MB      -     1920MB

       PE12 (IMPE)     90   10      192MB      -      320MB

Note that these allocations may change over time - in the future, please check this page or use man anu_limits to find the current configuration.