Grendel
Current Status

Grendel Users E-mail list

Grendel was retired in 2004 and is no longer in production.

A collaboration between the National Supercomputer Centre (NSC) at Linköping University and the Department of Information Technology (IIT) at Uppsala University made it possible to purchase and build Grendel, a Beowulf Class I system.

Grendel is located at IIT in Uppsala but will serve users from both NSC and IIT. The system will also be used as a testbed for projects in distributed computing and GRID technology.

Hardware


Grendel is a PC cluster, a pile of PCs, a Beowulf. It consists of 17 computer boxes connected by a Fast Ethernet network. The head of the cluster, the front end, has one AMD Athlon Thunderbird running at a clock frequency of 1 GHz, mounted on an ASUS A7V133 motherboard. The front end has 256 Mbyte of primary memory and a 60 Gbyte ATA disk. For communication, an Intel PRO/100+ NIC is connected to the cluster switch, and another NIC is installed for communication with the rest of the world.

The other 16 nodes have the same processor and amount of memory. The secondary storage consists of one 10 Gbyte ATA disk, and a Fast Ethernet NIC provides network connectivity to the other nodes in the cluster. The nodes, as well as the front end, were assembled by Advanced Computer Technology AB.

Only the front end has a connection to the rest of the world. The other nodes are connected to an HP ProCurve 2424M (24-port) Fast Ethernet switch.

Software


Most of the software on Grendel is open source.

Operating System

The operating system is Linux. Red Hat is the distribution we use, and the current version is 6.2. The currently running Linux kernel is 2.2.18, with additional modules for hardware monitoring.

Programming Environment

The following compiler suites are available on Grendel:

  • PGI Workstation, including compilers for Fortran 90, Fortran 77, C, C++, and HPF.
  • EGCS 1.1.2 (C, C++, and F77)
Related debugging and profiling tools are available.

Communication APIs

To easily utilize all boxes in a cluster, an efficient communication API is needed. The best-known APIs using the message-passing paradigm in parallel computations are MPI and PVM. Both are available on Grendel.

Two MPI implementations are installed and configured:

  • MPICH from Argonne National Laboratory.
  • LAM from Ohio Supercomputer Center.
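With MPICH, the nodes to run on are listed in a machine file, one hostname per line. A hypothetical fragment (the hostnames are illustrative, not the actual Grendel node names):

```
# machines.LINUX -- one hostname per line; mpirun starts one
# process per entry, cycling through the list as needed.
node01
node02
node03
node04
```

A job is then started with, for example, mpirun -np 4 -machinefile machines.LINUX ./myprog. LAM uses the equivalent concept of a boot schema file, read by lamboot before mpirun is invoked.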

Batch System

Portable Batch System (PBS) is the software we use to handle resource allocation in the cluster. The Maui scheduler is incorporated into PBS to get the most efficient utilization of the machine. Please use the batch queue system for all jobs. There is no need to log in on the nodes of the cluster except to kill processes that have, for some reason, been left behind. In such cases, please notify support so that we can monitor and/or correct the erroneous behaviour.

The front end is only for logins, compilation, debugging, etc. Batch jobs are banned from the front end.
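A minimal PBS job script might look like the following sketch (queue limits, node count, and the program name are illustrative, not site defaults):

```
#!/bin/sh
# Request 4 nodes for at most one hour (illustrative values).
#PBS -l nodes=4,walltime=01:00:00
#PBS -N myjob

# PBS starts the job in the home directory; change to the
# directory the job was submitted from.
cd $PBS_O_WORKDIR

# $PBS_NODEFILE lists the nodes PBS has allocated to this job.
mpirun -np 4 -machinefile $PBS_NODEFILE ./myprog
```

The script is submitted with qsub, and qstat shows the state of the queues.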

Synchronization

To simplify installation of the same image on all machines, VA SystemImager has been used. The same software, together with rsync and a couple of home-brewed scripts, is used to keep the nodes synchronized.

Performance


The following performance figures have been measured on the system:

High Performance Computing Linpack Benchmark (HPL)

A Linpack score of 11.1 Gflops was measured using the HPL implementation. HPL was built on top of a highly optimized, ATLAS-built BLAS library. Top performance was reached using 16 nodes in a 4x4 process grid and a matrix size of 21120x21120 elements.

STREAMS

The stream_d numbers below were obtained using a binary compiled with the system's default Fortran compiler, pgf90.

   Function     Rate (MB/s) 
   Copy:        586.9944    
   Scale:       585.1167    
   Add:         730.7831    
   Triad:       713.9140

STREAM2 results are also available, both as data and as a graph.

NAS 2.3 (rates in Mop/s)

          CLASS W    CLASS A    CLASS B
           1 proc    4 procs   16 procs
                         LAM        LAM
   FT      131.96     223.80     774.89
   MG      155.97     384.76    1207.00
   LU      189.03     627.93    2196.25
   CG       94.49     160.98     308.37
   SP      115.54     341.22     854.59
   BT      150.13     525.00    1660.76

Support


If you have any questions, corrections, additions, or suggestions regarding Grendel or this web page, please contact NSC's helpdesk: support@nsc.liu.se.

News

2001-04-10
Permanent license for PGI compiler suite installed.
2001-03-27
The system is up and running!
Facts

Hardware:

One box (front end) containing:
  • 1 Enlight 7230 case
  • 1 AMD Athlon Thunderbird 1 GHz (133 MHz FSB)
  • 256 Mbyte SDRAM PC133
  • 60 Gbyte ATA disk
  • 2 Fast Ethernet NICs
  • CD-ROM

16 boxes, each containing:
  • 1 Enlight 7230 case
  • 1 AMD Athlon Thunderbird 1 GHz (133 MHz FSB)
  • 256 Mbyte SDRAM PC133
  • 10 Gbyte ATA disk
  • 1 Fast Ethernet NIC

1 HP ProCurve 2424M switch

Software:
  • Linux 2.2.18
  • Red Hat 6.2
  • MPICH
  • LAM
  • PVM
  • VA SystemImager
  • OpenPBS (+ the Maui scheduler)


    Niclas Andersson





    Page last modified: 2007-05-22 13:24
    For more information contact us at info@nsc.liu.se.