Electra

Hardware

Electra was retired in 2004 and is no longer in production.

Electra is a PC cluster, a pile of PCs, a Beowulf. It consists of 16 computer boxes connected by FastEthernet. The head of the cluster, the front-end, has one AMD Athlon Thunderbird running at a clock frequency of 800 MHz, mounted on an ASUS A7V motherboard. The node has 256 Mbyte of primary memory, a 60 Gbyte ATA disk, and an ATI graphics board. For communication, a FastEthernet NIC (Intel Pro/100+) is connected to the cluster switch; a second NIC is installed for communication with the rest of the world.

The other 15 nodes have the same processor and memory configuration. The secondary memory consists of one 10 Gbyte ATA disk, and a FastEthernet NIC provides the network connectivity to the other nodes in the cluster. An ATI Rage graphics card is installed in each node; its main purpose is to provide initial access to the BIOS settings. The nodes as well as the front-end are assembled and delivered by Advanced Computer Technology AB.

Only the front-end has a network connection outside the cluster. The front-end and all other nodes are connected to an HP ProCurve 2424M (24-port) FastEthernet switch, which is used for all internal communication.

Software

Most of the software on Electra is open source.

Operating System

The operating system is Linux. The distribution we use is RedHat, currently version 6.2. The running Linux kernel is 2.2.16 with an additional patch for temperature sensors.

Programming Environment

The following compiler suites are available on Electra:

  • EGCS 1.1.2 (C, C++, and F77)
  • PGCC 2.95-2 (C, C++, and F77)
  • PGI Workstation (Fortran 90, Fortran 77, C, C++, and HPF)
Related debugging and profiling tools are available.
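
As a minimal sketch, a plain C program like the one below should build with any of the C compilers listed above. The compiler invocations in the comment are typical command names, not commands taken from Electra's documentation.

    /* hello.c -- trivial test program.
     * Typical invocations (assumed; check the local installation):
     *   gcc -O2 hello.c -o hello    (EGCS usually installs its driver as gcc)
     *   pgcc -O2 hello.c -o hello   (PGCC or the PGI C compiler)
     */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello from Electra\n");
        return 0;
    }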

Communication APIs

To utilize all boxes in a cluster, an efficient communication API is needed. The most well-known APIs that use the message-passing paradigm in parallel computations are MPI and PVM. Both are available on Electra.

The following MPI implementation is installed and configured:

  • MPICH from Argonne National Laboratory.
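
As a minimal sketch of how the cluster is programmed with MPI, the following C program lets every process report its rank. The mpicc and mpirun commands in the comment are the usual MPICH wrapper scripts and are assumptions, not commands taken from Electra's documentation.

    /* mpi_hello.c -- each MPI process reports its rank.
     * Typical MPICH build and launch (assumed):
     *   mpicc mpi_hello.c -o mpi_hello
     *   mpirun -np 16 ./mpi_hello
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
        printf("Hello from process %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }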

For PVM, the well-known distribution from Oak Ridge National Laboratory is used.
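
For comparison, a corresponding PVM sketch is shown below. The executable name and the number of spawned tasks are illustrative only, and build details (such as linking against libpvm3) are assumptions rather than documented settings.

    /* pvm_hello.c -- the master spawns workers; every task reports its id. */
    #include <stdio.h>
    #include <pvm3.h>

    int main(void)
    {
        int mytid = pvm_mytid();    /* enroll in PVM, get our task id */
        int parent = pvm_parent();  /* PvmNoParent means we are the master */

        if (parent == PvmNoParent) {
            int tids[4];
            /* spawn 4 copies of this executable anywhere in the machine */
            int n = pvm_spawn("pvm_hello", NULL, PvmTaskDefault, "", 4, tids);
            printf("master t%x spawned %d workers\n", mytid, n);
        } else {
            printf("worker t%x, parent t%x\n", mytid, parent);
        }

        pvm_exit();                 /* leave the virtual machine */
        return 0;
    }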

Synchronization

To simplify installation of the same image on all machines, VA SystemImager is used. The same software, together with rsync and a couple of home-brewed scripts, is used to keep the nodes synchronized.

Performance

No information available.

Projects
esavionics

All available time on Electra is dedicated to Saab Avionics.

Support

If you have any questions, corrections, additions, or suggestions regarding Electra or this web page, please contact NSC's helpdesk: support@nsc.liu.se.

Facts
Hardware:
One box (front-end) containing
  • 1 Enlight 7237 case
  • 1 AMD Athlon Thunderbird 800 MHz
  • 256 Mbyte SDRAM PC133
  • 60 Gbyte ATA disk
  • 2 FastEthernet NICs
  • 1 ATI Rage
15 boxes, each containing
  • 1 Enlight 7237 case
  • 1 AMD Athlon Thunderbird 800 MHz
  • 256 Mbyte SDRAM PC133
  • 10 Gbyte ATA disk
  • 1 FastEthernet NIC
  • 1 ATI Rage
1 HP ProCurve 2424M switch
Software:
  • Linux 2.2.16
  • RedHat 6.2
  • MPICH
  • PVM
  • VA SystemImager
Users:
  • esavionics (Saab Avionics)

Links
  • NSC Support
  • Beowulf
  • Beowulf Underground
  • Freshmeat
  • Linux Apps
  • Portable Batch System
  • Maui Scheduler
  • Advanced Computer Technology AB
  • Portland Group Inc.
  • IKEA

Niclas Andersson, National Supercomputer Centre




    Page last modified: 2007-05-22 13:23
    For more information contact us at info@nsc.liu.se.