
The workshop will take place on the Linköping University campus, in Lecture Theatre Planck in Fysikhuset.

Map of the campus

October 25

09:30 Registration
10:00 Introduction
Anders Ynnerman, NSC, Linköping University
10:15 Keynote: The Beowulf Project Evolves
Donald Becker, Scyld Computing Corp.
11:15 Build your own Beowulf Cluster! It's easy, isn't it?
Niclas Andersson, NSC, Linköping University
11:45 Inauguration of NSC's latest Beowulf
12:00 Lunch
13:30 Parallel Simulation of Multi-body Systems Applied to Rolling Bearings
Dag Fritzson and Iakov Nakhimovski, SKF AB
14:00 Parallel particle in cell codes: A comparison of the code performance on a Cray T3E, an Onyx 2 and a Beowulf Cluster
Mark Dieckmann, Linköping University
14:30 Experiences from mpp/LS-DYNA on Linux Beowulf Systems
Larsgunnar Nilsson, Engineering Research AB and Linköping University
15:00 Break
15:30 Project CLUX: A 96 Node Alpha Cluster
Oskari Jääskeläinen, Helsinki University of Technology
16:00 Birds-of-a-Feather Sessions:
  • Collaboration between Beowulf sites, Niclas Andersson, NSC
  • Experiences from using Beowulf, Torgny Faxén, NSC
Take a walk to G-huset and have a look at NSC's PC-Clusters! Guide: Peter Kjellström, NSC
19:00 Dinner

October 26

09:00 High Performance Computing on the HPC2N Linux Cluster - Benchmarks and Applications in Computational Physics
Sven Öberg, HPC2N, Umeå University
09:30 Workstation/PC clusters in a lab setting - 1 million CPU hours in 10 years
Lennart Nilsson, Karolinska Institutet
10:00 Break
10:30 The n-D SCI Torus as a basis for truly Scalable Linux Clusters for Production Environments
Knut Omang, Scali Computer AS
11:00 Preliminary results from running the NSC benchmark suite on the PC-cluster Ingvar
Torgny Faxén, NSC, Linköping University
11:30 BProc - a new framework for Beowulf clusters
Donald Becker, Scyld Computing Corp.
12:00 Lunch
13:15 IBM and cluster computing
Carl G. Tengwall, IBM
13:30 Cluster computing at PDC, past and future
Nils Smeds, PDC, Royal Institute of Technology
14:00 The X86 Architecture from a Beowulf Point of View
Peter Kjellström, NSC, Linköping University
14:30 Summaries of the Birds-of-a-Feather Sessions
14:50 Closing remarks

Abstracts

Keynote Talk: The Beowulf Project Evolves

The Beowulf cluster project has evolved from research and demonstration systems, to prototype development systems, and is now in the early stages of being deployed as a major high performance computing platform in industrial and commercial environments.

This talk will briefly review the history of the Beowulf project, survey the state of the project, describe how Beowulf systems are being used, and discuss the likely direction of Beowulf development in the near-term future.

From the beginning the most important element of Beowulf has been building a community of developers and users. Beowulf is based on Linux, and this includes not only the software itself but also the philosophy of open-source, large-scale distributed development. The project history will be described with reference to how we succeeded and failed in utilizing contributions.

The rapid acceptance of Linux in the past 18 months as the major alternative server operating system has had an analogous effect on the high performance computing market. Today almost every major U.S. computer vendor has a Beowulf cluster product or strategy. These are just the first tentative steps of a major change in direction from proprietary, exotic systems to standards-based commodity systems.

About the author: Donald Becker has contributed to the Linux kernel since 1992, and is largely responsible for Linux's broad support of network adapters. In 1994 he moved to CESDIS at NASA's Goddard Space Flight Center to start the Beowulf Project. He is currently the Chief Technology Officer at Scyld Computing Corporation.

Parallel particle in cell codes: A comparison of the code performance on a Cray T3E, an Onyx 2, and a Beowulf Cluster, Mark Dieckmann and Anders Ynnerman

Particle in cell (PIC) codes are frequently used to simulate plasma turbulence. They represent a plasma by computational particles, whose number is orders of magnitude smaller than that of a real plasma. Increasing the number of computational particles obviously enhances the reliability of the simulation results, but it also increases the simulation time, to the point where (expensive) massively parallel computer resources are required. Finding a cost-efficient parallel computer system is thus of crucial importance. We have compared the performance of our simulation code on a Cray T3E, an Onyx 2 and a Beowulf Cluster. We have found that, for our code, the type of problem we consider determines the most suitable computer system. We discuss the parallelization method of our code and two extreme physical problems that favor either a Beowulf cluster or the Cray T3E.
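
As a rough illustration of the deposit/solve/push cycle the abstract alludes to (this is not the authors' code; the grid size, particle count and the omitted field solve are placeholder assumptions), a minimal one-dimensional electrostatic PIC loop in C might look like this:

    /* Minimal 1-D electrostatic PIC sketch (illustration only):
       deposit charge on a grid, (solve for the field,) and push the
       computational particles.  NG, NP, dt and qm are arbitrary. */
    #include <stdlib.h>

    #define NG 64        /* grid cells */
    #define NP 4096      /* computational particles (<< a real plasma) */

    int main(void)
    {
        double x[NP], v[NP];     /* particle positions and velocities */
        double rho[NG], E[NG];   /* charge density and electric field */
        double dt = 0.1, qm = -1.0;

        for (int g = 0; g < NG; g++) E[g] = 0.0;
        for (int p = 0; p < NP; p++) {               /* crude initial load */
            x[p] = (double)rand() / RAND_MAX * NG;
            v[p] = 0.0;
        }

        for (int step = 0; step < 100; step++) {
            for (int g = 0; g < NG; g++) rho[g] = 0.0;
            for (int p = 0; p < NP; p++)             /* charge deposition */
                rho[(int)x[p] % NG] += 1.0;

            /* field solve omitted: a real code obtains E from rho by
               solving Poisson's equation on the grid */

            for (int p = 0; p < NP; p++) {           /* particle push */
                v[p] += qm * E[(int)x[p] % NG] * dt;
                x[p] += v[p] * dt;
                while (x[p] < 0.0) x[p] += NG;       /* periodic boundary */
                while (x[p] >= NG) x[p] -= NG;
            }
        }
        return 0;
    }

The particle loops dominate the cost, which is why the number of computational particles sets the simulation time and makes the deposition and push steps the natural targets for parallelization.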

Project CLUX: A 96 Node Alpha Cluster, Oskari Jääskeläinen

We discuss the process of acquiring a vendor-supplied Beowulf cluster for scientific research, as opposed to a do-it-yourself approach. The EU negotiated procedure and the main factors influencing the selection of the system are described. Five tenders are compared from a technical and performance point of view, with a look at SMP bottlenecks and system reliability. The installed system, consisting of 96 Alpha processor nodes and a fast Myrinet network, is described, and the status of the project is presented together with experiences thus far.

Workstation/PC clusters in a lab setting - 1 million CPU hours in 10 years, Lennart Nilsson, Karolinska Institutet

Our research is focused on the dynamics and interactions of biological macromolecules, such as protein-nucleic acid complexes. Since 1991 we have used workstations, and more recently personal computers, to run large scale molecular dynamics simulations of these systems. Our main application programs, in terms of CPU use, are CHARMM and Gaussian98. From the humble beginnings with two IBM RS/6000 machines, we have used NQS to handle load balancing. Today we have 51 CPUs - 7 DEC Alphas and 44 Intel PentiumII 450 MHz processors - acting as compute servers under the NQS system. The Intel CPUs, under Linux, run parallel jobs over switched Fast Ethernet. We are currently installing the next generation PC cluster, based on dual 866 MHz Intel PentiumIII CPUs, which will also have a faster interconnect, SCI.

BProc - a new framework for Beowulf clusters, Donald Becker

Scyld has introduced a new framework and set of tools for Beowulf clusters. The Scyld Beowulf system uses 'BProc', Beowulf Process Space, as a focal point of a cluster operating system distribution intended to be widely deployed for production use. This talk will describe how BProc works, how we use it to simplify installation and administration, and the benefits it provides to developers.
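
To give a flavour of the single process space idea, a remote fork across the cluster might be written roughly as below. This is a conceptual sketch only; the header and function names (sys/bproc.h, bproc_numnodes, bproc_rfork) are recalled from the open-source BProc library and should be treated as assumptions to be checked against Scyld's documentation.

    /* Conceptual sketch of BProc's single process space: a program
       started on the front end forks children that continue on the
       slave nodes, yet remain visible in the front end's process
       table.  Header and function names are assumptions, not taken
       from Scyld documentation. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <sys/bproc.h>

    int main(void)
    {
        int nodes = bproc_numnodes();      /* slave nodes in the cluster */

        for (int node = 0; node < nodes; node++) {
            pid_t pid = bproc_rfork(node); /* like fork(), but the child
                                              continues on 'node' */
            if (pid == 0) {
                printf("hello from node %d\n", node);
                return 0;                  /* child exits on its node */
            }
        }
        while (wait(NULL) > 0)             /* parent reaps the remote
                                              children as usual */
            ;
        return 0;
    }

Because the remote children appear in the front end's process table, ordinary tools can be used to monitor and manage them, which is the kind of administrative simplification the abstract refers to.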

Preliminary results from running the NSC benchmark suite on the PC-cluster Ingvar, Torgny Faxén

The talk will present some early experiences from porting portions of the NSC benchmark suite to NSC's new 32-processor Beowulf cluster Ingvar and running them there. Several of NSC's most frequently used applications are part of the suite. Comparisons with the T3E and an Origin 2000 will also be presented and discussed.

The n-D SCI Torus as a basis for truly Scalable Linux Clusters for Production Environments, Knut Omang

Scali's mission is to provide cost-effective, integrated and easy-to-manage cluster solutions. For applications with high communication demands, the interconnect is one crucial factor in the choice of solution. Equally important, however, is the software support for optimal and easy use of the interconnect, which is needed to achieve its theoretical performance. We will take a closer look at how the high-end SCI-based cluster solution from Scali scales with respect to performance as well as cost and complexity. Finally, a brief introduction to the user interface of Scali's clusters will be given.
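
Since the argument is that the software layer must deliver the interconnect's theoretical performance, the usual sanity check is a ping-pong measurement over MPI. A minimal sketch follows (message size and repetition count are arbitrary illustration values; this is not Scali's own benchmark code):

    /* Minimal MPI ping-pong between ranks 0 and 1, the usual way to
       probe an interconnect's latency and bandwidth. */
    #include <stdio.h>
    #include <mpi.h>

    #define NBYTES (1 << 20)   /* 1 MiB message */
    #define REPS   100

    int main(int argc, char **argv)
    {
        static char buf[NBYTES];
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++) {
            if (rank == 0) {
                MPI_Send(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("round trip %.3f ms, bandwidth %.1f MB/s\n",
                   (t1 - t0) / REPS * 1e3,
                   2.0 * NBYTES * REPS / (t1 - t0) / 1e6);

        MPI_Finalize();
        return 0;
    }

Run with one process on each of two nodes, small messages expose the latency and large ones the sustained bandwidth of the interconnect and its MPI implementation.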

High Performance Computing on the HPC2N Linux Cluster - Benchmarks and Applications in Computational Physics, Sven Öberg

High Performance Computing Center North (HPC2N) built a Beowulf cluster last summer to evaluate its potential for high performance computing. The cluster hosts 8 dual Pentium III nodes with Wulfkit high performance communication. Wulfkit combines Dolphin's high-speed, low-latency SCI interconnect technology with Scali's SCI "tuned" MPI. I will present results from benchmarks, both standard and application based. The cluster is compared with some frequently used platforms for HPC.

The X86 Architecture from a Beowulf Point of View, Peter Kjellström

The most common architecture used today for building cheap Beowulf clusters is the x86 architecture, the reason often being its price/performance advantage over other platforms. With its roots in the days of the first PC, the 8086, x86 has evolved (maintaining software compatibility) into today's Intel PentiumIII and AMD Athlon based platforms. This talk will try to give a reasonably detailed description of the platform as it is today, and at the same time look into the differences between x86 and other, more classical high performance platforms.

