
2nd Workshop on Linux Clusters for Super Computing

Cluster for Applications and GRID Solutions

25-26 October, 2001
Hosted by
National Supercomputer Centre (NSC)
Linköping University, SWEDEN




Program & Abstracts





October 24

18:15 - 20:00 Creativity & Coincidence: CERN, the Web and the Internet
Ben Segal, CERN
This is arranged in cooperation with Lysator Upplysningen. The talk takes place in C4 in building C (C-huset) at Campus Valla.
slides (PowerPoint)

October 25

The workshop will take place in Auditorium I on the first floor of Collegium, Mjärdevi Science Park, Linköping.

Lunches and dinner will be served in the restaurant on the ground floor of Collegium. Coffee and tea will be served outside the auditorium during registration and breaks.

09:00 Registration
Coffee and Tea
09:45 Welcome
Anders Ynnerman, NSC
10:00 Keynote: Current and Emerging Cluster Components
Mark Baker, University of Portsmouth
11:00 Linux Cluster implementation for CFD applications in aero engine industry
Christian Lundh, Volvo Aero Corporation
11:30 DSZOOM, a step towards shared memory programming on beowulfs?
Henrik Löf, Uppsala Universitet
12:00 LUNCH
13:30 Grid Computing: The European DataGrid Project
Ben Segal, CERN
abstract slides (PowerPoint)
14:30 Lunarc's experience of Beowulf clusters and thoughts about GRID computing
Göran Sandberg, Lunarc, Lund University
15:00 BREAK
Coffee and Tea
15:30 Monte Carlo Simulation of Radiation Transport
Per Kjäll, Elekta AB
abstract slides (PowerPoint)
16:00 Linux-cluster in an Aerodynamic Design Environment
Mattias Sillén, Saab Aerospace
abstract slides (PDF)
16:30 BREAK
16:45 Designing high performance Beowulfs with today's hardware
Peter Kjellström, NSC
17:15 Clusters at PDC - Towards Large Scale Scientific Computing
Per Öster, Center for Parallel Computers, Royal Institute of Technology
18:00 BOFs*
BOF #1: GRID, Anders Ynnerman
BOF #2: Hardware Details and Issues, Peter Kjellström
BOF #3: User and Production Environment
BOF #4: [TBD*]
19:00 Dinner

October 26

08:30 RunBeast - Managing Remote Simulations
Iakov Nakhimovski, IDA, Linköpings Universitet
09:00 Application of the Finite Difference Time Domain method to the
modelling of indirect lightning effects using a Linux cluster
Stefan Persson, Saab Avionics AB
09:30 An Integrated User Environment for Scientific Cluster Computing
Niclas Andersson, NSC
abstract slides (HTML)
10:00 BREAK
Coffee and Tea
10:30 Jini Meets the Grid
Mark Baker, University of Portsmouth
11:00 Early Experiences with an Itanium Cluster
Steinar Trædal-Henden, Universitetet i Tromsø
abstract slides (PostScript)
11:30 How to Tame the Penguins
Niklas Jakobsson, Center for Parallel Computers, Royal Institute of Technology
12:00 LUNCH
13:30 Some Experiences of Using Linux Clusters for Applications in Nonlinear Solid Mechanics
Larsgunnar Nilsson, Linköpings Universitet & Engineering Research AB
[no abstract] slides (PDF)
14:00 CANCELLED ACCORD: Academic Cluster of Czestochowa for Research and Education
Roman Wyrzykowski, Technical University of Czestochowa
14:30 Computational geophysical fluid dynamics; are LINUX clusters a useful resource?
Göran Broström, Göteborgs Universitet
15:00 BREAK
Coffee and Tea
15:30 CANCELLED Monte Carlo Simulation for Robust Engineering - Changes in Technology and Economy of CAE
Petter Sahlin, TeraPort AB
abstract slides (PDF)
16:00 High Throughput Computing - Linux Clusters and Grids
Carl G. Tengwall, IBM
abstract slides (PDF)
16:30 Closing remarks
Anders Ynnerman
TBD = To Be Determined
BOF = Birds of a Feather


Keynote: Current and Emerging Cluster Components
Mark Baker

Clusters, based on commodity components, are now the dominant platform for scalable computing systems. In the last decade, contributors from the international community have developed a host of methodologies and created an array of hardware and software tools that, through their synergy, have made it possible to advance the price/performance and portability of these new cluster systems. Today, a substantial number of the most powerful computers in the world are commodity clusters.

This talk will discuss and comment on the current and emerging state-of-the-art in the field of cluster computing. In addition, the talk will include observations on the various hardware and software components reported on at the international conference, Cluster 2001, held recently in Newport Beach, US. The talk will conclude by highlighting some of the hurdles that still need to be overcome and also comment on the immediate future of clusters.

Linux Cluster implementation for CFD applications in aero engine industry
Christian Lundh

The evolution of the Linux operating system, in conjunction with new powerful computer platforms, provides a new cost-effective solution for high-performance CFD applications. The first cluster implementation at Volvo Aero was set up 2 years ago. Today, a 128-node Linux cluster serves as the backbone of CFD computing power at Volvo Aero, and both commercial and in-house developed solvers are used successfully. The presentation will describe some of the applications used, together with a description of the cluster setup.

DSZOOM, a step towards shared memory programming on beowulfs?
Henrik Löf

Beowulf clusters have proven to provide cost-effective computational performance, primarily to the HPC community. We will present benchmarking results from a typical low-cost beowulf running HPC-style code.

In an ongoing research project, led by professor Erik Hagersten at Uppsala University, we study software distributed shared memory systems (SDSMs). Given the beowulf presented above, we will talk about the future hardware and software required to efficiently run shared memory code on such a cluster and also present SDSM results from a SUN prototype cluster.

Grid Computing: The European DataGrid Project
Ben Segal

The European DataGrid Project is developing middleware solutions and testbeds to support globally distributed scientific exploration involving many Petabytes of data, many thousands of computer processors, and many hundreds of users. This involves innovative techniques for data replication and the management of widely variable types of distributed information. We will construct this environment by combining and extending newly emerging "Grid" technologies. The Project focuses on scientific applications from the areas of High Energy Physics, Earth Sciences and Bio-Informatics.

Lunarc's experience of Beowulf clusters and thoughts about GRID computing
Göran Sandberg

Lunarc, the Centre for Scientific and Technical Computing, is an organization that is part of Lund University. It aims at facilitating cooperation within the field of scientific and technical computing. It also supports research and education by providing computing facilities, particularly for tasks requiring high-performance computers. The many activities carried on today came about through collaboration between the Divisions of Theoretical Chemistry, Physical Chemistry, and Structural Mechanics, starting in 1986. Some 80 users log onto the Lunarc computers regularly and submit jobs. They come from many different departments of Lund University, although the majority are from the Faculty of Science and the Faculty of Engineering.

LUNARC is in the process of shifting into a new technological phase, one made possible by recent developments in PC technology. This allows PCs to readily compete with the far more expensive workstations that form the basis for LUNARC's present SGI ORIGIN-2000 system. During the past year, LUNARC has tested the performance of clusters of PCs. For typical LUNARC applications, these have been found to be faster than the earlier system, by a factor of as much as 2.0. These are now in operation and have more than doubled LUNARC's computer resources.

The presentation will discuss our thoughts, based on our experience, on the necessity of moving to a new paradigm for the computational sciences, including high-performance computing. Resources, such as hardware, software, and pre- and post-processors, must be distributed and used on demand. The collaboration initiated with NSC in these matters forms the necessary tool for realizing a tangible GRID computing project.

Monte Carlo Simulation of Radiation Transport
Per Kjäll

The raison d'être for simulation in the context of clinical radiation treatment is obviously that the results of the simulation, i.e. the simulated dose distribution, correspond as closely as possible, both in magnitude and geometrical extent, to what is actually obtained during treatment. From the point of view of the simulation, it is then crucial that the radiation transport algorithm, the patient information, and the modelling of the unit delivering the radiation are as accurate as possible. Monte Carlo simulation of radiation transport is at present the most powerful tool we have to investigate the transport of radiation through arbitrary mechanical structures as well as through patients. The problem with this approach has for a long time been the lack of necessary computing power. Clusters of computers are clearly a big step towards offering the required computing power. The presentation will first briefly describe the application and then the parallel design alternatives we are at present investigating.
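What makes Monte Carlo transport such a natural fit for clusters is that every particle history is independent. As a rough illustration only (not the code from the talk; the attenuation coefficient, absorption probability, and slab geometry are invented for the example), a toy photon-transport kernel might look like:

```python
import random

def simulate_photon(rng, mu=0.2, absorb_prob=0.3, slab_depth=30.0):
    """Follow one photon through a homogeneous slab.

    mu is the attenuation coefficient (interactions per cm); returns
    the depth at which the photon is absorbed, or None if it escapes.
    """
    depth = 0.0
    while True:
        # Free path lengths are exponentially distributed.
        depth += rng.expovariate(mu)
        if depth > slab_depth:
            return None                  # photon left the slab
        if rng.random() < absorb_prob:
            return depth                 # energy deposited here

def depth_dose(n_photons=10000, n_bins=30, slab_depth=30.0, seed=1):
    """Histogram of absorption depths -- a crude depth-dose profile.

    Every history is independent, so the photon loop can be split
    across cluster nodes and the partial histograms summed at the end.
    """
    rng = random.Random(seed)
    bins = [0] * n_bins
    for _ in range(n_photons):
        d = simulate_photon(rng)
        if d is not None:
            bins[min(int(d / slab_depth * n_bins), n_bins - 1)] += 1
    return bins
```

Because histories share no state, the parallel design question reduces largely to distributing independent random streams and reducing the per-node histograms, which is why such codes scale well even over modest interconnects.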

Linux-cluster in an aerodynamic design environment
Mattias Sillén

The use of Computational Fluid Dynamics (CFD) simulations in the aerodynamic design process has increased significantly during the last 10 years. Key factors for this are the continuous increase in computational power, advances in the physical modelling of turbulent flow, and the improved efficiency of numerical algorithms. Many flow solvers in the aerospace sector are based on algorithms well suited for parallelization, and the CFD community was an early adopter of parallel computers. The price/performance offered by PC-based Linux clusters is impressive and well suited for departmental use. The talk will focus on aerodynamic flow simulations and shape optimization using parallel flow solvers. The use of PC clusters will be discussed and comparisons made with parallel computers like the Cray T3E and SGI 3000.

Designing high performance Beowulfs with today's hardware
Peter Kjellström

When deciding upon a Beowulf design there are a lot of choices to be made. First there are fundamental ones, such as: should we buy OEM hardware or not, and should we have SMP nodes or single-processor ones? Then there are more hardware-related questions, like which processor to use, the memory architecture, and the cluster interconnect.

This talk will focus on finding answers to these questions for different situations and different computational needs. The discussion will strive to be as up to date as possible, considering the very rapidly evolving hardware market.

Clusters at PDC - Towards Large Scale Scientific Computing
Per Öster

Large-scale scientific computing is not made of commodity hardware alone - the major effort is still in front of us in terms of software development and systems integration. With this in mind, the experience of cluster technologies at PDC is presented and summarized. Among the topics are clusters for bioinformatics, the KTH Linux Laboratory, and a standard application programming interface for performance monitors (PAPI).

RunBeast - Managing Remote Simulations
Alexander Siemers and Iakov Nakhimovski

In many application fields the simulation process is computationally intensive, and fast computers, e.g. parallel computers or workstation clusters, are needed to obtain results in reasonable time. These high-performance resources can only be accessed remotely, via an intranet or the Internet.

RunBeast is a simple but effective client-server application which simplifies the running of remote simulations. It addresses all the major problems related to data transfers over slow networks, unified access to different remote systems, and administration across different organizational domains. The system is actively used at SKF in the context of the BEAST simulation toolbox.

Application of the Finite Difference Time Domain method to the modelling of indirect lightning effects using a Linux cluster
Stefan Persson

Lightning is one of the major threats that must be considered in the design and certification of an aircraft.

Computer simulations can be used to obtain a better understanding of the coupling mechanisms. Solving Maxwell's equations in 3D for large objects is very demanding on CPU power and memory usage, and supercomputers are usually used for these types of large-scale electromagnetic simulations. In the present simulation a Linux cluster has been used. The Finite Difference Time Domain (FDTD) method is used to model the coupling of the external threat to the internal aircraft wiring. An out-of-core method has been implemented to accommodate the large memory requirements. The FDTD method with sub-models for thin wires has been used to calculate induced currents inside the cockpit and fuselage of the SAAB 2000 aircraft. A comparison of the performance of a supercomputer and a Linux cluster has been done.
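The FDTD method updates the electric and magnetic fields in a leapfrog fashion on a staggered grid, which is why it parallelizes well by splitting the grid among nodes. The following one-dimensional sketch is illustrative only (normalized units, a made-up Gaussian source, and grid sizes chosen arbitrarily; a real 3-D lightning simulation is far larger and adds wire sub-models and domain decomposition):

```python
import math

def fdtd_1d(n_cells=200, n_steps=300, src=100):
    """Minimal 1-D FDTD (Yee) update in free space, normalized units.

    ez and hy live on staggered half-grids and are advanced in a
    leapfrog fashion; in a 3-D solver this core loop is what gets
    distributed across cluster nodes.
    """
    ez = [0.0] * n_cells        # electric field at integer grid points
    hy = [0.0] * (n_cells - 1)  # magnetic field at half grid points
    c = 0.5                     # Courant factor (stable for c <= 1 in 1-D)
    for t in range(n_steps):
        # Update H from the curl of E (half time step later).
        for i in range(n_cells - 1):
            hy[i] += c * (ez[i + 1] - ez[i])
        # Update E from the curl of H; boundary cells act as PEC walls.
        for i in range(1, n_cells - 1):
            ez[i] += c * (hy[i] - hy[i - 1])
        # Soft Gaussian pulse source injected at one cell.
        ez[src] += math.exp(-((t - 30) / 10.0) ** 2)
    return ez
```

Since each cell update only touches nearest neighbours, a parallel run needs just a one-cell halo exchange per step, and an out-of-core variant can stream grid slabs from disk in the same sweep order.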

An Integrated User Environment for Scientific Cluster Computing
Niclas Andersson

In Beowulf systems running Linux, the system software comprises many disparate pieces of software not very well attuned to each other. There are various compiler suites, communication packages, batch queueing systems, job schedulers, and accounting systems which have little or no knowledge of each other.

The NSC Cluster Environment is an initial attempt to bring these pieces of software together and present a more integrated environment to the user, without introducing limiting constraints on the software components.

Jini Meets the Grid
Mark Baker

There is an increasing interest in integrating Java-based, and in particular Jini, systems with the emerging Grid infrastructures. In this talk we explore various ways of integrating the key components of each architecture, particularly their directory and information management services. In the first part of the talk we sketch out the Jini and Grid architectures and their services. We then review the components and services that Jini provides and compare these with those of the Grid. In the second part of the talk we critically explore four ways that Jini and the Grid could interact; in particular, we look at possible scenarios that can provide seamless interaction between a Jini and a Grid environment. In the final part of the talk we summarise our findings and report on future work being undertaken to integrate Jini and the Grid.

Early experiences with an Itanium cluster
Steinar Trædal-Henden

Itanium is the name of the new processor from Intel. With 64 bits, it makes the traditional PC an even stronger competitor to traditional UNIX computers, as applications that address large amounts of data and memory can now be run on a PC. This may give cluster computing an extra boost in the competition with the large shared-memory machines.

Today it is possible to buy computers containing up to 16 Itanium CPUs, thus covering the spectrum up to the mid-range. In the future the processor will also be available in high-end servers.

Among other things, these aspects make the processor interesting, and the computer center at the University of Tromsø has gathered early experiences with an Itanium cluster.

How to Tame the Penguins
Niklas Jakobsson

Linux clusters are becoming more and more popular in both commercial and scientific environments. Today there are many ways to install and manage clusters; one thing you can be sure of is that they will never solve all your installation and management problems. This talk outlines some of the problems you can encounter when installing a medium-size Linux cluster and also gives some suggestions on how to solve them.

Some Experiences of Using Linux Clusters for Applications in Nonlinear Solid Mechanics
Larsgunnar Nilsson


ACCORD: Academic Cluster of Czestochowa for Research and Education
Roman Wyrzykowski

The ACCORD cluster was built this year at the Technical University of Czestochowa (Poland). At present, ACCORD contains 18 Pentium III 750 MHz processors, i.e. 9 Intel ISP2150 server platforms as SMP nodes, connected both by the fast Myrinet network and by standard Fast Ethernet. It is operating under the control of Linux (Debian distribution). By the end of this year, ACCORD will be upgraded to 34 processors with AMD Athlon MP 1.2 GHz processors (Tyan motherboards). In the paper, we discuss performance results of numerical experiments with ParallelNuscaS, an object-oriented package for parallel finite element modeling developed at our university.

Computational geophysical fluid dynamics; are LINUX clusters a useful resource?
Göran Broström

Geophysical fluid dynamics (GFD) is presently one of the largest users of computational power in the world. In this presentation I will describe some of the differences between small-scale fluid dynamics and the large-scale GFD computations (i.e., climate, atmospheric, and oceanographic applications), and present some state-of-the-art GFD computations. Most GFD codes have been developed at large institutes, and there has been a strong focus on writing codes that are efficient on parallel machines. However, some parts of the GFD code require a substantial amount of data to be shared/transferred between processors. Thus, it is relevant to investigate how well GFD codes will scale on typical Linux clusters. Some preliminary results show that the size of the problem (i.e., the number of grid points) is important for how the GFD code scales on various machines. However, generally speaking, it is shown that fast communication cards (e.g., Wulfkit/SCALI or Myrinet networks) are needed to obtain good scaling for the GFD code on Linux clusters with <10 processors.
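Why problem size and interconnect matter so much can be seen with a back-of-the-envelope performance model. The sketch below is not from the talk; all constants (per-point compute time, message latency, bandwidth) are invented placeholders standing in for measured values, and the halo-exchange cost is a simplification of real GFD communication patterns:

```python
def parallel_time(n, p, t_point=1e-9, latency=1e-4, bandwidth=1e8):
    """Toy time model for one step of an n-by-n grid code on p processors.

    Compute scales with the local subdomain (n*n/p points); communication
    is modeled as a 4-neighbour halo exchange whose message size scales
    with the subdomain edge length. All constants are illustrative.
    """
    compute = n * n / p * t_point
    # 4 messages of (edge length) doubles, 8 bytes each, plus latency.
    halo = 4 * (latency + (n / p ** 0.5) * 8 / bandwidth)
    return compute + halo

def speedup(n, p):
    """Speedup relative to the single-processor time in the same model."""
    return parallel_time(n, 1) / parallel_time(n, p)
```

The model reproduces the qualitative observation in the abstract: for small grids the fixed latency term quickly dominates, so speedup saturates at few processors, while larger grids (more points per processor) and lower-latency interconnects push the knee of the scaling curve outward.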

Monte Carlo Simulation for Robust Engineering - Changes in Technology and Economy of CAE
Petter Sahlin

Monte Carlo Simulation (MCS) is radically improving the quality and reliability of Finite Element Analysis at leading research and development organisations. The main deliverables of MCS are robust engineering, outlier identification, reliability assessment, correlation with tests, and model validation. Successful implementations often lead to dramatic changes in the engineering workflow and strengthen the relationship between the organisations which develop and verify the models used in CAE. This presentation will describe the commercial and technological requirements which shape the use of MCS, and show how and why cluster technology in general, and Linux clusters in particular, has been used and will be used for successful implementations.
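The core pattern behind MCS for robust engineering is to scatter the model inputs, run many independent analyses, and study the response distribution. The toy below is illustrative only (a closed-form cantilever deflection stands in for a full FE analysis, and the nominal values and scatter are invented), but the structure is the same as in a cluster deployment, where each sample becomes an independent job:

```python
import random
import statistics

def beam_deflection(load, e_mod, inertia, length=2.0):
    """Tip deflection of a cantilever beam: F*L^3 / (3*E*I)."""
    return load * length ** 3 / (3 * e_mod * inertia)

def monte_carlo_study(n=5000, seed=7):
    """Sample scattered inputs and collect the response distribution.

    Each sample is an independent analysis, so the loop farms out
    trivially to cluster nodes; only the result list is gathered.
    Returns (mean, std deviation, list of 3-sigma outliers).
    """
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        load = rng.gauss(1000.0, 50.0)     # applied force [N], 5 % scatter
        e_mod = rng.gauss(210e9, 10e9)     # Young's modulus [Pa] (steel)
        inertia = rng.gauss(8e-6, 4e-7)    # second moment of area [m^4]
        results.append(beam_deflection(load, e_mod, inertia))
    mean = statistics.mean(results)
    sd = statistics.pstdev(results)
    outliers = [r for r in results if abs(r - mean) > 3 * sd]
    return mean, sd, outliers
```

The deliverables named in the abstract map directly onto this output: the spread quantifies robustness, the tail samples are outlier candidates, and comparing the response distribution with test scatter supports model validation.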

High Throughput Computing - Linux Clusters and Grids
Carl G. Tengwall

Linux is important to IBM. It is an integral part of the Internet, is rapidly becoming the application development platform of choice, and is increasingly being used in High Performance Computing. IBM is investing considerable resources into making Linux a success across all our server platforms. In order to advance the functionality of Linux to meet important requirements, IBM has set up a Linux Technology Center that is working with the worldwide Linux community. To date IBM has installed many Linux clusters, including some of the very biggest, such as the 1024-node system at Royal Dutch Shell. The focus going forward is on providing prepackaged, prevalidated clusters that are built on state-of-the-art reliable hardware. IBM is also investing in advancing the state of the art of Linux clusters by porting key technologies that have proven successful on the more than 10,000 UNIX clusters, called IBM RS/6000 SP, that we have delivered to all types of organizations. The talk will also briefly review the IBM Grid Initiative.
