
Parallel computing

Parallel computing is a large topic, so it can be hard to know where to start. Broadly, there are two approaches to parallel programming:

  • Message passing using MPI
  • Shared memory using OpenMP
You can also combine the two. Other methods exist, such as HPF and PVM, but they are less used at NSC.

Four excellent books, covering MPI, OpenMP, HPF, and PVM respectively, are:

  • "Using MPI" by William Gropp, Ewing Lusk and Anthony Skjellum. The MIT Press.
  • "Parallel programming in OpenMP" by Rohit Chandra, Leonardo Dagum, Dave Kohr, Dror Maydan, Jeff McDonald and Ramesh Menon. Morgan Kaufmann Publishers.
  • "The High Performance Fortran Handbook" by Charles Koelbel, David Loveman, Robert Schreiber, Guy Steele, and Mary Zosel. The MIT Press.
  • "PVM Parallel Virtual Machine" by Al Geist, Adam Beguelin, Jack Dongarra, et al. The MIT Press.
There are also several books on parallel programming in general; a few recent ones are described below.

A new book is "Parallel Programming in C with MPI and OpenMP" by Michael J. Quinn, Oregon State University, McGraw-Hill 2003, 543 pp.

Two further new books:

Introduction to Parallel Computing, A practical guide with examples in C.

Wesley Petersen, Seminar for Applied Mathematics, Department of Mathematics, ETHZ, Switzerland, and Peter Arbenz, Institute for Scientific Computing, Department Informatik, ETHZ, Switzerland

  • A practical student guide to scientific computing on parallel computers
  • Based on teaching notes from ETH Zurich
  • Explains concepts through clear, easy-to-follow examples in C and Fortran
  • Includes theoretical background to examples
  • Unique coverage of parallelism on microprocessors
  • Appendix includes glossary of terms, and notations and symbols
Contents: Basic issues; Applications; SIMD, Single Instruction Multiple Data; Shared Memory Parallelism; MIMD, Multiple Instruction Multiple Data; SSE Intrinsics for Floating Point; AltiVec Intrinsics for Floating Point; OpenMP commands; Summary of MPI commands; Fortran and C communication; Glossary of terms; Notation and symbols.

Oxford Texts in Applied and Engineering Mathematics, 278 pages, January 2004. ISBN 0-19-851577-4 (paperback), 0-19-851576-6 (hardback).

Parallel Scientific Computation: A Structured Approach using BSP and MPI

Rob Bisseling, Associate Professor, Mathematics Department, Utrecht University, The Netherlands.

This is the first text explaining how to use the bulk synchronous parallel (BSP) model and the freely available BSPlib communication library in parallel algorithm design and parallel programming. It is aimed at upper-level undergraduates, graduate students, and researchers in mathematics, physics, and computer science. The main topics treated in the book are core topics in scientific computation, and many additional topics are covered in numerous exercises.

The book contains five small but complete example programs written in BSPlib which illustrate the methods taught. An appendix on the message-passing interface (MPI) discusses how to program in a structured, bulk synchronous parallel style using the MPI communication library. It presents MPI equivalents of all the programs in the book. The complete programs of the book and their driver programs are freely available online in the packages BSPedupack and MPIedupack.

Contents: Introduction; LU decomposition; The fast Fourier transform; Sparse matrix-vector multiplication; Auxiliary BSPedupack functions; A quick reference guide to BSPlib; Programming in BSP style using MPI; References; Index.

Oxford University Press, February 2004, 305 pages, ISBN 0-19-852939-2, Hardback, GBP 45.00.


Page last modified: 2004-02-06 15:48