Further information is available from the conference chairman, Morven Gentleman, Morven.Gentleman@iit.nrc.ca.
At one time, scientific computing applications were simple enough that it was feasible to start from scratch for each new computational problem and construct a monolithic program specific to it. As applications became more complex, it became necessary to amortize the development effort over many similar computational problems, and to take advantage of highly specialized skills by employing off-the-shelf software components implemented by third parties. Thus for many years the typical software architecture of a scientific computing application has been not a single program but a suite of related programs operating on a common database. The individual programs within such a suite are built from subprograms: some obtained from libraries provided by various suppliers, some from public-domain sources, and others unique to the suite, representing the particular science being modeled and the desired sequences of operations. In some cases the user provides the main program that makes the appropriate calls; in others the main program is a framework that can realize different computations by calling the available subroutines, including those provided by the user, with the sequence of calls and their arguments controlled by a user-provided script or other problem-oriented language.
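The framework pattern just described can be sketched in miniature: a driver dispatches to registered subroutines, some standing in for third-party library routines and some for user-supplied code, under control of a "script" that names the sequence of calls. All names here are hypothetical illustrations, not any particular system's API.

```python
def library_scale(state, factor):
    """Stand-in for an off-the-shelf library routine from a supplier."""
    return [x * factor for x in state]

def user_model(state):
    """Stand-in for a user-supplied routine encoding the modeled science."""
    return [x + 1.0 for x in state]

class Framework:
    """Minimal driver: realizes different computations by calling
    registered subroutines in the order a user script dictates."""

    def __init__(self):
        self.registry = {}

    def register(self, name, subroutine):
        self.registry[name] = subroutine

    def run(self, script, state):
        # Each script step names a registered subroutine and its arguments.
        for name, args in script:
            state = self.registry[name](state, *args)
        return state

fw = Framework()
fw.register("model", user_model)
fw.register("scale", library_scale)

# The "script" plays the role of the problem-oriented language:
# apply the user's model, then scale the result by 2.
script = [("model", ()), ("scale", (2.0,))]
result = fw.run(script, [1.0, 2.0])  # -> [4.0, 6.0]
```

The essential point is that the framework itself is generic; the particular computation lives entirely in the registered subroutines and the script.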
Today, new options for architectures for scientific computation are becoming available, and some of the older paradigms may need to be rethought. What are the implications of widespread network connectivity for the construction of future scientific computing applications? Do communication protocols such as OLE2 form an appropriate basis on which to build higher-level protocols (e.g., for mathematical and geometric objects) in pursuit of the goal of "plug and play" application interoperability? Do we need to extend the notion of a common database to embrace federated databases that may be geographically or organizationally dispersed? How can we exploit concurrency and parallel execution not only for raw performance but also, for instance, for monitoring, visualization and steering? If, as some people argue, object-oriented computing provides a more appropriate programming basis than procedural languages, how can the properties (such as accuracy, reliability, portability, and efficiency) of "traditional" subroutine libraries, so painstakingly pursued by their developers, be preserved?
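The last question above admits at least one familiar answer: a thin object-oriented facade that delegates to the traditional routine, so the library's hard-won numerical properties carry over unchanged. The following is a hedged sketch with hypothetical names, not a claim about how any particular library does it.

```python
import math

def traditional_norm(vector):
    """Stand-in for a painstakingly tuned traditional library subroutine
    (in practice, a Fortran or C routine with known accuracy properties)."""
    return math.sqrt(sum(x * x for x in vector))

class Vector:
    """Object-oriented facade over the traditional routine.

    The class adds the object interface (state plus methods) but performs
    no arithmetic of its own, so the accuracy, reliability and efficiency
    of the underlying subroutine are preserved."""

    def __init__(self, data):
        self.data = list(data)

    def norm(self):
        # Delegate rather than re-implement.
        return traditional_norm(self.data)

v = Vector([3.0, 4.0])
length = v.norm()  # -> 5.0
```

Whether such wrapping scales to whole libraries, without performance loss at the boundary, is precisely the kind of experience this meeting seeks to gather.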
The purpose of this meeting is to address questions of this nature by bringing together practitioners of scientific computation familiar with innovations in software architecture, those with experience in trying the new paradigms, and the component vendors who must support them. We need to share experience of what is, and what is likely to remain, effective, and of how it needs to be expressed.