Diagnostics

Rounding error and its effects can be studied by the techniques of J. H. Wilkinson [1963], W. Kahan [1972], and Françoise Chaitin-Chatelin and Valérie Frayssé [1996]. Rounding control (see Kulisch [2001]) may be used as a diagnostic tool. See also the book by Jaulin et al. [2001].
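
As a minimal illustration of rounding control used as a diagnostic (a Python sketch using the standard decimal module, not any of the packages cited here), the same sum can be recomputed with every operation rounded downward and then upward; the spread between the two results indicates how strongly rounding affects the computation. The series, precision, and helper name below are illustrative assumptions.

```python
from decimal import Decimal, getcontext, ROUND_FLOOR, ROUND_CEILING

def alternating_sum(rounding, prec=8, terms=2000):
    """Sum (-1)^k / k for k = 1..terms under a chosen rounding direction."""
    ctx = getcontext()
    ctx.prec = prec          # deliberately short precision so the effect is visible
    ctx.rounding = rounding
    total = Decimal(0)
    for k in range(1, terms + 1):
        total += Decimal((-1) ** k) / Decimal(k)
    return total

low  = alternating_sum(ROUND_FLOOR)    # every operation rounded towards -infinity
high = alternating_sum(ROUND_CEILING)  # every operation rounded towards +infinity
print(low, high, high - low)           # the spread diagnoses the rounding sensitivity
```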

The technique of Chaitin-Chatelin is described in Chapter 6 and in "PRecision Estimation and Control In Scientific Engineering computing", or PRECISE, from CERFACS. PRECISE is a set of tools that helps the user set up computer experiments to explore the impact of finite precision on the quality of convergence of numerical methods. Because stability, mathematical as well as numerical, is at the heart of the phenomenon under study, PRECISE allows users to investigate it by a straightforward randomization of selected data; the computer then produces a sample of perturbed solutions and associated residuals, or a sample of perturbed spectra.
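
The following is not PRECISE itself, only a minimal Python/NumPy sketch of the kind of experiment it automates: selected data are randomized, and a sample of perturbed solutions and residuals is collected. The matrix, perturbation size, and helper name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_solution_sample(A, b, n_samples=200, t=1e-10):
    """Solve (A + dA) x = (b + db) for random componentwise perturbations of relative size t."""
    solutions, residuals = [], []
    for _ in range(n_samples):
        dA = t * rng.standard_normal(A.shape) * np.abs(A)
        db = t * rng.standard_normal(b.shape) * np.abs(b)
        x = np.linalg.solve(A + dA, b + db)
        solutions.append(x)
        residuals.append(np.linalg.norm(b - A @ x))
    return np.array(solutions), np.array(residuals)

# A nearly singular 2x2 system: tiny data perturbations produce wildly different solutions.
A = np.array([[1.0, 2.0], [2.0, 4.0 + 1e-8]])
b = np.array([3.0, 6.0])
xs, res = perturbed_solution_sample(A, b)
print("spread of the perturbed solutions:", xs.std(axis=0))
print("median residual:", np.median(res))
```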

It allows users to perform a complete statistical backward error analysis on a numerical method or an algorithm for solving a general nonlinear problem of the form F(x) = y (for example a matrix or polynomial equation), at regular points and in the neighborhood of algebraic singularities. It provides an estimate of the distance to the nearest singularity as seen by the computer, as well as of the order of that singularity. In the case of matrix computations, it can also help to investigate robustness to spectral instability by means of a graphical display of perturbed spectra. See also the report by McCoy and Toumazou [1997].
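
Again as a sketch rather than the PRECISE interface, the fragment below computes the normwise backward error of a computed solution via the standard Rigal-Gaches formula and collects a sample of perturbed spectra for a non-normal matrix; the data, perturbation size, and helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def normwise_backward_error(A, x, b):
    """Smallest normwise relative perturbation of (A, b) for which x is an exact
    solution (Rigal-Gaches formula)."""
    r = b - A @ x
    return np.linalg.norm(r) / (np.linalg.norm(A) * np.linalg.norm(x) + np.linalg.norm(b))

def perturbed_spectra(A, n_samples=300, t=1e-6):
    """Eigenvalues of A + dA for random perturbations of norm roughly t*||A||."""
    clouds = []
    for _ in range(n_samples):
        dA = t * np.linalg.norm(A) * rng.standard_normal(A.shape)
        clouds.append(np.linalg.eigvals(A + dA))
    return np.concatenate(clouds)

A = np.array([[2.0, 1.0e3], [0.0, 2.0]])   # non-normal: its spectrum is very sensitive
b = np.array([1.0, 1.0])
x = np.linalg.solve(A, b)
print("backward error of the computed solution:", normwise_backward_error(A, x, b))

eigs = perturbed_spectra(A)
print("exact eigenvalues are both 2; perturbed ones range over",
      eigs.real.min(), "to", eigs.real.max())
```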

Iterative or recursive processes can be stable, conditionally stable, or unstable. In simple problems it is possible to perform a rigorous mathematical analysis. If instability occurs, its effect may be negligible at first, but after a certain (often rather small) number of iterations the error growth becomes considerable. Switching to higher precision will only delay the error growth, not prevent it.
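
A standard textbook illustration (not taken from this report): the forward recurrence below for the integrals y_n of x^n/(x+5) over [0, 1] multiplies the inherited error by -5 at every step, so the computed values break down after a modest number of iterations; higher precision only shrinks the initial error and thus merely postpones the breakdown.

```python
import math

# y_n = integral over [0,1] of x^n/(x+5) dx satisfies the exact recurrence
# y_n = 1/n - 5*y_{n-1}, with y_0 = ln(6/5).  Each step multiplies the inherited
# error by -5, so the tiny rounding error in y_0 grows like 5^n and eventually
# dominates the small, positive true values.
y = math.log(6 / 5)
for n in range(1, 31):
    y = 1 / n - 5 * y
    if n % 5 == 0:
        print(n, y)    # the true values satisfy 0 < y_n < 1/(5n); the computed ones explode
```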

Wayne Enright of the University of Toronto has written "Tools for the Verification of Approximate Solutions"; see Chapter 7. He has developed a suite of tools to verify the accuracy and reliability of numerical solutions: a collection of generic tools, suitable for use with any method, that range in cost and complexity. They can be used, for example, to check the consistency of solutions generated with different accuracy requests, and to check the consistency of an associated piecewise polynomial interpolant of the numerical solution with the differential equation, which may be a system of ODEs or PDEs. The resulting suite of tools can be used to improve the practitioner's confidence in the results.
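
The sketch below is not Enright's toolset; it only illustrates the first kind of check, using SciPy's solve_ivp on an illustrative ODE: the same problem is solved under two accuracy requests and the solutions are compared at common output points.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    return [y[1], -y[0]]             # harmonic oscillator written as a first-order system

t_eval = np.linspace(0.0, 20.0, 201)
coarse = solve_ivp(f, (0.0, 20.0), [1.0, 0.0], rtol=1e-5, atol=1e-8, t_eval=t_eval)
fine   = solve_ivp(f, (0.0, 20.0), [1.0, 0.0], rtol=1e-9, atol=1e-12, t_eval=t_eval)

# If the coarse run really delivers the accuracy it was asked for, the two runs
# should agree at the shared output points to roughly the coarser tolerance.
discrepancy = np.max(np.abs(coarse.y - fine.y))
print("maximum discrepancy between the two accuracy requests:", discrepancy)
```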

Sensitivity analysis is an important tool for studying these concepts and their consequences. Accuracy and stability can be studied with analytic mathematical methods; see Higham [1996].
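
A small illustration of the link between sensitivity and accuracy, assuming the usual first-order rule of thumb that the forward error is bounded by the condition number times the backward error; the ill-conditioned Vandermonde matrix below is only an example.

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.vander(np.linspace(0.9, 1.1, 8), increasing=True)   # ill-conditioned example matrix
x_true = rng.standard_normal(8)
b = A @ x_true

x_hat = np.linalg.solve(A, b)
residual = b - A @ x_hat
backward = np.linalg.norm(residual) / (np.linalg.norm(A) * np.linalg.norm(x_hat)
                                       + np.linalg.norm(b))
forward = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
kappa = np.linalg.cond(A)

# Rule of thumb: forward error <~ condition number * backward error.
print(f"cond(A)  = {kappa:.2e}")
print(f"backward = {backward:.2e},  cond*backward = {kappa * backward:.2e},  forward = {forward:.2e}")
```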

Kenneth Runesson of Chalmers University of Technology is investigating "A Paradigm for Error Estimation and Adaptability in Computational Mechanics". It is based on a posteriori error computation in finite element analysis. An important feature is the possibility of selecting "goal-oriented" error measures that are of interest to the engineer. The chosen error measure drives the associated adaptive refinement in space or space-time, so as to meet a predefined stopping criterion (tolerance). A key feature of the error computation is the identification and solution of an auxiliary problem that is the dual of the actual problem whose solution is sought; in mechanics the dual solution is known as the "influence function". In fact, the dual problem is linear(ized) even when the primal problem is nonlinear, which seems to be an attractive feature for large-scale problems in computational mechanics, where material and geometric nonlinearities are commonplace. Nevertheless, solving the dual problem as efficiently as possible is a challenging and crucial task. The method can be used for error control in general classes of problems in coupled space and time.
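
The following is not Runesson's finite element formulation, only an algebraic miniature of the dual-problem idea: for a linear primal problem A u = f and a goal functional J(u) = g^T u, the solution z of the dual problem A^T z = g plays the role of the influence function, and the dual-weighted residual z^T (f - A u_h) reproduces the error in the goal quantity exactly. The matrix, load, and goal vector are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stand-in for an assembled system matrix
f = rng.standard_normal(n)                          # load vector
g = np.zeros(n); g[n // 2] = 1.0                    # goal functional J(u) = u at one "point"

u = np.linalg.solve(A, f)                           # reference discrete solution
u_h = u + 1e-3 * rng.standard_normal(n)             # some approximate solution in error

z = np.linalg.solve(A.T, g)                         # dual problem A^T z = g: the influence function
estimate = z @ (f - A @ u_h)                        # dual-weighted residual
true_goal_error = g @ (u - u_h)                     # J(u) - J(u_h)
print(estimate, true_goal_error)                    # equal up to rounding for a linear problem
```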

Siegfried Rump of the Technical University of Hamburg-Harburg works on Self-Validating Methods; see Chapter 10. One aspect of software reliability, of course, is the correctness of a (numerical) result. Over the years there has been increasing interest in solving mathematical problems with the aid of digital computers, for example the celebrated proof of the Kepler conjecture. This requires the solution of numerical problems with rigorous error bounds, and that is the goal of self-validating methods: to deliver correct results on digital computers, correct in a mathematical sense, covering all sources of error such as representation, discretization, and rounding errors. In turn, such methods have inherent self-correcting properties, because an incorrect implementation is likely to produce error bounds that may be narrow but are incorrect. A Matlab toolbox has been developed.

Self-validating methods should be sharply distinguished from any particular kind of arithmetic, e.g., interval arithmetic. SV-methods are in fact a collection of mathematical theorems that provide the basis for implementing algorithms with result verification. Those algorithms, in turn, verify the assumptions of the mentioned theorems. For this purpose they may (conveniently) use interval arithmetic, but they may use traditional floating-point arithmetic as well.
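
As a small sketch of this theorem-checking structure (using the interval context of the mpmath library, not the Matlab toolbox mentioned above), the fragment below verifies the hypothesis of the interval Newton theorem: if the Newton operator maps an interval X into itself and f' has no zero on X, then f has exactly one zero in X. The function, starting interval, and precision are illustrative assumptions.

```python
from mpmath import iv

iv.dps = 30                      # working precision of the interval arithmetic

def f(x):
    return x**2 - 2              # we want a verified enclosure of the zero sqrt(2)

def fprime(x):
    return 2 * x                 # f'(X) = 2*X contains no zero for X = [1, 2]

def newton_operator(X):
    """Interval Newton operator N(X) = m - f(m)/f'(X), with m the midpoint of X."""
    m = iv.mpf(X.mid)            # midpoint as a (thin) interval
    return m - f(m) / fprime(X)

X = iv.mpf([1, 2])               # candidate enclosure
N = newton_operator(X)
# Theorem being verified: if N(X) lies inside X (and f' has no zero on X),
# then f has exactly one zero in X, and it lies in N(X).
verified = (X.a < N.a) and (N.b < X.b)
print("N(X) =", N)
print("unique zero of f verified inside X:", verified)
```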

Examples of self-validating methods can be found in the special issue of Linear Algebra and its Applications (LAA) on this subject, Volume 324, February 2001.

The area of testing was discussed at some of the working conferences arranged by IFIP WG 2.5; see especially the proceedings edited by Lloyd D. Fosdick [1979] and Ronald F. Boisvert [1997].

Bibliography

1. J. H. Wilkinson, Rounding Errors in Algebraic Processes, Her Majesty's Stationery Office, London, 1963. Also published by Prentice-Hall, Englewood Cliffs, NJ, USA, and reprinted by Dover, New York, 1994, ISBN 0-486-67999-3. A German translation was published by Springer-Verlag, Berlin, 1969.

2. W. Kahan, A survey of error analysis, in "Proc. IFIP Congress, Ljubljana", Information Processing 71, North-Holland, Amsterdam, The Netherlands, 1972, pp. 1214-1239. ISBN 0-7204-2063-6.

3. Françoise Chaitin-Chatelin and Valérie Frayssé, Lectures on Finite Precision Computations, SIAM, Philadelphia, 1996. ISBN 0-89871-358-7.
http://www.ec-securehost.com/SIAM/SE01.html#SE01

4. Ulrich Kulisch, Advanced Arithmetic for the Digital Computer -- Interval Arithmetic Revisited, (61 pages), 2000.
Published in "Perspectives on Enclosure Methods", edited by U. Kulisch, R. Lohner, and A. Facius. Springer 2001, 345 pp, ISBN 3-211-83590-3.

5. Luc Jaulin, Michel Kieffer, Olivier Didrit, and Eric Walter, Applied Interval Analysis, Springer-Verlag, 2001, ISBN 1-85233-219-0.
http://www.springer.de/engineering/book/978-1-85233-219-8

6. R. A. McCoy and V. Toumazou, PRECISE User's Guide, CERFACS Technical Report TR/PA/97/38.
http://www.cerfacs.fr/algor/reports/1997/TR_PA_97_38.ps.gz

7. Nicholas J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, 1996, ISBN 0-89871-355-2. Second edition, 2002, ISBN 0-89871-521-0.
http://www.ec-securehost.com/SIAM/ot80.html#ot80

8. Lloyd D. Fosdick, editor, "Performance Evaluation of Numerical Software", Proceedings of the IFIP TC 2.5 Working Conference in Baden, Austria, 11 - 15 December 1978. North-Holland Publishing Company, Amsterdam - New York - Oxford, 1979, ISBN 0-444-85330-8.
Contents available in http://wg25.taa.univie.ac.at/ifip/contents1.txt

9. R. F. Boisvert, editor, "Quality of Numerical Software: Assessment and Enhancement", Proceedings of the IFIP TC 2/WG 2.5 Working Conference in Oxford, England, 8 - 12 July 1996. Chapman & Hall, London and New York 1997, ISBN 0-412-80530-8.
Contents available in http://wg25.taa.univie.ac.at/ifip/contents7.html


IFIP WG 2.5 Project 68 on "Accuracy and Reliability in Scientific Computing".
Last modified: 5 December 2018
