Floating-point arithmetic on computers is designed to approximate the corresponding operations over the real numbers as closely as possible. In this paper it is shown by means of counterexamples that this need not be true for existing machines. For achieving good numerical results, a floating-point arithmetic approximating the real operations as closely as possible is probably best. For achieving verifications on a computer, at least a precisely defined computer arithmetic is indispensable. In this paper we first introduce the Kulisch/Miranker theory, which represents a sound basis for computer arithmetic. Each operation is precisely defined and, moreover, of maximum accuracy; that is, the computed result is the floating-point number of the working precision closest to the infinitely precise result. The theory also covers the directed roundings that allow computations with intervals. These properties hold for the floating-point numbers of single and double precision as well as for the vectors, matrices and complex extensions over them. In the second part of the paper we present the theoretical basis for what we call 'Higher Order Computer Arithmetic'. This is an inclusion theory allowing the development of algorithms that compute bounds for the solution of various problems in numerical analysis. These bounds are automatically verified to be correct, and they are of high accuracy. Very often they are of maximum accuracy; that is, the left and right bounds of all components of the solution are adjacent in the floating-point screen. Moreover, existence and uniqueness of a solution within the computed bounds are automatically verified by the algorithm. If this verification is not possible, a corresponding message is given. We develop the theory and give algorithms for the solution of systems of linear and nonlinear equations.
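On modern machines, IEEE 754 arithmetic does provide the maximum accuracy demanded above: each basic operation returns the floating-point number closest to the infinitely precise result. A minimal sketch of how this property can be checked in Python, comparing one binary64 addition against exact rational arithmetic (the values 0.1 and 0.2 are an arbitrary illustration, not from the paper):

```python
import math
from fractions import Fraction

a, b = 0.1, 0.2
s = a + b                          # one floating-point addition
exact = Fraction(a) + Fraction(b)  # the infinitely precise sum of the stored operands

# maximum accuracy: s is the binary64 number closest to the exact sum,
# i.e. neither floating-point neighbour of s lies closer to it
below = math.nextafter(s, -math.inf)
above = math.nextafter(s, math.inf)
assert abs(Fraction(s) - exact) <= abs(Fraction(below) - exact)
assert abs(Fraction(s) - exact) <= abs(Fraction(above) - exact)
```

Note that `exact` is the precise sum of the two stored doubles, not of the decimal literals; the decimal-to-binary conversion is itself a (correctly rounded) operation.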
As demonstrated by examples, even for extremely ill-conditioned problems existence and uniqueness of the solution are verified within bounds of least-significant-bit accuracy.
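The directed roundings underlying such interval computations can be imitated in plain Python by stepping each computed endpoint outward by one unit in the last place. This is coarser than the maximally accurate directed roundings of the theory, but still yields a rigorous enclosure. A minimal sketch for enclosing a dot product (the vectors are illustrative, not from the paper):

```python
import math
from fractions import Fraction

def down(x):
    # step one float toward -infinity: a valid (if slightly wide) downward rounding
    return math.nextafter(x, -math.inf)

def up(x):
    # step one float toward +infinity
    return math.nextafter(x, math.inf)

def iadd(a, b):
    # interval addition with outward rounding
    return (down(a[0] + b[0]), up(a[1] + b[1]))

def imul(a, b):
    # interval multiplication: extremes over endpoint products, rounded outward
    ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (down(min(ps)), up(max(ps)))

def idot(xs, ys):
    # enclosure [lo, hi] of the exact real dot product of two float vectors
    acc = (0.0, 0.0)
    for x, y in zip(xs, ys):
        acc = iadd(acc, imul((x, x), (y, y)))
    return acc

xs = [0.1, 0.2, 0.3]
ys = [0.7, -1.3, 0.9]
lo, hi = idot(xs, ys)
exact = sum(Fraction(x) * Fraction(y) for x, y in zip(xs, ys))
assert Fraction(lo) <= exact <= Fraction(hi)
```

The enclosure is verified against exact rational arithmetic; the bounds here are a few units in the last place apart, whereas an arithmetic with true directed roundings would produce the tightest possible interval at each step.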
Standard error estimates in numerical linear algebra are often of the form γ_k|R||S|, where R, S are known matrices and γ_k := ku/(1−ku), with u denoting the relative rounding error unit. Recently we showed that for a number of standard problems γ_k can be replaced by ku, for any order of computation and without restriction on the dimension. Such problems include LU and Cholesky decomposition, triangular system solving by substitution, matrix multiplication, and more. The theoretical bound implies a practically computable bound by estimating the error in the floating-point computation of ku|R||S|. Standard techniques, however, again imply a restriction on the dimension. In this note we derive simple computable bounds that are valid without restriction on the dimension. As the bounds are mathematically rigorous, they may serve in computer-assisted proofs.
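The flavour of such bounds can be illustrated with a sketch in Python, assuming IEEE binary64 with unit roundoff u = 2⁻⁵³ and a dot product as the model computation. Note the subtlety the note addresses: the bound below is itself evaluated in floating point, so as written it is illustrative rather than fully rigorous.

```python
from fractions import Fraction

u = 2.0 ** -53  # relative rounding error unit of IEEE binary64

def dot_with_bound(x, y):
    """Floating-point dot product s together with the a-priori-style bound k*u*|x|^T|y|.

    Illustration only: t and the final product k*u*t are themselves rounded,
    so this is not yet the rigorous computable bound derived in the note.
    """
    k = len(x)
    s = t = 0.0
    for a, b in zip(x, y):
        s += a * b        # floating-point accumulation of the dot product
        t += abs(a * b)   # floating-point accumulation of |x|^T|y|
    return s, k * u * t

x = [0.1, -0.2, 0.3, 0.4]
y = [0.7, 1.3, -0.9, 0.25]
s, bound = dot_with_bound(x, y)

# compare against the exact dot product of the stored doubles
exact = sum(Fraction(a) * Fraction(b) for a, b in zip(x, y))
assert abs(Fraction(s) - exact) <= Fraction(bound)
```

The point of the note is precisely to turn such a bound into one that remains valid after accounting for the rounding errors committed while computing the bound itself, without reintroducing a restriction on the dimension.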
We present a model problem for global optimization in a specified number of unknowns. We give constrained and unconstrained formulations. The problem arose from structured condition numbers for linear systems of equations with Toeplitz matrices. We present a simple algorithm that uses additional information on the problem to find local minimizers which presumably are global. Without this information it seems quite hard to find the global minimum numerically. For dimensions up to n = 18, rigorous lower bounds for the problem are presented.