A Note on the Perturbation of Arithmetic Expressions

In this paper we present the theoretical foundation of the forward error analysis of numerical algorithms under (i) approximations in "built-in" functions, (ii) rounding errors in arithmetic floating-point operations, and (iii) perturbations of data. The error analysis is based on a linearization method. The fundamental tools of the forward error analysis are systems of linear absolute and relative a priori and a posteriori error equations and the associated condition numbers, which constitute optimal bounds for the possible cumulative round-off errors. The condition numbers permit simple, general, quantitative definitions of numerical stability. The theoretical results have been applied to Gaussian elimination and have proved to be a very effective means of both a priori and a posteriori error analysis.

Under perturbations, an evaluation algorithm yields approximations that can be written in the form of equation (2). The quantities ε_t, called the local errors, are the relative errors of the data input. We shall assume that the local errors are bounded by |ε_t| ≤ ω_t η for t = 0,…,n, where the ω_t are suitable nonnegative weights and η is an accuracy constant.
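As a concrete illustration of this bound, the following sketch measures the local relative error of single IEEE double-precision operations exactly, via rational arithmetic, and checks |ε_t| ≤ ω_t η with unit weights ω_t = 1 and η = 2⁻⁵³ (the unit roundoff of double precision). The function name and the specific choice of η are ours, not the paper's.

```python
from fractions import Fraction
import random

ETA = Fraction(1, 2 ** 53)  # unit roundoff of IEEE double precision

def local_relative_error(a: float, b: float, op: str) -> Fraction:
    """Relative error of one floating-point operation, measured exactly
    with rational arithmetic: eps = (fl(a op b) - (a op b)) / (a op b)."""
    exact = {"+": Fraction(a) + Fraction(b),
             "-": Fraction(a) - Fraction(b),
             "*": Fraction(a) * Fraction(b)}[op]
    computed = Fraction({"+": a + b, "-": a - b, "*": a * b}[op])
    return abs((computed - exact) / exact) if exact else Fraction(0)

# Spot-check the bound |eps_t| <= omega_t * eta with weights omega_t = 1.
random.seed(1)
for _ in range(1000):
    a, b = random.uniform(-1e6, 1e6), random.uniform(-1e6, 1e6)
    for op in "+-*":
        assert local_relative_error(a, b, op) <= ETA
```

The exact rational comparison avoids measuring the error with the same arithmetic that produced it.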

Remark
Many of these theoretical properties do not hold in the presence of rounding errors. Here Cond(A) = ‖A‖ ‖A⁻¹‖ denotes the condition number [4], where ‖A‖ denotes the spectral norm of A (the norm of a matrix is, in some sense, a measure of the magnitude of the matrix).

‖A‖ = |λ_max(AᵀA)|^(1/2)
is the spectral norm. (The notation λ(B) denotes an eigenvalue of B; note that for any real matrix A the matrix AᵀA is symmetric and nonnegative definite.) This is in correspondence with the use of so-called relative or logarithmic derivatives [5].
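The spectral norm and the condition number can be computed directly from the eigenvalues of AᵀA; the following sketch (function names are illustrative) checks the result against NumPy's built-in 2-norm.

```python
import numpy as np

def spectral_norm(A: np.ndarray) -> float:
    """||A|| = |lambda_max(A^T A)|**(1/2).  A^T A is symmetric and
    nonnegative definite, so its largest eigenvalue is real and >= 0."""
    eigenvalues = np.linalg.eigvalsh(A.T @ A)   # sorted ascending
    return float(np.sqrt(abs(eigenvalues[-1])))

def cond(A: np.ndarray) -> float:
    """Cond(A) = ||A|| * ||A^-1|| with the spectral norm."""
    return spectral_norm(A) * spectral_norm(np.linalg.inv(A))

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(spectral_norm(A))   # agrees with np.linalg.norm(A, 2)
print(cond(A))            # agrees with np.linalg.cond(A, 2)
```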

Error estimates for numerical algorithms:
Roundoff errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are). Given two numbers a, b and arbitrary approximations a′, b′ of a, b, the following absolute and relative a priori and a posteriori errors are defined:

A priori: Δa = a′ − a, Pa = (a′ − a)/a, a ≠ 0.
A posteriori: Δ′a′ = a − a′, P′a′ = (a − a′)/a′, a′ ≠ 0.
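A minimal sketch of these four error measures (the function names are ours, chosen only for illustration):

```python
def apriori_abs(a, a1):
    return a1 - a               # Delta a  = a' - a

def apriori_rel(a, a1):
    return (a1 - a) / a         # P a      = (a' - a)/a,   a  != 0

def aposteriori_abs(a, a1):
    return a - a1               # Delta'a' = a - a'

def aposteriori_rel(a, a1):
    return (a - a1) / a1        # P'a'     = (a - a')/a',  a' != 0
```

For a = 2.0 with approximation a′ = 2.5, the a priori relative error is 0.25 while the a posteriori relative error is −0.2; the two measures differ whenever a′ ≠ a.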

Forward Elimination:
The following investigation deals with the error analysis of Gaussian elimination for the solution of regular linear systems

Ax = y: ∑_{k=1}^{n} a_ik x_k = y_i, i = 1(1)n, (1)

with real coefficients and right-hand sides. First the common forward elimination is analyzed, which reduces the given system with one or several right-hand sides to a triangular linear system; it is well known that the determinant of A can also be computed in this way. Thus let Â = (â_ik) be a rectangular n × (n+h) matrix such that â_ik = a_ik for i, k = 1(1)n. For the solution of the above linear system (1), set h = 1 and â_{i,n+1} = y_i, i = 1(1)n. For the error analysis of computing the determinant, put h = 0, so that Â = A. Forward elimination then produces a sequence of matrices (5). By these equations the linear error approximations are uniquely determined as functions of the absolute data errors.
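The reduction of the bordered matrix Â to triangular form, with the determinant obtained along the way, can be sketched as follows. Partial pivoting is added for numerical safety, and all names are illustrative, not the paper's:

```python
import numpy as np

def forward_elimination(A_hat: np.ndarray):
    """Reduce the n x (n+h) matrix A_hat to upper triangular form by
    Gaussian elimination with partial pivoting.  Returns the reduced
    matrix and det(A); with h = 0, A_hat = A."""
    M = A_hat.astype(float).copy()
    n = M.shape[0]
    det = 1.0
    for j in range(n):
        p = j + int(np.argmax(np.abs(M[j:, j])))   # pivot row
        if p != j:
            M[[j, p]] = M[[p, j]]                  # row swap flips the sign
            det = -det
        det *= M[j, j]
        for i in range(j + 1, n):
            M[i, j:] -= (M[i, j] / M[j, j]) * M[j, j:]
    return M, det

def solve(A: np.ndarray, y: np.ndarray) -> np.ndarray:
    """h = 1: append y as column n+1, eliminate, then back-substitute."""
    n = len(y)
    U, _ = forward_elimination(np.column_stack([A, y]))
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (U[i, n] - U[i, i + 1:n] @ x[i + 1:]) / U[i, i]
    return x
```

For A = [[2, 1], [1, 3]] and y = [3, 4] this yields x = (1, 1) and det(A) = 5.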

The elimination yields the factors l_ik (k < i) and u_ik (i ≤ k), and thus the triangular factorization A = LU of A is obtained. The solutions of the linear system are then determined successively by back substitution,

x_i = (ŷ_i − ∑_{k=i+1}^{n} u_ik x_k) / u_ii, i = n,…,1,

where ŷ denotes the transformed right-hand side. The linear error approximation s in (6) is a first-order approximation of the absolute error Δx = x̃ − x of the computed solution x̃; that is, the first-order approximation s of the absolute error of the computed solution vector permits the representation [8]

Δx = s + O(η²),

using the error terms of the arithmetic floating-point operations in (5). Accordingly, the residual of the linear system Ax = y has the form [8, 9]

Ax̃ − y = AΔx = As + O(η²).

By (6) and the triangular factorization A = LU, the linear residual approximation t = As has the representation t = As = LF. Thus s, t can be decomposed into the error and residual contributions of all data errors and all rounding errors of the floating-point arithmetic in forward elimination, and the error and residual contributions of the rounding errors occurring in back substitution.
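The identity Ax̃ − y = AΔx, and hence the recovery of the error from the residual, can be illustrated numerically. In this sketch the error dx is injected deliberately, so the O(η²) term reduces to floating-point noise; the matrix and all names are our own test data, not the paper's example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # well-conditioned test matrix
x = rng.standard_normal(5)                          # exact solution
y = A @ x

dx = 1e-6 * rng.standard_normal(5)   # a known error in the solution
x_tilde = x + dx                     # "computed" solution x~ = x + dx

r = A @ x_tilde - y        # residual A x~ - y = A(dx), up to rounding noise
s = np.linalg.solve(A, r)  # linear error approximation s = A^{-1} r

# s recovers the injected error to first order.
assert np.allclose(s, dx, rtol=1e-4, atol=1e-10)
```

This is exactly the mechanism behind iterative refinement: a solve with the residual as right-hand side estimates the error of the computed solution.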

Condition Numbers [9].
It is presupposed in the following that the absolute data errors Δa_ik, Δy_i of the coefficients and the right-hand sides of the linear system are bounded by |Δa_ik| ≤ α_ik η and |Δy_i| ≤ β_i η, where η is a data accuracy and α_ik, β_i are nonnegative weights. Analogously, it is assumed that the relative rounding errors of the arithmetic floating-point operations of the numerical solution of the linear system are bounded accordingly (the following inequalities are the theoretical background for the condition numbers). It is proved that the relative condition numbers, the stability constants (2), (3), and the above pivotal strategies are invariant under scaling of the linear system, whereas Cramer's rule is not.
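The claimed invariance under scaling can be illustrated with a standard relative condition number, Skeel's cond(A) = ‖ |A⁻¹| |A| ‖_∞. This particular definition is our choice for the sketch, since the paper's own formula is not reproduced here; it is unchanged under row scaling, while the ordinary condition number is not.

```python
import numpy as np

def skeel_cond(A: np.ndarray) -> float:
    """Relative (Skeel) condition number || |A^-1| |A| ||_inf."""
    return float(np.linalg.norm(np.abs(np.linalg.inv(A)) @ np.abs(A), np.inf))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
D = np.diag([1e-3, 1e4])              # drastic row scaling

# The ordinary condition number changes drastically under scaling ...
assert np.linalg.cond(D @ A) > 100 * np.linalg.cond(A)
# ... but the relative condition number is invariant: (DA)^-1 = A^-1 D^-1,
# so |(DA)^-1| |DA| = |A^-1| D^-1 D |A| = |A^-1| |A| for positive diagonal D.
assert np.isclose(skeel_cond(D @ A), skeel_cond(A))
```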

Discussion:
In order to study the feasibility of approximation in the iteration methods (see references), it is clear from the above that the example under discussion diverges when indirect methods are applied, giving divergent results different from the correct approximate results. This is clear from the oscillating graph: the results diverge and run opposite to the results of the direct methods. An iteration method is convergent if the sequence of solutions increases in accuracy and approaches a fixed limit; otherwise it is considered divergent. It is possible to apply another idea to the solution, and even to develop it, by increasing the number of decimal places and by starting with the direct method to solve the system of linear equations, then following it with the iteration method: the numerical value achieved by the direct method serves as the initial value for the iteration method that follows it. The opposite is not true, and in this way we can obtain a more accurate solution with less effort. The iteration methods have been tested and the (JACOPLOT) program has been executed on many examples. The basic programs have been executed on the personal computer (NEC system). The following is a sample of the outputs, together with a printed graph which shows the failure of the iteration method to solve this system, as the results oscillate, as shown in the graphs (see enclosures).
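The oscillating, divergent behaviour described above can be reproduced with a plain Jacobi iteration. This is a sketch of the phenomenon, not the JACOPLOT program itself, and the two test matrices are our own: the iteration converges when the matrix is diagonally dominant and oscillates with growing amplitude when it is not.

```python
import numpy as np

def jacobi(A, y, x0, steps):
    """Plain Jacobi iteration: x_{k+1} = D^{-1} (y - (A - D) x_k)."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = x0.astype(float)
    history = [x]
    for _ in range(steps):
        x = (y - R @ x) / D
        history.append(x)
    return np.array(history)

y = np.array([3.0, 4.0])
x0 = np.zeros(2)

# Diagonally dominant system: the iterates approach the exact solution.
A_good = np.array([[4.0, 1.0], [1.0, 3.0]])
good = jacobi(A_good, y, x0, 50)
assert np.allclose(good[-1], np.linalg.solve(A_good, y))

# Dominance violated: the iterates oscillate with growing amplitude.
A_bad = np.array([[1.0, 3.0], [4.0, 1.0]])
bad = jacobi(A_bad, y, x0, 50)
assert np.linalg.norm(bad[-1] - np.linalg.solve(A_bad, y)) > 1e6
```

Starting the iteration from the direct solution, as proposed above, amounts to iterative refinement; it helps only when the iteration converges at all, which is why the divergent example still fails from any starting point.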
Starting points are representations and estimates of the errors in the elementary arithmetic operations +, −, *, / and the "built-in" functions occurring in the floating-point arithmetic of computers. Condition numbers are defined for the simplest algorithms, consisting of the input of one or two operands followed by an arithmetic operation or function. Estimates for the remainder terms of Taylor's formula are established; by neglecting the remainder terms in the general error equations, the linear error approximations are obtained. Propagation of error is the effect of errors on the uncertainty of a function based on them. When the variables are the values of experimental measurements, they have uncertainties due to measurement limitations which propagate to the combination of variables in the function. The algorithm determines uniquely a mapping A. The solutions of the linear absolute error equations are obtained by means of the associated solution operators L, and those of the relative error equations analogously. The a priori and a posteriori errors of an operand a with approximation a′ are

Δa = a′ − a, Pa = (a′ − a)/a, a ≠ 0 (a priori),
Δ′a′ = a − a′, P′a′ = (a − a′)/a′, a′ ≠ 0 (a posteriori),

and

Δ(a∘b) = a′∘b′ − a∘b, P(a∘b) = (a′∘b′ − a∘b)/(a∘b), ∘ = +, −, *, /,

are the errors of sums, differences, products, and quotients under perturbations of the operands a, b. It is well known that

Δ(a ± b) = Δa ± Δb,
Δ(ab) = bΔa + aΔb + ΔaΔb,
Δ(a/b) = (Δa − (a/b)Δb) / (b + Δb),
P(a ± b) = (a/c)Pa ± (b/c)Pb, where c = a ± b,
P(ab) = Pa + Pb + PaPb,
P(a/b) = (Pa − Pb) / (1 + Pb).

The numerical computation of a∘b first requires the input or computation of the operands a, b, which is carried out approximately.
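These six propagation rules are algebraic identities, so they can be verified exactly in rational arithmetic (the sample operands are arbitrary):

```python
from fractions import Fraction

# Exact rational check of the propagation rules for perturbed operands.
a, b = Fraction(3), Fraction(7)
da, db = Fraction(1, 100), Fraction(-1, 50)     # absolute perturbations
a1, b1 = a + da, b + db                         # perturbed operands a', b'
Pa, Pb = da / a, db / b                         # relative perturbations

assert a1 + b1 - (a + b) == da + db                        # Delta(a+b)
assert a1 * b1 - a * b == b * da + a * db + da * db        # Delta(ab)
assert a1 / b1 - a / b == (da - (a / b) * db) / (b + db)   # Delta(a/b)

c = a + b
assert (a1 + b1 - c) / c == (a / c) * Pa + (b / c) * Pb    # P(a+b)
assert (a1 * b1 - a * b) / (a * b) == Pa + Pb + Pa * Pb    # P(ab)
assert (a1 / b1 - a / b) / (a / b) == (Pa - Pb) / (1 + Pb) # P(a/b)
```

Because Fraction arithmetic is exact, each assertion tests the identity itself rather than a floating-point approximation of it.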
Fig. (1): Bifurcation diagram of the problem. (Instead of Ri read Ki.)