An algebraic model for the propagation of errors in matrix calculus

Abstract We assume that every element of a matrix has a small, individual error, and model it by an external number, which is the sum of a nonstandard real number and a neutrix, the latter being a convex (external) additive group. The algebraic properties of external numbers formalize common error analysis, with rules for calculation which are a sort of mellowed form of the axioms for real numbers. We model the propagation of errors in matrix calculus by the calculus of matrices with external numbers, and study its algebraic properties. Many classical properties continue to hold, sometimes stated in terms of inclusion instead of equality. There are notable exceptions, for which we give counterexamples and investigate suitable adaptations. In particular we study addition and multiplication of matrices, determinants, near inverses, and generalized notions of linear independence and rank.


Introduction
In this article imprecisions in entries of matrices are modelled by (scalar) neutrices, which are convex subgroups of the set of nonstandard real numbers; most of them are external sets. They are a sort of generalized zeros. Each entry of a matrix is an external number, which is the pointwise (Minkowski) sum of a (nonstandard) real number and a neutrix. Every entry has its own individual neutrix, modelling the diversity of imprecisions. The intrinsic vagueness is respected by the Sorites property of neutrices, which are invariant under some shifts. Examples of neutrices are the external set ⊘ of infinitesimals and the external set £ of numbers smaller in absolute value than some standard real number, as well as all multiples of them, but there exist other types of neutrices [25]. The term neutrix is borrowed from Van der Corput, and we were inspired by his Ars Negligendi [5].
Within the setting of external numbers we study the effects of error propagation in calculations with matrices and determinants.
The calculus of external numbers originates from error analysis, which is more or less informal. "Provisional" rules for addition, subtraction, multiplication and division are for instance given in [35], and they lead only to a weak algebraic structure. In the context of external numbers these rules are formalized as Minkowski operations. The fact that neutrices are convex additive groups enables us to build a much stronger algebraic structure, called Complete Arithmetical Solid in [11]. Addition and multiplication satisfy the properties of a completely regular commutative semigroup [32], and adapted forms of distributivity, order relation, Dedekind completeness and the Archimedean property are shown to hold.
We cannot hope that such strong rules hold for matrix calculus; still, the matrices form a regular commutative semigroup for addition: the usual laws for addition are valid, but the sum of a matrix and its additive "inverse" will be a matrix of neutrices, and not the zero matrix. Also in many cases the common laws for multiplication of matrices hold. Problems may appear when multiplying matrices with entries of different sign, in particular when some entries are almost equal in absolute value but of opposite sign, or when the matrix has a small determinant. Still, many algebraic properties hold under quite general conditions; typically entries should not be nearly opposite, a notion defined in Section 2. Sometimes algebraic properties hold in the form of inclusions instead of equalities.
We pay special attention to invertibility, linear dependence and independence, and rank.
In analogy to addition, generically we cannot hope that the product of two matrices yields the identity matrix. We speak of a near inverse if we obtain the identity matrix up to neutrices included in ⊘. We give conditions for near inverses to exist, in terms of not too small determinants.
We give a straightforward definition for linear independence of vectors of external numbers, and relate it to classical linear independence and dependence of vectors of representatives, i.e. real numbers which are elements of the external numbers.
There are several notions of rank of a matrix of external numbers. The row rank is defined in the common way, using linear independence. The minor rank is defined using the non-singularity of minors. In fact a mixed notion called strict rank turns out to be the most operational. We give conditions for its existence, and show that then the row rank is equal to the minor rank.
This article has the following structure. In Section 2 we present some properties of neutrices and external numbers, which are needed for the remaining sections. Some results are recalled, some are new. In Section 3 we show that almost all common properties of operations on matrices hold for non-negative matrices, and give general conditions for these properties to hold beyond. Section 4 deals with the determinant and its minors. In Section 5 we study nearly invertible matrices. In Section 6 we extend the notions of linear dependence and independence to external vectors. Section 7 discusses several notions of rank and their relationships. In Section 8 we relate briefly our approach to other forms of dealing with imprecisions and errors.

Neutrices and external numbers
We recall the definitions of neutrices and external numbers, and some basic properties regarding algebraic rules and the order relation. We derive some new properties which are useful for matrix calculus. For more details on neutrices and external numbers we refer to [2,[10][11][12]25].

Remark 2.1. Throughout this article we use the symbol ⊆ for inclusion and ⊂ for strict inclusion.
Neutrices and external numbers are well-defined external sets in the axiomatic system HST for nonstandard analysis as given by Kanovei and Reeken in [24]. This is an extension of a bounded form of Nelson's Internal Set Theory IST [29]. This theory extends common set theory ZFC by adding an undefined predicate "standard" to the language of set theory, and three new axiom schemes. Introductions to IST are contained in e.g. [9], [8] or [26]. An important feature is that infinite sets always have nonstandard elements. In particular nonstandard numbers are already present within R. Limited numbers are real numbers bounded in absolute value by standard natural numbers. Real numbers larger in absolute value than all limited numbers are called unlimited. Their reciprocals, together with 0, are called infinitesimal. Limited numbers which are not infinitesimal are called appreciable.
A (scalar) neutrix is an additive convex subgroup of R. Except for {0} and R, all neutrices are external sets. The set £ of all limited numbers and the set ⊘ of all infinitesimals are neutrices. Note that £ and ⊘ are not sets in the sense of ZFC, for they are bounded subsets of R with no least upper bound. Let ε ∈ R be a positive infinitesimal. Some other neutrices are ε⊘, ε£ and ⋂ st(n)∈N [−ε^n , ε^n ] = £ε^∞̸ .
An external number is the Minkowski sum of a real number and a neutrix. So each external number has the form α = a + A = {a + x | x ∈ A}, where A is called the neutrix part of α, denoted by N(α), and a ∈ R is called a representative of α. If N(α) = {0}, we may identify {a} and a, so that the real numbers are external numbers. If 0 ∉ α = a + N(α), we call α zeroless, and then its relative uncertainty is defined by

R(α) = N(α)/a.    (1)

The collection of all neutrices is not an external set, but a definable class, denoted by N. Also the external numbers form a class, denoted by E.
Addition, subtraction, multiplication and division are given by the Minkowski operations of Definition 2.2 below.
Definition 2.2. Let α = a + A and β = b + B be two external numbers, with A and B neutrices. Then
1. α ± β = a ± b + A + B = a ± b + max{A, B};
2. αβ = ab + aB + bA + AB = ab + max{aB, bA, AB};
3. if β is zeroless, α/β = a/b + A/b + aB/b² = a/b + max{A/b, aB/b²}.
Neutrices are ordered by inclusion, and the maximums above are taken in this sense. If α or β is zeroless, in Definition 2.2.2 we may neglect the neutrix product AB. The rules of Definition 2.2 reflect the common rules for the propagation of errors of error analysis. In [35] they are called "provisional rules", for this analysis is informal; the rules hold approximately and somewhat ad hoc, relying on common sense. In contrast, in terms of external numbers, the equalities of Definition 2.2 are part of formal mathematics and permit us to prove much more general laws, which lead to the notion of Complete Arithmetical Solid in [11]. This structure is a completely regular commutative semigroup [32] for addition and multiplication, and distributivity, the order relation, Dedekind completeness and the Archimedean property hold in modified forms.
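The rules of Definition 2.2 can be made concrete with a small sketch. Below is a minimal Python model, under the strong simplifying assumption that a neutrix can be represented by a single nonnegative float w (a stand-in for a scaled neutrix such as w⊘); genuine neutrices are external sets and, among other things, absorb appreciable factors, which a float width does not. The class name and interface are ours, purely for illustration.

```python
class ExternalNumber:
    """Toy model of an external number: a representative a plus a neutrix
    modelled by a single nonnegative scale w (a crude stand-in for w*oslash)."""

    def __init__(self, a, w=0.0):
        self.a = float(a)
        self.w = abs(float(w))

    def zeroless(self):
        # 0 does not belong to a + N(alpha): here, |a| exceeds the width.
        return abs(self.a) > self.w

    def __add__(self, other):
        # alpha + beta = (a + b) + max{A, B}
        return ExternalNumber(self.a + other.a, max(self.w, other.w))

    def __neg__(self):
        return ExternalNumber(-self.a, self.w)

    def __sub__(self, other):
        return self + (-other)

    def __mul__(self, other):
        # alpha * beta = ab + max{aB, bA, AB}
        w = max(abs(self.a) * other.w, abs(other.a) * self.w, self.w * other.w)
        return ExternalNumber(self.a * other.a, w)

    def inverse(self):
        # 1/alpha = 1/a + A/a^2, defined only for zeroless alpha
        assert self.zeroless(), "inverse only defined for zeroless numbers"
        return ExternalNumber(1.0 / self.a, self.w / self.a ** 2)

    def __repr__(self):
        return f"({self.a} + {self.w}*N)"


alpha = ExternalNumber(2.0, 1e-9)   # 2 with a small individual error
beta = ExternalNumber(3.0, 1e-6)

print(alpha + beta)     # (5.0 + 1e-06*N): the larger neutrix absorbs the smaller
print(alpha * beta)     # (6.0 + 2e-06*N)
print(alpha - alpha)    # (0.0 + 1e-09*N): the individual neutral element, not 0
print(beta.inverse())   # (0.333... + ~1.1e-07*N)
```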
By Theorem 2.10 distributivity certainly holds with respect to external numbers of the same sign, but we may weaken this to numbers which are not nearly opposite, as given by the next definition.
For example, a real number b and −b are nearly opposite, but they are not opposite with respect to ⊘. If a and b are two standard real numbers such that b ≠ −a, they are not nearly opposite.

Proposition 2.14. Let α, β, γ ∈ E be such that α and β are not nearly opposite. Then (α + β)γ = αγ + βγ.
Proof. Let γ = c + C. The distributive law holds if α or β is neutricial. In case both are zeroless, we may suppose that |α| ≤ |β|. Then with β = b + B we have |α/b| ≤ 1 + ⊘. Also α and b are not nearly opposite, so 1 + α/b ⊂ @, hence by Proposition 2.9.2 also 1 + R(β) + α/b is a subset of @. Then 1 + R(β) + α/b is neither an absorber, nor an exploder of C. Then by Theorem 2.10 and Proposition 2.9.1,

(α + β)γ = (b(1 + R(β) + α/b))γ = b(γ + R(β)γ + (α/b)γ).

Hence by Theorem 2.10 and Theorem 2.6,

(α + β)γ = bγ + Bγ + αγ = (b + B)γ + αγ = βγ + αγ = αγ + βγ.
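In the same toy width model, the need for the hypothesis of Proposition 2.14 can be illustrated numerically: for exactly opposite α and β the left-hand side annihilates the neutrix of γ, while the right-hand side retains it.

```python
def add(x, y):
    # Minkowski addition in the width model: (a + b, max{A, B})
    return (x[0] + y[0], max(x[1], y[1]))

def mul(x, y):
    # (ab, max{aB, bA, AB})
    return (x[0] * y[0], max(abs(x[0]) * y[1], abs(y[0]) * x[1], x[1] * y[1]))

alpha = (1.0, 0.0)    # the real number 1
beta = (-1.0, 0.0)    # exactly opposite, hence nearly opposite to alpha
gamma = (1.0, 1e-9)   # 1 with a small neutrix

lhs = mul(add(alpha, beta), gamma)              # (alpha + beta)*gamma
rhs = add(mul(alpha, gamma), mul(beta, gamma))  # alpha*gamma + beta*gamma
print(lhs)   # (0.0, 0.0)  : the sum 0 annihilates the neutrix of gamma
print(rhs)   # (0.0, 1e-09): the neutrix survives, so lhs is strictly included

# Caveat of the toy model: a genuine neutrix absorbs appreciable factors
# (0.5*oslash = oslash), while a float width does not, so for numbers that
# are not nearly opposite the model shows equality only up to such factors.
```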

Matrices with external numbers
In this section operations on matrices with external numbers are studied. We start with addition, and show that it satisfies the rules of a regular commutative semigroup. Then we study scalar multiplication and matrix multiplication. In many cases, in particular if the elements of the matrices are of the same sign, the same laws hold as for real matrices. External numbers satisfy the subdistributivity property, and the same is true for scalar multiplication and matrix multiplication. We present conditions for the distributivity property to hold. In contrast to the multiplication of external numbers, the associative property does not hold for scalar multiplication and matrix multiplication. We provide conditions so that the subassociativity property is valid, and conditions so that the associativity property is valid. We will consider matrices of the form A = (α ij )m×n , where m, n ∈ N and α ij ∈ E for 1 ≤ i ≤ m, 1 ≤ j ≤ n; the natural numbers m, n are always supposed to be standard. The transpose of the matrix A is defined by A T = (ν ij )n×m with ν ij = α ji for 1 ≤ i ≤ n, 1 ≤ j ≤ m.

Definition 3.2. For matrices A = (α ij )m×n , B = (β ij )m×n ∈ Mm,n(E) we write A ⊆ B if α ij ⊆ β ij for all 1 ≤ i ≤ m, 1 ≤ j ≤ n.
Operations on Mm,n(E) are defined as in classical linear algebra.

Definition 3.3. Let m, n, p ∈ N. Let A = (α ij )m×n ∈ Mm,n(E), B = (β ij )m×n ∈ Mm,n(E), C = (γ ij )n×p ∈ Mn,p(E) and λ ∈ E. Then
1. A + B = (α ij + β ij )m×n ;
2. λA = (λα ij )m×n ;
3. AC = (Σ 1≤k≤n α ik γ kj )m×p .

The additive structure of Mm,n(E) reflects the additive structure of E, which is a commutative regular semigroup, and also a monoid, meaning that every element α = a + A has the individual neutral element A = α − α [10], but there exists also a universal neutral element in the form of 0.
Proposition 3.4. The structure Mm,n(E) is a commutative regular semigroup for addition. In fact, let A, B, C ∈ Mm,n(E). Then
1. A + B = B + A;
2. (A + B) + C = A + (B + C);
3. A + (−A) + A = A;
4. A + (A − A) = A, i.e. the neutricial matrix A − A = (N(α ij ))m×n is the individual neutral element of A.
The structure Mm,n(E) is also a monoid, with neutral element O.
Proof. The associative law and commutative law for addition hold for external numbers, hence also for matrices. This makes Mm,n(E) a commutative semigroup for addition. As for Parts 3 and 4, let A = (α ij )m×n ∈ Mm,n(E). Then A + (−A) + A = (α ij + N(α ij ))m×n = (α ij )m×n = A, and in the same way A + (A − A) = A.

In the remaining part of this section we study multiplication and its interaction with addition. We will see that most of the usual properties hold for non-negative matrices and non-negative scalars, and outside these classes they still hold under quite general conditions. For any external number α one has 0 · α = 0 and 1 · α = α; also the multiplication of external numbers is associative. With these properties, the proofs of the next propositions are straightforward.
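As a quick check of the additive structure of Proposition 3.4, here is a width-model sketch; the matrix data are our own, and the model is the simplified one sketched in Section 2.

```python
def madd(A, B):
    # entrywise Minkowski addition in the width model
    return [[(a + b, max(wa, wb)) for (a, wa), (b, wb) in zip(ra, rb)]
            for ra, rb in zip(A, B)]

def mneg(A):
    return [[(-a, w) for (a, w) in row] for row in A]

A = [[(1.0, 1e-9), (2.0, 1e-6)],
     [(0.5, 0.0), (3.0, 1e-3)]]

N = madd(A, mneg(A))    # A - A: the individual neutral element of A
print(N)                # all values 0.0, but the individual neutrices remain
print(madd(N, A) == A)  # True: A + (-A) + A = A, the regularity property
```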
It follows from the fact that the multiplication of external numbers is not distributive that scalar multiplication and the multiplication of matrices are not distributive over addition. Theorem 3.8 below presents conditions such that the distributive property does hold.
Definition 3.7. Let A = (α ij )m×n , B = (β ij )m×n ∈ Mm,n(E). The matrices A and B are said to be not nearly opposite if α ij and β ij are not nearly opposite for all 1 ≤ i ≤ m, 1 ≤ j ≤ n.
Note that matrices with entries of the same sign, and in particular non-negative matrices, are not nearly opposite.

Proof. Part 1 and Part 2 follow directly from Theorem 2.10. As for Part 3, let A(B + C) = (µ ij )m×p , AB = (λ ij )m×p and AC = (ν ij )m×p . It follows from Theorem 2.10 in case max 1≤i≤m,1≤j≤n R(α ij ) ≤ min 1≤i≤n,1≤j≤p max{R(β ij ), R(γ ij )}, and from Proposition 2.14 in case B, C are not nearly opposite, that α ij (β rs + γ rs ) = α ij β rs + α ij γ rs whenever 1 ≤ i ≤ m, 1 ≤ j, r ≤ n, 1 ≤ s ≤ p. As a result, µ ij = Σ 1≤k≤n α ik (β kj + γ kj ) = Σ 1≤k≤n (α ik β kj + α ik γ kj ) = λ ij + ν ij , hence A(B + C) = AB + AC.

The next corollary gives conditions for distributivity in the case of zeroless matrices, in terms of minimal or maximal relative uncertainty.
Then the result follows from Part 1 of Theorem 3.8. 2. The result follows from the fact that max 1≤i≤m,1≤j≤n R(α ij ) ≤ A/α and from Part 2 of Theorem 3.8.
3. As in the proof of Part 1, formula (3) holds for all 1 ≤ i ≤ n, 1 ≤ j ≤ p. Then the distributivity property is a consequence of Part 2 and Part 3 of Theorem 3.8.
The subdistributivity property for external numbers implies the following general properties of subdistributivity for scalar multiplication and multiplication of matrices. The proofs are immediate.
The fact that the distributivity law is not valid in general implies that the multiplication of matrices is not associative. An example to this effect, with (AB)C ≠ A(BC), is given in [22, p. 35].
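We cannot restate the example of [22] here, but the same phenomenon is easy to produce in the width model: a cancellation on one side of the triple product annihilates a neutrix that survives on the other side. The matrices below are our own illustration.

```python
def eadd(x, y):
    return (x[0] + y[0], max(x[1], y[1]))

def emul(x, y):
    return (x[0] * y[0], max(abs(x[0]) * y[1], abs(y[0]) * x[1], x[1] * y[1]))

def matmul(A, B):
    # Minkowski matrix product in the width model
    rows, inner, cols = len(A), len(B), len(B[0])
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            s = (0.0, 0.0)
            for k in range(inner):
                s = eadd(s, emul(A[i][k], B[k][j]))
            row.append(s)
        out.append(row)
    return out

A = [[(1.0, 0.0), (-1.0, 0.0)]]   # 1 x 2, real entries
B = [[(1.0, 0.0)], [(1.0, 0.0)]]  # 2 x 1, real entries
C = [[(1.0, 1e-9)]]               # 1 x 1, the entry 1 + small neutrix

print(matmul(matmul(A, B), C))  # [[(0.0, 0.0)]]  : AB = 0 annihilates the neutrix
print(matmul(A, matmul(B, C)))  # [[(0.0, 1e-09)]]: here the neutrix survives
# So (AB)C is strictly included in A(BC): associativity fails, subassociativity holds.
```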
However, the subdistributivity of multiplication of external numbers, as shown in Corollary 2.7, implies the following properties of inclusion.
If A is a real matrix, or else by non-negativity of BC, the last sum is equal to the corresponding entry of A(BC). 2. The proof is similar to the proof of Part 1.
Below we provide conditions for the associative law for the multiplication of matrices to be valid.

Proof. In case A and C are both real matrices, the left-hand side and right-hand side distributivity properties (4) and (5) hold by Proposition 2.12, because real numbers are precise. In case B is a neutricial matrix they hold because BC and AB are neutricial. The properties (4) and (5) also hold if A, B, C are all non-negative matrices, because distributivity always holds when external numbers have the same sign. Then the result follows from Theorem 3.13.
Obviously, the above associative property continues to hold if the entries of each matrix all have the same sign. The conditions of Corollary 3.14 are in a sense minimal conditions for associativity. They guarantee that the distributive properties of (4) and (5) hold term-by-term, but this is not necessary. To illustrate this, consider the case where distributivity holds for a term of the sum with a bigger neutrix than the neutrix of another term, for which proper subdistributivity may hold. Then the equality of sums (4) still holds, and this is sufficient to be able to prove the corresponding associativity property.
An important class of matrices is given by the non-negative matrices. It follows from the above results that the class of non-negative matrices satisfies all axioms of a vector space, except for the existence of inverse elements for addition; such a space was called a semi-vector space in [15]. Also distributivity and associativity of multiplication are respected.

Determinants
We define determinants of matrices with external numbers in the usual way through sums of signed products of entries. We show that this value does not always correspond to the set of determinants of representatives.
Common techniques for calculation often use distributivity, so they need to be applied with care, for they may modify the neutrix part. We show that this is the case for the Laplace expansion. Using this expansion we derive a lower bound for minors, and also an upper bound is derived. Then we give conditions for the validity of the sum property for determinants. To calculate determinants in practice often the operations of Gauss elimination are applied, in order to obtain a triangular matrix; in this context this means a matrix with neutrices below or above the diagonal. This process searches for opposite terms, a context where the distributivity law is no longer valid. The use of Gauss elimination with real coefficients is helpful, but even then we sometimes need conditions on the order of magnitude of minors and neutrix parts. In the final part we give a condition implying that the determinant of a triangular matrix equals the product of the elements on the diagonal.

4.1. Definition of the determinant
Definition 4.1. Let A = (α ij )n×n ∈ Mn(E). The determinant of A is defined by

det(A) = Σ σ∈Sn sgn(σ) α 1σ(1) · · · α nσ(n) ,    (6)

where Sn is the set of all permutations of {1, . . . , n}. We often denote the determinant of the matrix A by ∆.
It is not true in general that the above definition of determinant corresponds to the set of values of determinants of representatives. We have equality in the cases n = 1 and n = 2, but for n ≥ 3 we make repeated use of the same representatives in different products, and thus do not respect the Minkowski rules of Definition 2.2 properly.
For n ≤ 2 we have the equality, for then the determinant is a Minkowski sum of Minkowski products. Now let n = 3. Then it may no longer hold that the sum of products (6) is equal to the set of sums of products of representatives. As for the latter, the Minkowski rules of Definition 2.2 are not applied properly because, say, for the terms α 11 α 22 α 33 and −α 11 α 23 α 32 we choose repeatedly the same representative a 11 of α 11 ; as a consequence we get only part of the value given by (6). In particular this means that the value obtained by the Rule of Sarrus does not need to correspond to the set of values given by the Rule of Sarrus applied to representatives. We give here an example.
Then det(A) = ⊘. Let ε ≠ 0 and let the representative matrices A ε of A be defined by (8); the Rule of Sarrus applied to A ε yields only a part of the value det(A).
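The effect can be imitated numerically. In the width model, the Minkowski determinant treats every occurrence of an entry independently, while each representative matrix fixes one value per entry; the 3 × 3 matrix below is ours, chosen only to exhibit the gap.

```python
from itertools import permutations
import random

def eadd(x, y):
    return (x[0] + y[0], max(x[1], y[1]))

def emul(x, y):
    return (x[0] * y[0], max(abs(x[0]) * y[1], abs(y[0]) * x[1], x[1] * y[1]))

def sign(perm):
    s, p = 1, list(perm)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def ext_det(A):
    # Minkowski determinant: every occurrence of an entry counts independently
    n, d = len(A), (0.0, 0.0)
    for p in permutations(range(n)):
        t = (float(sign(p)), 0.0)
        for i in range(n):
            t = emul(t, A[i][p[i]])
        d = eadd(d, t)
    return d

w = 1e-9
A = [[(1.0, w), (0.0, 0.0), (0.0, 0.0)],
     [(0.0, 0.0), (1.0, 0.0), (1.0, 0.0)],
     [(0.0, 0.0), (1.0, 0.0), (1.0, 0.0)]]

print(ext_det(A))  # (0.0, 1e-09): alpha_11 enters two permutation products

# Every representative matrix has determinant exactly 0: both Sarrus terms
# containing a_11 use the SAME representative, so they cancel.
for _ in range(3):
    a11 = 1.0 + random.uniform(-w, w)
    print(a11 * (1.0 * 1.0 - 1.0 * 1.0))  # 0.0
```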
The following properties of determinants are obvious and proved using similar arguments as in classical algebra.

3. The determinant of a matrix which has a row of neutrices is a neutrix.
4. The determinant of a matrix which has two identical rows (columns) is a neutrix.

We use the following notation for minors.
We may use the standard notation ∆ i,j to denote the (i, j)-minor of A given by the determinant of the (n − 1) × (n − 1) submatrix of A which results from removing the i-th row and the j-th column of A.

4.2. Laplace expansion
Because of subdistributivity, the Laplace expansion of a determinant along a column or a row may not be equal to the determinant. For example, if we expand the determinant in Example 4.2 along the first column, we obtain a proper subset of det(A). So using products of representatives or the Laplace expansion possibly reduces the neutrix part, and may even turn a neutricial determinant into a zeroless value. We come back to this subject when we discuss singular and non-singular matrices in Section 5.
In general the Laplace expansion of a determinant along a column (row) is always included in the determinant.
Proof. It follows from Part 2 of Proposition 4.3 that it suffices to prove the proposition for j = 1. Let Sn be the set of all permutations of {1, . . . , n} and σ ∈ Sn. The Laplace expansion along the first column and the property of subdistributivity then yield the inclusion.

Equality for the Laplace expansion holds if we expand along a column (row) such that the relative uncertainties of all elements in this column (row) are less than or equal to those of all the remaining elements.
Proof. We only prove the theorem for k = 1, the other cases being similar. The Laplace expansion along column k = 1 yields a sum of products of the entries α i1 with their cofactors, for all 1 ≤ i ≤ n. By Proposition 2.11 and assumption (9) the relative uncertainty of the entries α i1 does not exceed that of their cofactors; then by Proposition 2.12 the products distribute exactly, which gives the equality.
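A width-model sketch of the inclusion: expanding along the first column factors out α 11 and thereby loses its neutrix, which the full Minkowski determinant retains. (The helper functions repeat the toy arithmetic used before; the matrix is ours.)

```python
from itertools import permutations

def eadd(x, y):
    return (x[0] + y[0], max(x[1], y[1]))

def emul(x, y):
    return (x[0] * y[0], max(abs(x[0]) * y[1], abs(y[0]) * x[1], x[1] * y[1]))

def sign(perm):
    s, p = 1, list(perm)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def ext_det(A):
    n, d = len(A), (0.0, 0.0)
    for p in permutations(range(n)):
        t = (float(sign(p)), 0.0)
        for i in range(n):
            t = emul(t, A[i][p[i]])
        d = eadd(d, t)
    return d

def minor(A, i, j):
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

w = 1e-9
A = [[(1.0, w), (0.0, 0.0), (0.0, 0.0)],
     [(0.0, 0.0), (1.0, 0.0), (1.0, 0.0)],
     [(0.0, 0.0), (1.0, 0.0), (1.0, 0.0)]]

# Laplace expansion along the first column
exp = (0.0, 0.0)
for i in range(3):
    cof = emul(((-1.0) ** i, 0.0), ext_det(minor(A, i, 0)))
    exp = eadd(exp, emul(A[i][0], cof))

print(exp)         # (0.0, 0.0): the neutrix of alpha_11 is lost by factoring
print(ext_det(A))  # (0.0, 1e-09): the expansion is strictly included in det(A)
```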

4.3. Reduced matrices and minors
We extend the notion of reduced matrix to matrices such that the maximal absolute value of an element is of the form 1 + A, with A ⊆ ⊘ a small neutrix. We give lower bounds for minors, which are useful when studying Gauss elimination. Also upper bounds are given, as well as bounds for the associated neutrices.
Reduced matrices have in each column (row) a minor of (n − 1)-th order at least of the same order of magnitude as the determinant. The result has some relevance for Gauss elimination, for pivots may be expressed in terms of minors [18], so it is better to be able to choose them not too small.
The proposition below gives an upper bound for all minors of a reduced matrix, and also for the corresponding neutrix parts.

Proposition 4.9. Let A = (α ij )n×n ∈ Mn(E) be a reduced matrix and let A ≡ max 1≤i,j≤n A ij . Let 1 ≤ k ≤ n and let M be a minor of A of order k. Then 1. M ⊆ £; 2. N(M) ⊆ A.

When k = n we obtain that N(∆) ⊆ A.

4.4. Addition property
The addition property det(C) = det(A) + det(B), where B is equal to A except for one line, and C is obtained from A and B by summing with respect to this line, does not hold in full generality, as shown in Example 4.10 below. General conditions for this addition property to hold are stated in the next proposition.
or β kj and γ kj are not nearly opposite for all 1 ≤ j ≤ n, then (11) holds with equality instead of inclusion.
If β kj and γ kj are not nearly opposite, we also have (13). This means that the corresponding products distribute for all σ ∈ Sn . As a result, equality holds in (11).

4.5. On Gauss elimination
The operations of Gauss elimination on the determinant of a matrix A can be effectuated for rows and columns, and because det(A) = det(A T ), without restriction of generality we may consider only operations on rows. The effect of interchanging two rows of a matrix has been indicated in Proposition 4.3.2.
Because of subdistributivity, the operations of multiplying a row by an external number, and of adding a multiple of one row to another, may generate inclusions instead of equalities. In the first case we may avoid this by taking the external number to be sufficiently sharp, in particular by taking a real number. Even this may not be sufficient in the second case, in the presence of too big neutrices or too big elements in the matrix. Bounds are given to guarantee equality.
We start with the operation of multiplying a row by a scalar.

Proof. One has the statement by a direct calculation with Definition 2.2.

Proposition 4.12. Let α be an external number and A = (α ij )n×n ∈ Mn(E). Assume that R(α) ⊆ R(α ij ) for all 1 ≤ i, j ≤ n. Then det(αA) = α^n det(A).

Proof. Put λ σ = α 1σ(1) · · · α (i−1)σ(i−1) α iσ(i) α (i+1)σ(i+1) · · · α nσ(n) for σ ∈ Sn . Then R(λ σ ) = max 1≤i≤n R(α iσ(i) ) by Proposition 2.11. By the assumption, R(α) ⊆ R(λ σ ) for all σ ∈ Sn . Then by Proposition 2.12 formulas (14) and (15) hold, which yield the result.

The operation of adding a real scalar multiple of one row to another may again lead to inclusions, for we may blow up neutrices. For example, let

A = ( 1  1 ; ⊘  1 )

and ω be an unlimited number. Let B be the matrix which is obtained from the matrix A by adding the multiple ω of the second row to the first one.
Then

B = ( 1 + ω⊘  1 + ω ; ⊘  1 ).

We see that det(A) = 1 + ⊘ while det(B) = ω⊘, so a zeroless determinant is transformed into a neutrix containing it. We present a general property on how determinants behave under the addition of a multiple of a row to another row, and derive from it a condition of invariance.

Proof. 1. Let 1 ≤ k ≠ p ≤ n. Let B be the matrix obtained from A by replacing the row k with a copy of row p. Now det(B) is a neutrix since B has two identical rows. Because |α| is zeroless, we may choose a non-zero representative a ∈ α such that |α ij /a| ≤ 1 + ⊘ for all 1 ≤ i, j ≤ n. Let R be obtained from B by dividing every entry by a; then R is a reduced matrix. Note that det( α pj1  α pj2 ; α pj1  α pj2 ) ⊆ aA for 1 ≤ j1 < j2 ≤ n. This implies that det(R) ⊆ A/a. Also, by Proposition 4.12 and Proposition 2.9.1,

det(B) = a^n det(R) ⊆ a^n A/a = a^{n−1} A = α^{n−1} A.

In addition, if λα^{n−1} A ⊆ N(det(A)), by (17) it holds that λ det(B) ⊆ N(det(A)). Then we obtain from (18) that det(A′) = det(A).
Observe that the first condition of Theorem 4.13 is automatically satisfied if λ ∈ R. For reduced matrices the first condition in Part 2 of Theorem 4.13 simplifies to λA ⊆ N(det(A)).
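The blow-up in the example preceding Theorem 4.13 can be imitated in the width model, with large and small floats as stand-ins for ω and ⊘:

```python
def eadd(x, y):
    return (x[0] + y[0], max(x[1], y[1]))

def emul(x, y):
    return (x[0] * y[0], max(abs(x[0]) * y[1], abs(y[0]) * x[1], x[1] * y[1]))

def esub(x, y):
    return eadd(x, (-y[0], y[1]))

def det2(M):
    return esub(emul(M[0][0], M[1][1]), emul(M[0][1], M[1][0]))

def zeroless(x):
    return abs(x[0]) > x[1]

oslash = 1e-9   # stand-in for the neutrix oslash
omega = 1e12    # stand-in for an unlimited number

A = [[(1.0, 0.0), (1.0, 0.0)],
     [(0.0, oslash), (1.0, 0.0)]]
print(det2(A), zeroless(det2(A)))  # (1.0, 1e-09) True : det(A) = 1 + oslash

# B: first row replaced by row1 + omega * row2
row = [eadd(A[0][j], emul((omega, 0.0), A[1][j])) for j in range(2)]
B = [row, A[1]]
print(det2(B), zeroless(det2(B)))  # (1.0, ~1000.0) False : det(B) = omega*oslash
```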

4.6. Determinants of triangular matrices
Classically we use Gauss elimination to transform a determinant into a determinant of a triangular matrix, and then the determinant is the product of the elements on the diagonal. In the context of external numbers the techniques of Subsection 4.5 generate neutrices instead of zeros, and by Theorem 4.13 the neutrix of the determinant may be modified. Proposition 4.15 shows that the determinant of a triangular matrix may not be equal to the product of the entries on the diagonal, and again it will be needed to add a neutrix.
Definition 4.14. Let A = (α ij )n×n ∈ Mn(E). The matrix A is called upper triangular if α ij is a neutrix for all 1 ≤ j < i ≤ n. The matrix A is called lower triangular if α ij is a neutrix for all 1 ≤ i < j ≤ n. An upper triangular or lower triangular matrix is called a triangular matrix.

A triangular matrix with determinant equal to a neutrix is given by

A = ( 1  ω ; ⊘  1 ),

where ω is an unlimited number. Then 1 · 1 = 1 ≠ det(A) = 1 + ω⊘ = ω⊘. The next proposition gives an upper bound for such neutrices.
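In the width model, with stand-ins ⊘ ≈ 1e-9 and ω ≈ 1e12, the computation of this example reads:

```python
def emul(x, y):
    return (x[0] * y[0], max(abs(x[0]) * y[1], abs(y[0]) * x[1], x[1] * y[1]))

oslash, omega = 1e-9, 1e12
diag = emul((1.0, 0.0), (1.0, 0.0))               # product of diagonal entries
off = emul((omega, 0.0), (0.0, oslash))           # omega * oslash
det_A = (diag[0] - off[0], max(diag[1], off[1]))  # 1*1 - omega*oslash
print(diag)   # (1.0, 0.0)
print(det_A)  # (1.0, ~1000.0): width exceeds the value, so det(A) is neutricial
```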

Inverse matrices
The additive inverse and multiplicative inverse of an external number α are defined up to a neutrix, for α − α = N(α) and, if α is zeroless, one has α/α = 1 + R(α) with R(α) ⊆ ⊘. Proposition 3.4 shows that the additive inverse of a matrix of external numbers exists up to a neutricial matrix. We define the multiplicative inverse of a matrix of external numbers also with respect to a neutrix, contained in ⊘. This neutrix is an upper bound for the precision that can be obtained and the (not unique) inverse is defined in terms of inclusion.
The relationship between invertible matrices and non-singular matrices (matrices with zeroless determinant) is investigated, as well as the possibility to determine inverses with the help of cofactors. Theorem 5.6 states that such inverses exist if the maximal absolute value of the elements of the matrix is zeroless, and the determinant is not too small.
Definition 5.2. Let N ⊆ ⊘ be a neutrix, let In be the identity matrix of order n and put In(N) ≡ In + (N)n×n . The matrix A is said to be invertible with respect to N if there exists a square matrix B = (β ij )n×n such that AB ⊆ In(N) and BA ⊆ In(N). Then B is called an inverse matrix of A with respect to N and with abuse of notation we may write B = A^{−1}_N .
If A is invertible with respect to N ⊆ ⊘, it is invertible with respect to every neutrix M with N ⊆ M ⊆ ⊘. In case A is a real square matrix, the inverse matrix of A with respect to {0} becomes the classical one and we simply write A^{−1} .

Definition 5.3. Let n ∈ N and A ∈ Mn(E). Let C ij = (−1)^{i+j} ∆ i,j for 1 ≤ i, j ≤ n. We call C = (C ij )n×n the cofactor matrix of A.
Even if A is a non-singular matrix, the matrix det(A)^{−1} C^T is not always an inverse matrix of A with respect to a neutrix: for an infinitesimal ε > 0 one may construct a non-singular 2 × 2 matrix A with zeroless determinant det(A) = ε, so small that the entries of det(A)^{−1} C^T acquire unlimited neutrix parts.
We have a converse for 2 × 2 matrices.

Proposition 5.5. Let A ∈ M 2 (E) be invertible with respect to a neutrix N ⊆ ⊘. Then A is non-singular.

Proof. Suppose that A is singular. Then 0 ∈ det(A). By (7) there exists a representative matrix P of A such that det(P) = 0. Let B be an inverse matrix of A with respect to some neutrix N ⊆ ⊘, and Q be a representative matrix of B. Then

det(PQ) = det(P)det(Q) = 0.    (21)

On the other hand, one has AB ⊆ I 2 (N). Now PQ is a representative matrix of I 2 (N), so det(PQ) ≠ 0, contradicting (21). Hence A is non-singular.
The result above does not hold any more for n > 2. This is related to the fact that the set of determinants of representatives of a given matrix A may be strictly contained in det(A). In fact, every matrix of representatives of a singular matrix may be non-singular. For example, consider again the matrix A of Example 4.2. It is singular with determinant equal to ⊘, but it was shown that the set of determinants of matrices of representatives is equal to (1 + ⊘)ε, so they are all non-singular. Also A is invertible with respect to ⊘; an inverse matrix can be given explicitly.

Theorem 5.6 below gives conditions such that non-singular matrices are invertible. If the matrix A is reduced, a converse holds if det(A) is not so small as to be an absorber of A. In general det(A) should not be an absorber of α^n A, where α ≡ max 1≤i,j≤n |α ij | is supposed zeroless and A ≡ max 1≤i,j≤n A ij .

Theorem 5.6. Let A = (α ij )n×n ∈ Mn(E) be a non-singular matrix. Assume that
1. α is zeroless;
2. det(A) is not an absorber of α^n A.
Then A is invertible with respect to A/α and B ≡ det(A)^{−1} C^T is an inverse matrix with respect to A/α.
Proof. Note that A/α ⊆ ⊘, because α is zeroless. We first assume that A is a reduced, non-singular matrix. Let A = (α ij )n×n with α ij = a ij + A ij . Let P = (a ij )n×n , K = (A ij )n×n and ∆ = d + D with d = det(P) ≠ 0. Let Q = (b ij )n×n be the inverse matrix of P; then b ij = c ji /d, with R = (c ij )n×n the matrix of cofactors of P. The cofactor matrix of A is of the form C = (c ij + C ij )n×n ≡ (γ ij )n×n , and we define M = (C ij )n×n and B = det(A)^{−1} C^T ≡ (b ij + B ij )n×n for 1 ≤ i, j ≤ n. Let L = (B ij )n×n and let In be the identity matrix of order n.
We show that B ij ⊆ A ⊆ ⊘ for all 1 ≤ i, j ≤ n. Observe that D ⊆ A and C ij ⊆ A for 1 ≤ i, j ≤ n by Proposition 4.9.2, and that Proposition 4.9.1 implies that d is limited and γ ij ⊆ £ for all 1 ≤ i, j ≤ n. So ∆ is not an absorber of A, so neither is d, and

AB ⊆ In + (A)n×n .    (22)

Indeed, since P ⊆ (£)n×n and L ⊆ (A)n×n , we derive that PL ⊆ (A)n×n . (23) Also K ⊆ (A)n×n , which implies that KQ ⊆ (A)n×n . (24) In addition, KL ⊆ (A)n×n . (25) Then (22) follows from (23)-(25), for PQ = In . As a consequence AB = PQ + PL + KQ + KL ⊆ In + (A)n×n = In(N). Similarly, we derive that BA ⊆ In(N).
We now assume that A = (α ij )n×n ∈ Mn(E) is an arbitrary non-singular matrix such that α is zeroless. Let a ∈ α be non-zero. Then A = aG where G = (α ij /a)n×n ≡ (η ij )n×n is a reduced matrix. Because A is non-singular, the matrix G is non-singular. Also η ≡ α/a is zeroless. Let η ij = g ij + G ij for all 1 ≤ i, j ≤ n and G ≡ max 1≤i,j≤n G ij = A/|a|. Then det(G) = det(A)/a^n , and condition 2 carries over, implying that det(G) is not an absorber of G. Since G is reduced, by the above argument G^{−1} = det(G)^{−1} D^T is an inverse matrix of G with respect to G ⊆ ⊘, where D is the cofactor matrix of G. Let H = a^{−1} G^{−1} = (h ij + H ij )n×n .
Then H = det(A)^{−1} C^T , with C the cofactor matrix of A. Then H is an inverse matrix of A with respect to G, because AH = aG · a^{−1} G^{−1} = GG^{−1} ⊆ In(G), and similarly, HA ⊆ In(G).
The choice of the representative matrix P of A in the proof of Theorem 5.6 is arbitrary, and P^{−1} is always a representative of A^{−1}_N . The final result of this section is an obvious consequence of the fact that (P^{−1})^{−1} = P.
Corollary 5.7. Let A = (α ij )n×n ∈ Mn(E) be invertible with respect to a neutrix N ⊆ ⊘ and let A^{−1}_N be an inverse matrix of A with respect to N. Then A^{−1}_N is invertible with respect to N and A is an inverse matrix of A^{−1}_N with respect to N.
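To close the section, here is a width-model sketch of a near inverse for n = 2, following the cofactor formula of Theorem 5.6; the concrete matrix is our own choice, and the model ignores the finer structure of neutrices.

```python
from functools import reduce

def eadd(x, y):
    return (x[0] + y[0], max(x[1], y[1]))

def emul(x, y):
    return (x[0] * y[0], max(abs(x[0]) * y[1], abs(y[0]) * x[1], x[1] * y[1]))

def eneg(x):
    return (-x[0], x[1])

def einv(x):
    # 1/alpha = 1/a + A/a^2, requires alpha zeroless
    assert abs(x[0]) > x[1]
    return (1.0 / x[0], x[1] / x[0] ** 2)

def matmul(A, B):
    return [[reduce(eadd, (emul(A[i][k], B[k][j]) for k in range(len(B))))
             for j in range(len(B[0]))] for i in range(len(A))]

w = 1e-9
A = [[(2.0, w), (1.0, w)],
     [(1.0, w), (1.0, w)]]

det_A = eadd(emul(A[0][0], A[1][1]), eneg(emul(A[0][1], A[1][0])))  # ~1 + 2e-9
inv_d = einv(det_A)

# transposed cofactor matrix (adjugate) for n = 2
adj = [[A[1][1], eneg(A[0][1])],
       [eneg(A[1][0]), A[0][0]]]
B = [[emul(inv_d, adj[i][j]) for j in range(2)] for i in range(2)]

for row in matmul(A, B):
    print(row)
# Diagonal entries are 1 plus a small width, off-diagonal entries 0 plus a
# small width: AB lands inside I_2(N) for a neutrix N of the order of w.
```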

Linear dependence and independence
In this section we define the notions of linear dependence and linear independence for sets of vectors with external numbers. We give a characterization in terms of sets of vectors of representatives, and show that several common properties of linear independence continue to hold.

Remark 6.1. In the present and next section it is always assumed that the number of components of a vector, and the cardinality of sets of vectors, is standard finite.
We start by introducing some useful notions for external vectors.
Neutrix vectors can be seen as generalizations of the zero vector, and they are used in the following de nition of linear dependence.

Definition 6.3. A set of vectors V = {α 1 , . . . , α m } ⊆ E^n is said to be linearly dependent if there exist real numbers t 1 , ..., t m ∈ R, at least one of them being non-zero, and a neutrix vector A such that

t 1 α 1 + · · · + t m α m = A.

Otherwise, the set V is called linearly independent.
In case {α 1 , . . . , α m } ⊂ R^n , this notion coincides with the conventional notion of linear algebra, for A, being a sum of real vectors, must be the zero vector. In the following characterization of linear independence A also must be the zero vector.

Proposition 6.4. A set V = {α 1 , · · · , α m } of vectors in E^n is linearly independent if and only if the equality t 1 α 1 + · · · + t m α m = A, where A is a neutrix vector, implies t 1 = · · · = t m = 0 and A is the zero vector.
Proof. Assume that V is a linearly independent set of vectors and t 1 α 1 + · · · + t m α m = A, where A is a neutrix vector. If there exists k ∈ {1, . . . , m} such that t k ≠ 0, by Definition 6.3 the set V is linearly dependent, a contradiction. Hence t 1 = · · · = t m = 0 and A = (0, . . . , 0).
The next proposition generalizes some common elementary properties of linear dependence and independence to external vectors. The proofs are obvious.
Proposition 6.7. Let S = {ξ 1 , · · · , ξ m } be a set of vectors in E^n , and k ∈ N be standard.

2. If the set S is linearly dependent, any set of k vectors including S is linearly dependent. 3. If the set S is linearly independent, any set of vectors included in S is linearly independent.
In order to decide whether a set of vectors is linearly independent one often writes the set of vectors in matrix form, and then the usual tools are determinants and Gauss elimination. We already saw that in order to be operational for matrices of external numbers, these tools should satisfy some conditions. So it is of interest to characterize linear independence and dependence of vectors in E^n via representatives, i.e. real numbers, and this is done in the theorem below.

Theorem 6.8. Let {ξ 1 , ..., ξ m } be a set of vectors in E^n .
1. The set {ξ 1 , ..., ξ m } is linearly dependent if and only if there exist representative vectors x i ∈ ξ i for 1 ≤ i ≤ m such that {x 1 , ..., x m } is linearly dependent.
2. The set {ξ 1 , ..., ξ m } is linearly independent if and only if every set {x 1 , ..., x m } with x i ∈ ξ i for 1 ≤ i ≤ m is linearly independent.

Proof. 1. Assume that {ξ 1 , ..., ξ m } is linearly dependent. Then there exist real numbers t 1 , ..., t m , at least one of them non-zero, and a neutrix vector A such that t 1 ξ 1 + · · · + t m ξ m = A. Consequently (0, ..., 0) ∈ t 1 ξ 1 + t 2 ξ 2 + · · · + t m ξ m . Hence there exist vectors x i ∈ ξ i , i = 1, ..., m such that t 1 x 1 + t 2 x 2 + · · · + t m x m = 0. That is, the set {x 1 , ..., x m } is linearly dependent. Conversely, suppose that there exists a linearly dependent set of vectors V = {x 1 , ..., x m } ⊂ R^n , with x i ∈ ξ i for 1 ≤ i ≤ m; let x i = (x i1 , ..., x in ) and ξ ij = x ij + X ij , where 1 ≤ j ≤ n. There exist real numbers t 1 , ..., t m , at least one of them being non-zero, such that t 1 x 1 + t 2 x 2 + · · · + t m x m = 0. Then t 1 x 1j + · · · + t m x mj = 0 for 1 ≤ j ≤ n. So

t 1 ξ 1j + · · · + t m ξ mj = t 1 (x 1j + X 1j ) + · · · + t m (x mj + X mj ) = t 1 x 1j + · · · + t m x mj + t 1 X 1j + · · · + t m X mj = t 1 X 1j + · · · + t m X mj ≡ A j ,

where A j is a neutrix for 1 ≤ j ≤ n. Hence {ξ 1 , ..., ξ m } is linearly dependent.
2. The result follows from Part 1 by contraposition.
Observe that a set of linearly dependent vectors may have a set of linearly independent representative vectors. Indeed, let ε ≠ 0 be infinitesimal, ξ 1 = (⊘, ⊘) and ξ 2 = (0, 1). Then {ξ 1 , ξ 2 } is linearly dependent by Proposition 6.7.1. Now we take x 1 = (ε, 0) ∈ ξ 1 and x 2 = ξ 2 . Then {x 1 , x 2 } is linearly independent.
We end with a generalization of a common property, which is a consequence of Theorem 6.8.
Proposition 6.10. Let S = {ξ 1 , · · · , ξ m } be a set of vectors in E^n , where m ∈ N is standard. If m > n the set S is linearly dependent.
Proof. Suppose S is linearly independent. For 1 ≤ j ≤ m, let x j be a representative vector of ξ j . By Theorem 6.8 the set {x 1 , · · · , x m } is linearly independent; however, more than n vectors in R^n are always linearly dependent, a contradiction. Hence S is linearly dependent.
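A width-model sketch of Theorem 6.8: dependence of external vectors is witnessed by representatives, and the witnessing combination yields a neutrix vector. The example data are ours.

```python
w = 1e-9
xi1 = [(1.0, w), (1.0, 0.0)]   # (1 + small neutrix, 1)
xi2 = [(2.0, 0.0), (2.0, w)]   # (2, 2 + small neutrix)

# Representatives chosen inside the external entries:
x1 = (1.0, 1.0)
x2 = (2.0, 2.0)

# x2 = 2 * x1, so the representatives are linearly dependent ...
det = x1[0] * x2[1] - x1[1] * x2[0]
print(det)   # 0.0

# ... and indeed 2*xi1 - xi2 is a neutrix vector (values 0, widths remain):
t1, t2 = 2.0, -1.0
comb = [(t1 * a1 + t2 * a2, max(abs(t1) * w1, abs(t2) * w2))
        for (a1, w1), (a2, w2) in zip(xi1, xi2)]
print(comb)  # [(0.0, 2e-09), (0.0, 1e-09)]: a neutrix vector
```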

Notions of rank
Four notions of rank of a matrix are given. Three of them are obvious generalizations of classical properties: the row rank, which is the rank of the set of row vectors, the column rank, with analogous definition, and the minor rank, which is based on the maximal dimension of zeroless minors. However they may not be equal. For this reason the fourth notion of rank is introduced, called strict rank, based on both the minors and the rank of a representative matrix. The strict rank is not always defined, but if it exists, it is more operational than the other notions. For example, it makes it possible to prove that the row rank and the minor rank are equal, and then they are also equal to the column rank, and in [36] it was helpful in solving singular systems of linear equations with coefficients and second member in terms of external numbers (the flexible systems of [21]). Below we compare the ranks in various circumstances. The relation between determinants and determinants of representative matrices not being obvious for higher dimensions, some results are only derived for matrices of low dimension or low rank.
Definition 7.1. Let A = (α ij ) be an m × n matrix over E.
1. The row-rank of A is the maximal cardinality of a linearly independent set of row vectors of A and is denoted by r(A), corresponding to the common notation of rank for sets of real vectors.
2. The column-rank of A, denoted by c(A), is the maximal cardinality of a linearly independent set of column vectors of A.
3. The minor-rank of A, denoted by mr(A), is the largest k such that A has a non-singular minor of order k.
4. If the minor-rank of A is equal to the rank of every representative matrix of A, this common value is called the strict rank of A, denoted by sr(A).

Example 7.3. We reconsider the matrix A of Example 4.2, where ε 1 , ε 2 ≠ 0. Theorem 6.8.2 applied to the representative matrices (8) shows that the set of row vectors of A is linearly independent, so r(A) = 3. Analogously c(A) = 3. However we saw that det(A) = ⊘, and 1 + ε 1 is a non-singular minor. So mr(A) = 2, hence the minor rank is less than the row rank. Also the rank of the representative matrices (8), being equal to 3, is different from the minor rank. This means that the strict rank of A is not well-defined.
In the remaining part of this section we investigate the relationship between the various ranks. In general the minor rank is less than or equal to the row rank (Theorem 7.5), and they are equal if the strict rank is well-defined (Theorem 7.6); then they are also equal to the column rank. We also have equality in some special cases of low rank (Propositions 7.9, 7.10 and 7.11) and if the matrix is non-singular, i.e. if the minor-rank is maximal (Theorem 7.4). We start with the latter theorem, since it is used further on.

Theorem 7.4. Let A ∈ Mn(E) be non-singular. Then r(A) = n.

Theorem 7.5. Let A ∈ Mm,n(E). Then mr(A) ≤ r(A).

Proof. Put r = mr(A). Without restriction of generality we may assume that a non-singular minor of order r is formed by the restriction of the rows ξ 1 , . . . , ξ r of A to the columns j 1 < · · · < j r ; denote these restricted vectors by ξ′ 1 , . . . , ξ′ r . By Theorem 7.4 the set {ξ′ 1 , . . . , ξ′ r } is linearly independent. In order to prove that the set of vectors {ξ 1 , . . . , ξ r } is linearly independent, assume that t 1 ξ 1 + · · · + t r ξ r = (A 1 , . . . , A n ), with A 1 , . . . , A n neutrices. Then t 1 α 1j + t 2 α 2j + · · · + t r α rj = A j for 1 ≤ j ≤ n. It follows that t 1 ξ′ 1 + · · · + t r ξ′ r = (A j1 , . . . , A jr ). Because {ξ′ 1 , . . . , ξ′ r } is linearly independent, it holds that t 1 = · · · = t r = 0. Hence the set of vectors {ξ 1 , ..., ξ r } is linearly independent by Proposition 6.4, and mr(A) = r ≤ r(A).

Let A = (α ij )m×n ≡ (a ij + A ij )m×n ∈ Mm,n(E). It was observed in Section 4 that only for m = n ≤ 2 there is a straightforward relation between the determinants given by Definition 4.1 and determinants of representative matrices. This suggests that only for matrices of low rank there exists an obvious relation between the row rank and the minor rank, which can be established without recurring to the strict rank. Proposition 7.9 is a converse to Theorem 7.4 for n = 1 and n = 2, Proposition 7.10 considers the case that if the minor rank is equal to 1, it is equal to the row rank, and then also equal to the strict rank, and Proposition 7.11 the converse case for ranks 1 or 2. We start with some notation.
Notation 7.8. Let A = (α ij )m×n ≡ (a ij + A ij )m×n ∈ Mm,n(E). For 1 ≤ i ≤ m we denote the i-th row vector by α i ≡ (α i1 , · · · , α in ), and write A ≡ max 1≤i≤m,1≤j≤n A ij and A C ≡ min 1≤j≤n max 1≤i≤m A ij .

Proposition 7.9. Let A = (α ij )n×n ∈ Mn,n(E). Assume that r(A) = n and that n = 1 or n = 2. Then mr(A) = n.
Proof. The proposition is obvious if n = 1. For n = 2, by assumption {α 1 , α 2 } is linearly independent. Let a 1 = (a 11 , a 12 ) be a representative vector of α 1 , and a 2 = (a 21 , a 22 ) be a representative vector of α 2 . By Theorem 6.8.2 the set {a 1 , a 2 } is linearly independent. Hence a 11 a 22 − a 12 a 21 ≠ 0. Because a 11 , a 12 , a 21 , a 22 are arbitrary, it follows from (7) that det(A) = α 11 α 22 − α 12 α 21 is zeroless. Hence mr(A) = 2.
Example 7.3 shows that Proposition 7.9 no longer holds for n ≥ 3; it also shows that the next proposition is not valid for n ≥ 3.
Hence formula (27) is true for j = 1. Let k ∈ N, 1 < k ≤ n be arbitrary. We need to prove that there is a column a ·k = (a 1k , a ik ) T such that a pk ∈ α pk for p ∈ {1, i} and

det( a 11  a 1k ; a i1  a ik ) = 0,    (29)

where a 11 , a i1 are defined by (28). Again because mr(A) = 1, the determinant det( α 11  α 1k ; α i1  α ik ) is a neutrix. As a result, there exists a representative matrix ( a 11  a 1k ; a i1  a ik ) with a ij ∈ α ij such that

det( a 11  a 1k ; a i1  a ik ) = 0.    (30)

As for case (i), we put ε q accordingly for q ∈ {1, i}. Observe first that ε q ∈ A q for all q ∈ {1, i}. We show that also ε ik ∈ A ik . By (30) one has t ≡ det( a 11  a 1k ; a i1  a ik ). Because ε p ∈ A p ⊆ A for p ∈ {1, i} and |a hk | ≤ |α hk | ≤ 1 + ⊘ for h ∈ {1, i}, it holds that t ∈ A . Also d = a 11 ∈ α 11 . We conclude that ε ik ∈ A ik . Hence a pk ∈ α pk for all p ∈ {1, i} with a ·k = (a 1k , a ik ) T ≡ (a 1k , a ik + ε ik ) T . In addition a ·k satisfies formula (29), for

det( a 11  a 1k ; a i1  a ik + ε ik ) = t + det( a 11  0 ; a i1  ε ik ).

As for case (ii), without loss of generality we assume that |α 11 | is maximal. Put u 1 = (a 11 , a i1 ) T . The set of column vectors u 1 = (a 11 , a i1 ) T , u k = (a 1k , a ik ) T is linearly dependent. As a consequence, there exist a real number s and δ 1 , δ i ∈ A such that

u k = su 1 + δ,    (31)

where δ ≡ (δ 1 , δ i ) T ∈ (A, A) T and s = a 1k /a 11 . Moreover |s| ≤ 1 + ⊘, since |α 11 | is maximal. So sδ ∈ (A, A) T .

Then a qk ∈ α qk for q ∈ {1, i}. By (31) one has u k = su 1 , so {u 1 , u k } is linearly dependent. Hence det( a 11  a 1k ; a i1  a ik ) = 0, which amounts again to (29). In both cases, because k is arbitrary, formula (27) holds for 1 ≤ j ≤ n. We conclude that the set of vectors {a 1 , a p } is linearly dependent. Then {α 1 , α p } is linearly dependent for all p ∈ {2, . . . , m} by Theorem 6.8. So a linearly independent set of row vectors of A cannot have more than one element, hence r(A) = 1. The fact that sr(A) = 1 follows by Theorem 7.7.

Proposition 7.11. Let A = (α ij )m×n ∈ Mm,n(E). Assume that r(A) = r ≤ min{m, n}. If (i) r = 1 or (ii) r = 2 and all A ij are equal to some neutrix A, then mr(A) = r. As a result, sr(A) = r.
(ii) By Theorem 7.5 it holds that mr(A) ≤ r(A) = 2. Suppose that all minors of order 2 are neutricial. Then mr(A) ≤ 1. If mr(A) = 0, then r = 0, a contradiction. If mr(A) = 1, by part (ii) of Proposition 7.10 also r(A) = 1, again a contradiction. Hence there exists a minor of order 2 which is zeroless. This means that mr(A) ≥ 2. Combining, we obtain that mr(A) = 2.

Other approaches to error analysis in matrix calculus
Our approach to error analysis of matrices is characterized by treating, at every entry, an error as a set of numbers around a specific value, resulting in a rather strong algebraic structure for error propagation, to which basic notions of linear algebra can be adapted. In this section we situate this approach with respect to existing methods, in particular classical asymptotic theory, Van der Corput's neutrices of functions, interval calculus, parametrization and probabilistic methods.
First we note that due to the Sorites property [12,13,37] of neutrices, we tend to model imprecisions (say, coming from measuring and rounding off) more than uncertainties, which may have other sources, like imperfect models in the case of simplifications of a too complex reality, or the impossibility to take into account intrinsic stochastic aspects [34]. Also our approach is theoretical, and aims at a description of the behavior of errors. In concrete situations it must be interpreted before it can be implemented in numerical analysis and computer calculations: what is small, what can be neglected?
We share these problems of interpretation with common asymptotics based on the neglect of Oh's and oh's, which have been defined in terms of groups of functions in [3], and with Van der Corput's neutrix theory [5], where also other groups of functions (for instance oscillatory functions) may be neglected. In these settings algebraic operations are well-defined, but they do not lead to structures as strong as a Complete Arithmetical Solid. For example, a set of functions in general does not allow for an order relation. Also, due to the functional dependence there are serious complications when trying to handle multiple errors individually, and it seems that there exist no thorough applications to the propagation of errors in linear algebra or matrix calculus.
Common error analysis models errors more or less informally as small intervals around a value, resulting from a measurement or an estimation [35]. This enables individual treatment of errors and algebraic operations on them, which have essentially the same form as the Minkowski operations of Definition 2.2. The informal nature of error analysis inhibits the development of a strong algebraic calculus and the formulation of the basic notions of linear algebra. On the other hand, the implementation as an interval is obvious, though discussion is possible on the interpretation of "small". Proper interval calculus [1,14,27,30] is part of formal mathematics, and stronger algebraic properties hold for operations. However it is no longer built on the Minkowski operations of Definition 2.2, and due to problems of subdistributivity and intersection simple laws cannot be given in all cases; moreover the algebraic operations do not need to respect order. Interval analysis of matrix operations has been studied [30], though not from the point of view of algebraic properties. Of all the approaches the implementation of interval calculus is perhaps the most straightforward.
Methods attributing to imprecise factors one or more parameters taking values in definite intervals have been proposed for, in particular, linear programming. Multiparameter methods enable individual treatment of errors and have been proposed by, among others, Gass and Saaty [19,20] and Nedoma and Gall [17]. By their functional nature the implementation is straightforward, and their numerical implications are intensively studied in e.g. [6]. Though one of the issues is the study of degeneracy [17], it has been recognized that excessive complications seem to prevent the development of a thorough algebraic theory.
Fuzzy set theory [31,38] treats uncertainties and imprecisions by dealing with sets in the form of representative functions other than characteristic functions. By nature this approach does not address the Sorites property, but permits individual treatment of errors. The method has been applied to matrix calculus. Operations are clearly de ned, and have been studied from the numerical point of view [31]. There does not seem to exist a strong algebraic theory of matrix computations, including the basic notions of linear algebra.
There is a large variety of statistical and stochastic approaches to the analysis of errors [4,23,33], and they are used to study uncertainties and imprecisions of several kinds, also within matrix calculus [7,16,28]. Computer simulations facilitate their implementation, but the establishment of a theory of linear algebra in the setting of error propagation with individual errors again seems to be complicated by the fact that probability distributions are functional, thus behaving less appropriately under algebraic operations.
Summarizing, we argue that the approach by external numbers respects the imprecision of errors, while allowing for a calculus of error propagation of moderate complexity, which yields insights at an intermediate level between qualitative and quantitative analysis. This calculus has stronger algebraic properties than other approaches, which however are mostly easier to implement.