ε-Arithmetics for Real Vectors and Linear Processing of Real Vector-Valued Signals

In this paper, we introduce a new concept, namely ε-arithmetics, for real vectors of any fixed dimension. The basic idea is to use vectors of rational values (called rational vectors) to approximate vectors of real values of the same dimension within an ε range. Rational vectors of a fixed dimension m form a field that is an mth order extension Q(α) of the rational field Q, where α has a minimal polynomial of degree m over Q. Then, the arithmetics, such as addition, subtraction, multiplication, and division, of real vectors can be defined through those of their approximating rational vectors within the ε range. We also define the complex conjugate of a real vector, and then the inner product and convolution of two real vectors and of two real vector sequences (signals) of finite length. With these newly defined concepts for real vectors, linear processing, such as linear filtering, ARMA modeling, and least squares fitting, can be applied to real vector-valued signals, which broadens the existing linear processing of scalar-valued signals.


I. INTRODUCTION
Real numbers form a field R, i.e., they have the arithmetics of addition, subtraction, multiplication, and division, and the multiplication of two real numbers is commutative. This leads to convenience in solving systems of linear equations with real coefficients, which has a fundamental importance in many science and engineering applications. A natural question is what happens to vectors of real values, i.e., real vectors. When the dimension of real vectors is 2, they also form a field, namely the complex number field, by simply introducing the imaginary unit i = √−1 as follows. For a two dimensional real vector [x, y], let z = x + iy. Then, all such z form the complex number field C, and all the arithmetics of real vectors [x, y] correspond to those of complex numbers z.

X.-G. Xia is with the Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA (e-mail: xxia@ece.udel.edu).
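As a small illustration of this correspondence, the arithmetics of 2 dimensional real vectors can be carried out entirely through complex numbers; the function names below are our own, not from the paper.

```python
# Dimension-2 real vectors [x, y] identified with z = x + iy;
# their multiplication and division are those of complex numbers.
def vec2_mul(u, v):
    z = complex(*u) * complex(*v)
    return [z.real, z.imag]

def vec2_div(u, v):
    z = complex(*u) / complex(*v)
    return [z.real, z.imag]

print(vec2_mul([1.0, 2.0], [3.0, 4.0]))  # [-5.0, 10.0]
```

For dimension 3 and higher no such field structure exists, which is exactly the gap the paper addresses.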
However, real vectors of dimension 3 or higher do not form a field, although all 4 dimensional real vectors form the domain of quaternionic numbers and all 8 dimensional real vectors form the domain of octonionic numbers [11]. There exist arithmetics for 4 dimensional real vectors and also for 8 dimensional real vectors, but their multiplications are not commutative. This non-commutativity may cause a lot of inconvenience in the arithmetics of real vectors.
Although higher dimensional real vectors do not form a field, if the real field is replaced by a smaller subfield or a finite field, the corresponding vectors may form a field. For example, assume all the components of vectors of a fixed dimension m take values in a finite field E. Let α be a root of a primitive polynomial p(x) of degree m over E [1], [2]. Then, all the m dimensional vectors with components taking values in E form another finite field F that is an extension of the field E of degree [F : E] = m. This plays the fundamental role in Reed-Solomon (RS) codes and BCH codes in error correction coding [3]. The rationale is explained in more detail in Section V.
Another example is rational vectors, i.e., vectors with all components taking values in the rational field Q. For m dimensional rational vectors, let α have a minimal polynomial of degree m over Q. Then, all the m dimensional rational vectors form a field Q(α) that is an algebraic number field and an extension of Q of degree m. Therefore, all the m dimensional rational vectors have the four arithmetic operations as real numbers do, and the multiplication of any two m dimensional rational vectors is commutative.
Since real values can be approximated by rational values to an arbitrary precision, in this paper we propose to use rational vectors to approximate real vectors in their arithmetics within an ε range, called ε-arithmetics. We also define the complex conjugate of a real vector and then define the inner product and linear convolution of two real vectors and of two real vector sequences (called vector-valued signals). Thus, the proposed ε-arithmetics provide similar linear processing for general real vector-valued signals as that for complex-valued signals. This will broaden the linear processing of conventional real or complex-valued signals, such as linear filtering, ARMA modeling, and least squares fitting to a set of data.
Note that algebraic number fields have been applied in signal processing, mostly in fast algorithms [4], [5], in communications, such as coding [6] and space-time coding [7]-[9], and in cryptography, such as lattice based cryptography [10]. However, all these existing applications are different from what is proposed in this paper for the arithmetics of addition, subtraction, multiplication, and division (in what follows, arithmetics always mean these four operations) of real vectors, which are used to linearly process real vector-valued signals of any fixed vector size with real vector-valued coefficients of the same vector size, similar to the conventional real or complex scalar-valued signals.
This paper is organized as follows. In Section II, we define ε-arithmetics for real vectors. In Section III, we define the complex conjugate of a real vector and inner products for two real vectors and two real vector sequences. In Section IV, we define convolution and linear filtering for finite length real vector-valued signals. In Section V, we present some discussions. In Section VI, we conclude this paper.

II. ε-ARITHMETICS
We first briefly introduce an algebraic number field, an extension of the rational field Q. Let α be an algebraic number with its minimal polynomial p(x) of degree m over the rational field Q, i.e., p(x) is the polynomial of lowest degree with coefficients in Q such that α is a root of the polynomial. Then, Q(α) is an algebraic number field and an extension of Q of degree m. Thus, all elements in Q(α) have the four conventional arithmetic operations, and the multiplication is commutative. Let Q[x] denote the ring of all polynomials over Q, i.e., polynomials with rational coefficients. Then, the field Q(α) is isomorphic to Q[x]/(p(x)), the field of all polynomials under the modulo p(x) operation, i.e.,

Q(α) ≅ Q[x]/(p(x)).    (1)

We next introduce the arithmetics for rational vectors. For any m dimensional rational vector q = [q_1, q_2, ..., q_m] ∈ Q^m, map it to the element

q(α) = q_1 + q_2 α + ... + q_m α^{m−1} ∈ Q(α),

and it is clear that this mapping is one-to-one and onto. We define the arithmetics of these m dimensional rational vectors as those of their mapped elements in the algebraic number field Q(α):

q_1 • q_2 := q_1(α) • q_2(α),    (2)

where q_j = [q_{j1}, q_{j2}, ..., q_{jm}] ∈ Q^m for j = 1, 2, and • is an arithmetic operation, i.e., one of addition, subtraction, multiplication, and division. Since Q(α) is a field, the right hand side of (2) is also an element in Q(α), and thus has an expression q_1 + q_2 α + ... + q_m α^{m−1} for some rational coefficients q_k. With this identification, the addition, subtraction, and multiplication of two rational vectors of dimension m are easy to implement. To do division, we only need to know how to implement the inverse of a non-zero rational vector q, i.e., of its polynomial q(α). Since p(x) is a minimal polynomial over Q, q(x) and p(x) are co-prime over Q. From Bézout's identity and using the Euclidean algorithm, one can find two polynomials u(x) and v(x) of rational coefficients such that

u(x)q(x) + v(x)p(x) = 1,

which implies u(α)q(α) = 1 mod p(α), and thus we have (q(α))^{−1} = u(α) mod p(α). A detailed Matlab code to compute the inverse of a real vector can be found in the Appendix. With the inverse calculation of a non-zero m dimensional rational vector, the divisions of non-zero m dimensional rational vectors follow immediately.
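The construction above can be sketched in code. The following is a minimal illustration, not the paper's Matlab code from the Appendix, of multiplication and inversion of rational vectors under a minimal polynomial, using exact rational arithmetic and the extended Euclidean algorithm; all function names are our own, and coefficient vectors are in increasing order of powers of α.

```python
from fractions import Fraction

def trim(a):
    """Drop zero leading (highest-degree) coefficients."""
    a = list(a)
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def pmul(a, b):
    """Polynomial product (increasing-order coefficients)."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += Fraction(x) * Fraction(y)
    return trim(out)

def psub(a, b):
    n = max(len(a), len(b))
    a = list(a) + [Fraction(0)] * (n - len(a))
    b = list(b) + [Fraction(0)] * (n - len(b))
    return trim([Fraction(x) - Fraction(y) for x, y in zip(a, b)])

def pdivmod(a, b):
    """Polynomial division with remainder over Q."""
    a, b = trim([Fraction(x) for x in a]), trim([Fraction(x) for x in b])
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and any(a):
        c = a[-1] / b[-1]
        d = len(a) - len(b)
        q[d] = c
        a = psub(a, [Fraction(0)] * d + [c * x for x in b])
    return trim(q), a

def vec_mul(q1, q2, p):
    """Product of two rational vectors under minimal polynomial p, as in (2)."""
    m = len(p) - 1
    r = pdivmod(pmul(q1, q2), p)[1]
    return r + [Fraction(0)] * (m - len(r))

def vec_inv(q, p):
    """Inverse of a non-zero rational vector via Bezout's identity."""
    m = len(p) - 1
    r0, r1 = trim([Fraction(x) for x in p]), trim([Fraction(x) for x in q])
    t0, t1 = [Fraction(0)], [Fraction(1)]
    while any(r1):
        qq, rem = pdivmod(r0, r1)
        r0, r1 = r1, rem
        t0, t1 = t1, psub(t0, pmul(qq, t1))
    # r0 is now a non-zero constant, since p is irreducible over Q
    u = pdivmod([x / r0[0] for x in t0], p)[1]
    return u + [Fraction(0)] * (m - len(u))

# alpha = sqrt(2) + sqrt(3), with p(x) = x^4 - 10x^2 + 1
p = [1, 0, -10, 0, 1]
q = [1, 2, 0, 0]            # the rational vector 1 + 2*alpha
inv = vec_inv(q, p)
print(vec_mul(q, inv, p))   # the vector 1, i.e., [1, 0, 0, 0] as Fractions
```

Because all coefficients are `Fraction` objects, the computation is exact, matching the field arithmetic of Q(α) with no rounding.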
Note that, since the algebraic number field Q(α) is a subfield of the complex number field C, the arithmetic, •, in (2) on the algebraic number field Q(α) is also the same as that on the complex number field, i.e., the conventional +, −, ×, ÷ for complex numbers.
From the above definition of the arithmetics for rational vectors, one can see that different algebraic numbers α define different arithmetics for the same rational vectors. Let us see two examples. Let α_1 = √2 + √3, whose minimal polynomial is p_1(x) = x^4 − 10x^2 + 1, and let α_2 = exp(2πi/5), whose minimal polynomial is p_2(x) = x^4 + x^3 + x^2 + x + 1. In fact, α_2 is a cyclotomic number and its generated algebraic number field is a cyclotomic field [2].
For simplicity, consider two rational vectors of dimension 4 and their multiplications following (2) under the two algebraic numbers α_1 and α_2. For the multiplication of the two rational vectors with α_1, following (2) and α_1^4 = 10α_1^2 − 1 due to p_1(α_1) = 0, we obtain one product vector. For the multiplication of the two rational vectors with α_2, following (2) and α_2^4 = −(α_2^3 + α_2^2 + α_2 + 1) due to p_2(α_2) = 0, we obtain another. One can see that the two multiplication results of the same two rational vectors are much different.
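To make the contrast concrete, here is a small sketch with example vectors of our own choosing (not ones from the paper): the same two rational vectors, α and α^3, are multiplied under p_1(x) = x^4 − 10x^2 + 1 and under p_2(x) = x^4 + x^3 + x^2 + x + 1, with visibly different results.

```python
from fractions import Fraction

def vec_mul(q1, q2, p):
    """Multiply two rational vectors under the monic minimal polynomial p
    (increasing-order coefficients); returns the reduced coefficient vector."""
    m = len(p) - 1
    a = [Fraction(0)] * (len(q1) + len(q2) - 1)
    for i, x in enumerate(q1):
        for j, y in enumerate(q2):
            a[i + j] += Fraction(x) * Fraction(y)
    # reduce with x^m = -(p_0 + p_1 x + ... + p_{m-1} x^{m-1})
    for k in range(len(a) - 1, m - 1, -1):
        c = a[k]
        a[k] = Fraction(0)
        for i in range(m):
            a[k - m + i] -= c * Fraction(p[i])
    return a[:m]

p1 = [1, 0, -10, 0, 1]   # x^4 - 10x^2 + 1, for alpha_1 = sqrt(2) + sqrt(3)
p2 = [1, 1, 1, 1, 1]     # x^4 + x^3 + x^2 + x + 1, for alpha_2 = exp(2*pi*i/5)

# example vectors: alpha and alpha^3, whose product is alpha^4
print([int(x) for x in vec_mul([0, 1, 0, 0], [0, 0, 0, 1], p1)])  # [-1, 0, 10, 0]
print([int(x) for x in vec_mul([0, 1, 0, 0], [0, 0, 0, 1], p2)])  # [-1, -1, -1, -1]
```

The first result is α_1^4 = 10α_1^2 − 1 and the second is α_2^4 = −(1 + α_2 + α_2^2 + α_2^3), illustrating how the choice of α changes the product of the same two vectors.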

We now introduce ε-arithmetics for real vectors for a fixed algebraic number α with its minimal polynomial of degree m over Q. Consider the m dimensional real vector space R^m for m ≥ 3. Let ε > 0 be an arbitrarily small positive number. Let r_1, r_2 ∈ R^m be two arbitrary m dimensional real vectors. Their ε-arithmetics are defined as follows. Find two m dimensional rational vectors q_1, q_2 ∈ Q^m in the ε ranges of r_1, r_2:

||r_j − q_j|| < ε, j = 1, 2,    (6)

where ||·|| is a norm for m dimensional vectors, such as the l_2 or l_∞ norm, and if any r_j is rational, then q_j = r_j, for j = 1 or/and 2. The ε-arithmetic operation for two m dimensional vectors r_1 and r_2 is defined as

r_1 • r_2 := q_1 • q_2,    (7)

where • is an arithmetic operation, such as +, −, ×, ÷, and the right hand side is computed via (2). From the above definition of ε-arithmetics, clearly the ε-arithmetic result of two real vectors r_j, j = 1, 2, is not unique, even for a fixed algebraic number α in (2). This is because a rational vector q_j in (6) in the ε range of the real vector r_j is not unique. In fact, the above ε-arithmetics in (7) can be defined as set-valued mappings, where r_1 • r_2 is equal to the set of all q_1 • q_2 in (7) over the non-empty sets of q_1 and q_2 in the ε ranges of r_1 and r_2 in (6), respectively. Although this is the case, since rational numbers are dense in the real field R, ε can be made arbitrarily small, such as at the level of computer numerical error. In this case, within the numerical precision range, the error (or difference) in the ε-arithmetics from different rational vector approximations in the ε ranges of two real vectors is negligible, or just the computer numerical error. Note that this may be done similarly as in [5] by using arbitrarily large integers to obtain rational vectors with a desired precision.
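The approximation step in (6) can be sketched as follows, using Python's `Fraction.limit_denominator` to find a nearby rational vector; the function name and the choice of denominator bound are ours, not the paper's.

```python
from fractions import Fraction

def rationalize(r, eps):
    """Return a rational vector q with |r_k - q_k| < eps for each component,
    i.e., within the eps range of r in the l_inf norm."""
    d = int(1 / eps) + 1
    return [Fraction(x).limit_denominator(d) for x in r]

r = [2 ** 0.5, 3.5, -1.25]
q = rationalize(r, 1e-9)
print(q[1])   # 7/2: components that are already rational are kept exactly
```

The resulting rational vectors q_1, q_2 are then combined with the exact field arithmetic of (2), as in (7).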
We know that all real values stored in computers must be rational. Therefore, in terms of practical computations on computers, the ε-arithmetics for real vectors can be made the same as the arithmetics for rational vectors. Thus, for notational convenience and without confusion in understanding, for two real vectors r_1 and r_2, their arithmetic r_1 • r_2 is also written as q_1 • q_2 as in (7), and the polynomial notation r(α) = r_1 + r_2 α + ... + r_m α^{m−1} is used, abusively, for a real vector r = [r_1, r_2, ..., r_m]. Note that because the ε-arithmetics for real vectors using approximated rational vectors are defined in (2) with only a fixed, finite number of operations, they are robust to the approximation errors. A real vector that is in an ε range of 0 is treated as 0; otherwise, its division is also robust to an approximation error. This means that when ε is small enough, the differences between ε-arithmetic operations using different rational vector approximations of real vectors are negligible in practical calculations of ε-arithmetics. A detailed Matlab code to compute the product of two real vectors can be found in the Appendix.
With the above ε-arithmetics for real vectors, one is able to systematically solve systems of linear equations over real vectors:

Σ_j a_{lj} × x_j = b_l, 1 ≤ l ≤ L,

where a_{lj} and b_l are known real vectors of dimension m in R^m and x_j are unknown real vectors of dimension m to solve for. This may have applications in, for example, the least squares fitting to a set of data, which may be broader than the conventional least squares fitting.
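One way to sketch such a solve, under our own design choice of using the regular representation: multiplication by a known vector a is an m × m rational matrix in the basis 1, α, ..., α^{m−1}, so a single vector equation a × x = b becomes an ordinary m × m rational linear system (and a system with several vector unknowns becomes a block matrix in the same way). All names here are ours.

```python
from fractions import Fraction

P = [1, 0, -10, 0, 1]    # minimal polynomial x^4 - 10x^2 + 1, increasing order
M = len(P) - 1

def reduce_mod(a):
    """Reduce a polynomial in alpha modulo P (increasing-order coefficients)."""
    a = [Fraction(x) for x in a] + [Fraction(0)] * max(0, M - len(a))
    for k in range(len(a) - 1, M - 1, -1):
        c = a[k]
        a[k] = Fraction(0)
        for i in range(M):
            a[k - M + i] -= c * Fraction(P[i])
    return a[:M]

def mul_matrix(a):
    """M x M rational matrix of multiplication by the vector a."""
    cols = []
    for j in range(M):                                   # column j: a * alpha^j
        cols.append(reduce_mod([Fraction(0)] * j + [Fraction(x) for x in a]))
    return [[cols[j][i] for j in range(M)] for i in range(M)]

def solve(A, b):
    """Gauss-Jordan elimination over the rationals."""
    n = len(A)
    T = [list(row) + [Fraction(v)] for row, v in zip(A, b)]
    for c in range(n):
        piv = next(r for r in range(c, n) if T[r][c] != 0)
        T[c], T[piv] = T[piv], T[c]
        T[c] = [x / T[c][c] for x in T[c]]
        for r in range(n):
            if r != c and T[r][c] != 0:
                f = T[r][c]
                T[r] = [x - f * y for x, y in zip(T[r], T[c])]
    return [T[r][n] for r in range(n)]

a = [1, 2, 0, 0]             # known vector 1 + 2*alpha
b = [0, 0, 1, 0]             # known vector alpha^2
x = solve(mul_matrix(a), b)  # the unknown vector with a (x) x = b, exactly over Q(alpha)
```

Since a ≠ 0 and Q(α) is a field, the matrix of multiplication by a is always invertible, so the elimination never fails.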
As a remark, for convenience, the above real vector r ∈ R m always corresponds to the algebraic number α with its minimal polynomial p(x) ∈ Q[x] of degree m, unless otherwise specified.

III. COMPLEX CONJUGATE AND INNER PRODUCT OF REAL VECTORS
In this section, we define the complex conjugate and the inner product for real vectors when the algebraic number α is a real number or a cyclotomic number exp(2πi/p) for a prime number p with p > 2.
When the algebraic number α is real, the complex conjugate of a real vector r is defined as itself, i.e., r* = r. When the algebraic number α is the cyclotomic number exp(2πi/p) for a prime number p with p > 2, the complex conjugate of α is

α* = α^{−1} = α^{p−1},

since, in this case, the minimal polynomial is p(x) = x^{p−1} + x^{p−2} + ... + x + 1. Then, for a real vector r = [r_1, r_2, ..., r_m], its complex conjugate is defined through

r*(α) = r_1 + r_2 α* + ... + r_m (α*)^{m−1} mod p(α),

where m = p − 1. Clearly (r*)* = r. Let us see a simple example when p = 3. Then, m = 2, α = exp(2πi/3) with α* = α^2 = −1 − α, and therefore r* = [r_1 − r_2, −r_2] for r = [r_1, r_2]. This is just for an illustration, since for m = 2 dimensional real vectors, one may simply use the complex field C for the arithmetics.
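A sketch of this conjugation for p = 5 (so m = 4 and α = exp(2πi/5)); the function name is ours. Conjugation sends α^k to α^{(p−k) mod p}, and the exponent p − 1 = m is then reduced with the minimal polynomial.

```python
from fractions import Fraction

P = 5                        # the prime p; alpha = exp(2*pi*i/p), m = p - 1
MINPOLY = [1, 1, 1, 1, 1]    # x^4 + x^3 + x^2 + x + 1, increasing order

def conj(r):
    """Complex conjugate of the real vector r = [r_1, ..., r_m] for p = 5."""
    m = P - 1
    out = [Fraction(0)] * (m + 1)
    for k in range(m):
        out[(P - k) % P] += Fraction(r[k])   # conj(alpha^k) = alpha^(p - k)
    # reduce alpha^m using alpha^m = -(1 + alpha + ... + alpha^(m-1))
    c = out[m]
    out[m] = Fraction(0)
    for i in range(m):
        out[i] -= c * Fraction(MINPOLY[i])
    return out[:m]

r = [1, 2, 3, 4]
print([int(x) for x in conj(r)])        # [-1, -2, 2, 1]
print([int(x) for x in conj(conj(r))])  # [1, 2, 3, 4], since (r*)* = r
```

In closed form this gives r* = [r_1 − r_2, −r_2, r_4 − r_2, r_3 − r_2], the p = 5 analogue of the p = 3 formula above.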
For two real vectors r_1 and r_2, their inner product is defined as

⟨r_1, r_2⟩ = r_1 × r_2*,    (13)

where r_2* is the complex conjugate of r_2. Clearly, we have ⟨r_1, r_2⟩ = (⟨r_2, r_1⟩)*. For two real vector sequences r̃_j = {r_{j,l}}, 1 ≤ l ≤ L, of dimension m and length L for j = 1, 2, their inner product is defined as

⟨r̃_1, r̃_2⟩ = Σ_{l=1}^{L} r_{1,l} × r*_{2,l},    (14)

where r*_{2,l} is the complex conjugate of r_{2,l}. Two real vectors r_i, i = 1, 2, are called orthogonal if their inner product defined in (13) is 0. Two real vector sequences r̃_i, i = 1, 2, of length L are called orthogonal if their inner product defined in (14) is 0.
The inner product of two real vectors defined in (13) can be expanded as

⟨r_1, r_2⟩ = r_1(α) × r_2(α*) mod p(α).    (15)

Let us see an example of 4 dimensional real vectors in R^4 with α = √2 + √3 and its minimal polynomial p(x) = x^4 − 10x^2 + 1. In this case, α is real, so r_2* = r_2, and by some algebra the inner product of two real vectors can be expanded explicitly into an expression, (16), in terms of the products r_{1k} r_{2n} of their components. From (15), the right hand side is the product of two polynomials in α, and the coefficients of the product of two polynomials would, in general, come from the convolution of the two coefficient vectors, i.e., the inner product of the real vectors would be their convolution. From (16), one can see that this is clearly not true here, which is due to the modulo p(x) = x^4 − 10x^2 + 1 operation, i.e., α^4 = 10α^2 − 1, in the product of the two polynomials.
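The point can be checked numerically. In this sketch (with example vectors of our own), α = √2 + √3 is real, so r_2* = r_2, and the inner product is the coefficient convolution followed by reduction modulo p(x) = x^4 − 10x^2 + 1; the reduction is exactly what makes it differ from a plain convolution.

```python
from fractions import Fraction

P = [1, 0, -10, 0, 1]   # x^4 - 10x^2 + 1, increasing order
M = 4

def conv(a, b):
    """Plain coefficient convolution (polynomial product, no reduction)."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += Fraction(x) * Fraction(y)
    return out

def inner(r1, r2):
    """<r1, r2> of (13) for real alpha: r1(alpha) * r2(alpha) mod p(alpha)."""
    a = conv(r1, r2)
    for k in range(len(a) - 1, M - 1, -1):
        c = a[k]
        a[k] = Fraction(0)
        for i in range(M):
            a[k - M + i] -= c * Fraction(P[i])
    return a[:M]

r1, r2 = [1, 1, 0, 0], [0, 0, 1, 1]
print([int(x) for x in conv(r1, r2)])   # [0, 0, 1, 2, 1, 0, 0]
print([int(x) for x in inner(r1, r2)])  # [-1, 0, 11, 2]
```

The α^4 term of the plain convolution is folded back as 10α^2 − 1, which is why the two outputs disagree.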
If the product of the two polynomials in the right hand side of (15) were taken modulo p(x) = x^4 − 1, i.e., α^4 = 1, then the inner product ⟨r_1, r_2⟩ would be the circular convolution of the two vectors. However, x^4 − 1 is reducible over Q and therefore cannot be a minimal polynomial, and thus the inner product of two real vectors cannot be a circular convolution in general.
As another comparison, consider the algebraic number α = α_2 in Section II, i.e., α = exp(2πi/5). Then, α^5 = 1 and α^4 = −(α^3 + α^2 + α + 1). In this case, α* = α^4, and by some algebra the inner product can again be written out explicitly. For two general m dimensional real vectors r_1 and r_2 with a general algebraic number α whose minimal polynomial is p(x) = x^m + a_m x^{m−1} + ... + a_2 x + a_1, where a_i ∈ Q, 1 ≤ i ≤ m, their inner product can be similarly defined as in (15), where, after the polynomial expansions, each component of the resulting vector is a linear function of the products r_{1k} r_{2n} of the components of r_1 and r_2. For the inner product of two real vectors, from (15), one can see that the right hand side is a product of two complex numbers. Thus, ⟨r_1, r_2⟩ = 0 if and only if either r_1 = 0 or r_2 = 0.
Then, ⟨r, r⟩ = 0 if and only if r = 0, and two real vectors are orthogonal if and only if one of them is 0. It is also not hard to see that for a real vector sequence r̃, ⟨r̃, r̃⟩ = 0 if and only if r̃ = 0, i.e., it is the 0 sequence.
For two rational vector sequences q̃_i, i = 1, 2, of length L, from (14), (13), and (15), they are orthogonal if and only if their corresponding sequences of complex numbers are orthogonal. This implies that there are at most L many orthogonal rational vector sequences of length L. Furthermore, from the Gram-Schmidt orthogonalization procedure, it is not hard to see that there exist L many orthogonal real vector sequences of length L.

IV. CONVOLUTION AND LINEAR FILTERING OF REAL VECTOR-VALUED SIGNALS
With the above inner product for real vectors, it is easy to define a convolution of two real vector sequences (also called real vector-valued signals). Let r_j(n), j = 1, 2, be two real vector-valued signals of finite length, i.e., for each integer n in a finite range, r_j(n) is a real vector defined as before, and r_j(n) = 0 for other n, for j = 1, 2. Their convolution is defined as

(r_1 * r_2)(n) = Σ_k r_1(k) × r_2(n − k).

It is clear that when the vector size is 2 and the algebraic number α = i, the above convolution coincides with the convolution of two complex-valued signals. For a general vector size, since the ε-arithmetic operations of real vectors, i.e., of their approximating rational vectors, are all commutative, all the properties of convolutions for complex-valued signals hold for general real vector-valued signals. Thus, if one real vector-valued signal is treated as a filter impulse response, say h(n) = r_1(n), and the other as an input signal, s(n) = r_2(n), the above convolution is the linear filtering of a real vector-valued input signal s(n) by a real vector-valued system h(n).
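A sketch of this convolution, with our own function names and again with α = √2 + √3: each output sample is a sum of vector products under the field arithmetic, exactly as for complex-valued signals.

```python
from fractions import Fraction

P = [1, 0, -10, 0, 1]   # x^4 - 10x^2 + 1, increasing order
M = 4

def vec_mul(q1, q2):
    """Vector product under the minimal polynomial P."""
    out = [Fraction(0)] * (2 * M - 1)
    for i, x in enumerate(q1):
        for j, y in enumerate(q2):
            out[i + j] += Fraction(x) * Fraction(y)
    for k in range(2 * M - 2, M - 1, -1):
        c = out[k]
        out[k] = Fraction(0)
        for i in range(M):
            out[k - M + i] -= c * Fraction(P[i])
    return out[:M]

def vec_add(a, b):
    return [Fraction(x) + Fraction(y) for x, y in zip(a, b)]

def vconv(s, h):
    """(s * h)(n) = sum_k s(k) x h(n - k) over the vector arithmetics."""
    out = [[Fraction(0)] * M for _ in range(len(s) + len(h) - 1)]
    for n1, a in enumerate(s):
        for n2, b in enumerate(h):
            out[n1 + n2] = vec_add(out[n1 + n2], vec_mul(a, b))
    return out

s = [[1, 0, 0, 0], [0, 1, 0, 0]]   # input signal: 1, then alpha
h = [[0, 1, 0, 0]]                 # filter impulse response: multiply by alpha
print([[int(x) for x in v] for v in vconv(s, h)])  # [[0, 1, 0, 0], [0, 0, 1, 0]]
```

Here the length-1 filter simply multiplies each input sample by α, so 1 becomes α and α becomes α^2.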
Note that the above definition of convolution only applies to finite length real vector-valued signals. This is because the ε-arithmetics for real vectors use rational approximations within an ε range. When there are infinitely many ε-arithmetic operations in a summation, the approximation error may blow up, no matter how small ε is. Fortunately, in practice all signals in computations have finite length, and thus the above convolution for finite length real vector-valued signals is sufficient.
V. SOME DISCUSSIONS

After the earlier definitions of arithmetics, inner product, convolution, and linear filtering for real vectors and vector-valued signals, one might ask whether there are any applications. To explain some of their potential applications, we first explain how the arithmetics of vectors of finitely many elements (in finite fields) are used in discovering better error correction codes.
In error correction codes, it is well-known that in order to have computationally efficient encoding and decoding, one usually uses linear codes [3]. Before vector arithmetics (or finite fields) were used, linear block error correction codes were binary linear block codes whose generator matrices are binary, i.e., the matrix entries are either 0 or 1. This may strongly limit the possibilities and choices of good generator matrices. In order to have more choices for good generator matrices, it is critical to expand the choices of the binary entries in a generator matrix. One common way to do so is to generalize the binary scalar values 0 and 1 to vectors of binary values, i.e., binary vectors. Then, the question is how to do the arithmetics for binary vectors. In error correction coding, to correct errors it is important to be able to solve systems of linear equations over, in this case, binary vectors. To do so, it is important to have all the arithmetics for binary vectors. As mentioned in the Introduction, the field extension over the binary field {0, 1} provides such a property, i.e., it provides all the arithmetics for vectors of finitely many elements. This is the foundation of how RS codes and BCH codes are constructed [3]. Without finite fields (or arithmetics of vectors of finitely many elements), the RS codes and BCH codes that have been widely used in our daily life electronics would not exist. Let us see an example of the above rationale. Consider a binary linear block code with a generator matrix of size 3 × 2, i.e., 2 input binary symbols produce 3 output binary symbols. In this case, there are in total 2^6 = 64 possible generator matrices to choose from. Now we replace every element in a 3 × 2 binary generator matrix by a binary vector of size m = 4, i.e., an element in the Galois field GF(2^4). As an example, consider a binary 3 × 2 generator matrix.
We now replace 0 by [0, 0, 0, 0] and 1 by [1, 0, 0, 0] in this binary matrix and obtain a 3 × 2 matrix over GF(2^4). In this case, the encoding is to multiply this 3 × 2 matrix with a 2 × 1 information symbol vector of two binary vectors of size 4 from the right. Note that a 2 × 1 information symbol vector of two binary vectors of size 4 is the same as an 8 × 1 binary vector as a whole. Thus, if the above encoding over GF(2^4) is to be compared with a linear binary encoding, one might ask whether it corresponds to a valid binary linear block code. If it did, the corresponding binary generator matrix would be obtained by lining up the binary components of the binary vectors of size 4 in each row of the matrix over GF(2^4) into a new row. One can then clearly see that this binary matrix is 3 × 8 and not even invertible from the left, i.e., it cannot be a valid generator matrix for a decodable binary linear block code. In other words, the linear encoding over the finite field GF(2^4) does not correspond to a binary linear block code. There are many more valid linear block codes over a larger size finite field than over the binary field.
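The binary-vector arithmetic underlying this example can be sketched as follows, assuming the common primitive polynomial x^4 + x + 1 for GF(2^4) (the paper does not fix a particular one); the function name is ours.

```python
def gf16_mul(a, b):
    """Multiply two binary 4-vectors [c0, c1, c2, c3] (increasing powers of x)
    as elements of GF(2^4) with primitive polynomial x^4 + x + 1."""
    # carry-less polynomial product over GF(2)
    conv = [0] * 7
    for i in range(4):
        for j in range(4):
            conv[i + j] ^= a[i] & b[j]
    # reduce high terms with x^4 = x + 1, so x^k = x^(k-4) + x^(k-3)
    for k in range(6, 3, -1):
        if conv[k]:
            conv[k] = 0
            conv[k - 4] ^= 1
            conv[k - 3] ^= 1
    return conv[:4]

# x * x^3 = x^4 = x + 1 in GF(2^4)
print(gf16_mul([0, 1, 0, 0], [0, 0, 0, 1]))  # [1, 1, 0, 0]
print(gf16_mul([1, 0, 0, 0], [1, 0, 1, 1]))  # [1, 0, 1, 1]: [1,0,0,0] is the identity
```

This is the GF(2) analogue of the rational-vector arithmetic of Section II: a convolution of coefficient vectors followed by reduction with a degree-4 polynomial.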
The same idea as for the binary vectors can be applied to real vectors. The vectorized linear processing broadens the scalar linear processing and provides many more degrees of freedom and convenience, while the linear arithmetics of real vectors provide the convenience of finding solutions in practical applications. In addition to the linear filtering proposed in Section IV, another application is the least squares fitting to a set of data. With the ε-arithmetics defined for real vectors in this paper, compared to the conventional least squares scalar fitting, the least squares vector fitting is more flexible and, therefore, better fitting performance can be expected.
A similar application is linear autoregressive and moving average (ARMA) model fitting in time series. In summary, the ε-arithmetics introduced in this paper open the door to enlarging the pool of all the existing linear processing techniques for scalar-valued signals.
Another potential application is in image processing, such as image compression. Usually, image pixels are correlated in two dimensions, while conventional vector based signal processing techniques, such as the vector transforms (VT) first proposed and applied in compression by Li in [12], [13] and investigated later in [14]-[20], the vector Karhunen-Loève transform (KLT) [19], and vector-valued wavelets [21], may only decorrelate signals in one dimension, and they are usually applied row-wise and column-wise separately. An extended VT, called the average optimal VT, was proposed in [20] for a two dimensional image by first averaging in one dimension and then applying the VT in the other dimension to the blocks of the image.
With the arithmetics for real vectors proposed in this paper, we may treat a block of an image as a real vector-valued signal, i.e., consider its first dimension as taking vector values r_i and the second dimension as the sequence [r_1, r_2, ..., r_L] of the vector values r_i for i = 1, 2, ..., L. Then, we may apply a vector-valued signal processing technique, such as the VT, the vector KLT, or vector-valued wavelets, to this sequence. The proposed processing may make it possible to decorrelate a two dimensional image along both dimensions simultaneously. We believe that it will provide a significantly new technique for image processing. Note that what is proposed in this paper is different from the matrix KLT and matrix-valued wavelets for processing matrix-valued signals directly in [22].
As a remark, one issue with the ε-arithmetics for real vectors defined in this paper is the choice of the algebraic number α that determines the ε-arithmetics. As mentioned earlier, different algebraic numbers α provide different ε-arithmetics for real vectors, and the difference may be large. Then, the question is which algebraic number α is needed or better. The answer might depend on the application scenario.

VI. CONCLUDING REMARKS
It is known that real vectors of dimension higher than 2 do not form a field. In other words, one cannot do arithmetics for real vectors of dimension higher than 2 similar to those for real numbers. This may limit the capability of solving systems of linear equations over real vectors of a finite dimension. In this paper, we have proposed ε-arithmetics for real vectors by using approximations by rational vectors that form algebraic number fields. We have also defined the complex conjugate of a real vector, and the inner products and convolutions of two real vectors and of two real vector sequences (also called vector-valued signals). This may lead to systematic linear processing for real vector-valued signals with real vector-valued coefficients, such as linear filtering, least squares fitting, and ARMA modeling for real vector-valued signals. The ε-arithmetics proposed in this paper for real vectors provide the same convenience as that for complex numbers in linear processing, while they broaden the conventional linear processing of scalar-valued signals. It is believed that the study in this paper will open a door for the signal processing community.

APPENDIX: MATLAB CODES TO COMPUTE REAL VECTOR MULTIPLICATION AND INVERSE
Below are Matlab function codes to compute the product of real vectors p_1 and p_2, and the inverse p^{−1} of a real vector p, for a given minimal polynomial q(x). The vector of the coefficients of q(x) is q. In all real vectors, i.e., all coefficient vectors of polynomials, in the following Matlab codes, the components are in decreasing order: q(1) = q_m, q(2) = q_{m−1}, ..., q(m) = q_1, if q(x) = Σ_{k=0}^{m−1} q_{k+1} x^k. This is opposite to the descriptions in this paper, where they are in increasing order. In other words, the orders of vector components in the paper and in the following Matlab codes are the reverse of each other.

Real Vector Multiplication
p1 and p2 are two real vectors of dimension m to multiply. p1, p2, and q are row coefficient vectors of the polynomials p1(x), p2(x), and q(x). q(x) is a minimal polynomial of degree m.
In a polynomial, the order of x is from high to low. For example, if p = [2, 1, 3], then p(x) = 2x^2 + x + 3.
The output mvec is the product of p1 and p2.

Inverse of a Real Vector

p is a real vector of dimension m whose inverse is to be computed.
p and q are row coefficient vectors of the polynomials p(x) and q(x), respectively. q(x) is a minimal polynomial of degree m.
In a polynomial, the order of x is from high to low. For example, if p = [2, 1, 3], then p(x) = 2x^2 + x + 3.
The output ivec is the inverse of real vector p.