Root Projection of One-Sided Time Series

Until recently it has been impossible to accurately determine the roots of polynomials of high degree, even for polynomials derived from the Z transform of time series where the dynamic range of the coefficients is generally less than 100 dB. In a companion paper, two new programs for solving such polynomials were discussed and applied to signature analysis of one-sided time series [1]. We present here another technique, that of root projection (RP), together with a Gram-Schmidt method for implementing it on vectors of large dimension. This technique uses the roots of the Z transform of a one-sided time series to construct a weighted least squares modification of the time series whose Z transform has an appropriately modified root distribution. Such a modification can be employed in a manner which is very useful for filtering and deconvolution applications [2]. Examples given here include the use of boundary root projection for front end noise reduction and a generalization of Prony's method.


Introduction
Let y be a complex number. We define y to be the "geometric sequence" vector with components y_n = y^n, n = 0, ..., N. Then, given a time series of length N+1 represented by the vector a:N, its Z transform is a(y) = Σ_{n=0}^{N} a_n y^n, and the condition that y be a root of a(y) is:

y^T a = 0.   (1)

Given a (possibly complex) time series a:N and any set of complex roots of eq (1), y_1, ..., y_M, M ≤ N, we shall construct a linear projection modifying a:N to another time series b:N so that b(y_α) = 0, α = 1, ..., M, and so that

(a − b)^T G (a − b)*,   (2)

where "*" means complex conjugate, is a minimum for any positive definite Hermitian weighting matrix G. In other words, b:N is the series closest to a:N, in a weighted least squares sense, which has the chosen set of complex numbers among the roots of its Z transform. We refer to this process as root projection (RP).
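The root condition of eq (1) is easy to verify numerically. The following sketch (our own NumPy illustration, with our own variable names; not code from the paper) builds a series with three prescribed roots and checks eq (1) for each:

```python
import numpy as np

def geometric_vector(y, N):
    # Components y^n, n = 0..N, of the "geometric sequence" vector.
    return y ** np.arange(N + 1)

# Build a series a:N whose Z transform has three chosen roots.
# np.poly returns coefficients highest-degree first, so reverse them
# to get a_n as the coefficient of y^n.
roots = np.array([0.5 + 0.5j, -0.8, 1.25])
a = np.poly(roots)[::-1]
N = len(a) - 1

# eq (1): for each root y, the geometric vector annihilates the series.
for y in roots:
    assert abs(geometric_vector(y, N) @ a) < 1e-12
```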
In a slightly more general context, consider a (possibly complex) time series a:N, a set of complex numbers y_1, ..., y_M with M ≤ N, and a set of complex values v_1, ..., v_M. We construct a time series b:N whose difference from a:N is minimal among those series with b(y_α) = v_α, α = 1, ..., M.
Because of the slight complication due to the complex numbers involved, we shall set up and solve the above minimization problem using Lagrange multipliers. We want to find the time series b:N such that:

b:N = arg min over x of [ (a − x)^T G (a − x)* + Σ_{α=1}^{M} λ_α (x(y_α) − v_α)* + Σ_{α=1}^{M} λ_α* (x(y_α) − v_α) ].   (3)

Using index notation and differentiating eq (3) with respect to x_j* and λ_α*, we have:

(a − x)_i G_{ij} = λ_α (y_α^j)*,   x_k y_α^k = v_α.   (4)

Setting Y_{αi} = y_α^i and Q_{αβ} = y_α^j (G^{-1})_{jk} (y_β^k)*, solving for b_i in the first of eqs (4), substituting into the second equation, and solving for λ_α:

λ_α = (Q^{-1})_{αβ} (a_k y_β^k − v_β),   (5)

whence:

b_i = a_i − a_j y_α^j (Q^{-1})_{βα} (y_β^k)* (G^{-1})_{ki} + v_α (Q^{-1})_{βα} (y_β^k)* (G^{-1})_{ki}.   (6)
To make the connection to linear vector space theory, we start with complex N space, E^N (of dimension N+1), having the inner product given by the positive-definite Hermitian matrix G:

⟨u, w⟩ = u^T G w*,   (7)

and complex M space, E^M, having an inner product given by the Hermitian matrix Q^{-1} (assumed, here, to be non-singular), with an inner product definition similar to eq (7) and with vectors such as v (with components v_α, α = 1, ..., M) [3]. We define the mapping Y from E^N to E^M by:

(Ya)_α = y_α^i a_i = a(y_α).   (8)

Y has an adjoint mapping Y†, from E^M to E^N, given by the condition (Y†v)^T G a* = v^T Q^{-1} (Ya)*, from which:

Y† = G^{-1} (Y*)^T Q^{-1}.   (9)
In matrix notation eq (6) becomes:

b = (I − P) a + Y† v,   P = Y† Y,   (10)

the projection property, P² = P, following directly from eq (6). It also follows directly from eq (6) that the range of I − P is contained in the null space of Y, while the range of P is contained in the range of Y†. Since both I − P and P, as well as the null space of Y and the range of Y†, decompose E^N (the latter being an orthogonal decomposition), I − P and P are the orthogonal projections onto the respective spaces. The mapping I − P is, then, the desired root projection.¹
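The construction of eqs (8)-(10) can be verified directly in a few lines. In this sketch (our own small random example; the sizes, weights, and variable names are illustrative only) Y is the M×(N+1) matrix with entries y_α^i, Yd plays the role of Y†, and the projected series b satisfies the interpolation constraints exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 3
a = rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)
ys = np.array([0.9, 0.3 + 0.4j, -0.5])        # chosen points y_alpha
v = np.array([0.0, 1.0 + 1.0j, 2.0])          # target values v_alpha

G = np.diag(rng.uniform(1.0, 2.0, N + 1))     # positive definite weight
Y = np.array([y ** np.arange(N + 1) for y in ys])   # Y[alpha, i] = y_alpha^i
Ginv = np.linalg.inv(G)
Q = Y @ Ginv @ Y.conj().T                     # Q_{alpha beta}, Hermitian
Yd = Ginv @ Y.conj().T @ np.linalg.inv(Q)     # adjoint Y† of eq (9)

P = Yd @ Y
b = (np.eye(N + 1) - P) @ a + Yd @ v          # eq (10)

assert np.allclose(Y @ b, v)                  # constraints b(y_alpha) = v_alpha
assert np.allclose(P @ P, P)                  # projection property
```

Setting v = 0 recovers pure root projection: b = (I − P)a then has each y_alpha as a root of its Z transform.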

A Gram-Schmidt Algorithm for Root Projection
In order to carry out root projection, we need an orthogonal basis for the range of Y† in the unitary space with metric G. From eq (9) we see that this space is spanned by vectors of the form (G^{-1} y_α)*. For any two such vectors:

⟨(G^{-1} y_α)*, (G^{-1} y_β)*⟩ = (y_α*)^T G^{-1} y_β,   (11)

which is the ordinary unitary inner product of the vectors G^{-1/2} y_α and G^{-1/2} y_β, where G^{-1/2} denotes the Hermitian square root of G^{-1}. Thus, an orthonormal basis chosen from the vectors G^{-1/2} y_α by applying the Gram-Schmidt process using the ordinary unitary inner product will determine an orthogonal basis for the range of Y† in the unitary space with metric G upon multiplying each vector by G^{-1/2}. However, when only a limited number of vectors need to be projected, the projection can be carried out more efficiently by: i) transforming the vector a to G^{1/2} a, ii) projecting G^{1/2} a in the Euclidean norm using the modified basis vectors G^{-1/2} y_α, and iii) transforming the result back by multiplication with G^{-1/2}.
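For diagonal (time) weighting, steps i)-iii) can be sketched as follows. This is our own illustration; the Gram-Schmidt orthogonalization is carried out here with a QR factorization, which is numerically equivalent, and all names and sizes are ours:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10
g = rng.uniform(0.5, 3.0, N + 1)              # diagonal of G (time weights)
Ghalf, Ginvhalf = np.sqrt(g), 1.0 / np.sqrt(g)

ys = np.array([0.7, -0.2 + 0.6j])             # roots to be imposed
Y = np.array([y ** np.arange(N + 1) for y in ys])
a = rng.standard_normal(N + 1)

# i) transform a by G^{1/2}; ii) project in the Euclidean norm against the
# modified basis vectors G^{-1/2} y_alpha* (orthonormalized via QR);
# iii) transform the result back by G^{-1/2}.
B = (Ginvhalf * Y.conj()).T                   # modified basis as columns
Qmat, _ = np.linalg.qr(B)                     # orthonormal basis
t = Ghalf * a
t = t - Qmat @ (Qmat.conj().T @ t)            # Euclidean complement projection
b = Ginvhalf * t                              # the root-projected series

assert np.allclose(Y @ b, 0)                  # the chosen y's are now roots
```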
The root projection process is especially simple when the set of y_α's is closed under complex conjugation (i.e., if y_α belongs to the set, so does y_α*) for the case of simple time or frequency weighting. Because of the closure under complex conjugation, the y_α vectors can be replaced by the vectors consisting of their real and imaginary parts, ℜ(y_α) and ℑ(y_α). By the time weighting case we mean that G^{-1} and G^{-1/2} are diagonal (and, therefore, positive and real), so that no unitary transformation is required. In that case each of the rows is multiplied by the same factor. Thus, the basis for the range of Y† can be obtained by applying the real form of the Gram-Schmidt process to the vectors ℜ(G^{-1/2} y_α) and ℑ(G^{-1/2} y_α). The root projection of a real time series, carried out as outlined above, will, in this case, also be real.

¹ A concise exposition of the linear algebra required for root projection, with G = I, can also be given in terms of the QR decomposition [4] (although we use the "transposed" form of that in [4]).
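The equivalence of the complex route and the real route for a conjugate pair of roots can be checked directly (again our own sketch, here with G = I):

```python
import numpy as np

n = 12
a = np.random.default_rng(3).standard_normal(n + 1)
y = 0.6 + 0.3j                                 # root; its conjugate is also used

# Complex basis: geometric vectors of y and y*.
Yc = np.array([y ** np.arange(n + 1), np.conj(y) ** np.arange(n + 1)])
# Real replacement: real and imaginary parts of the geometric vector of y.
Yr = np.array([Yc[0].real, Yc[0].imag])

Qc, _ = np.linalg.qr(Yc.conj().T)              # complex orthonormal basis
Qr, _ = np.linalg.qr(Yr.T)                     # real orthonormal basis
bc = a - Qc @ (Qc.conj().T @ a)                # complex-arithmetic projection
br = a - Qr @ (Qr.T @ a)                       # purely real projection

assert np.allclose(bc.imag, 0)                 # conjugate closure keeps b real
assert np.allclose(bc.real, br)                # and both routes agree
```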
In the case of frequency weighting we mean scalar weights applied to the DFT components of a time series. The DFT is, itself, a unitary transformation, and the orthogonalization process can be made real by the trick of putting ℜ(a_k)√2 in place of the DFT component a_k and putting ℑ(a_k)√2 in place of the DFT component a_{N−k}. The real dot product of two such transformed series is the same as the unitary dot product of their DFT's. A diagonal frequency weighting metric can then be applied to these transformed DFT series and the orthogonalization process carried out. If projection is to be carried out on a real time series, it can be done in these transformed coordinates; otherwise, the new orthogonal DFT coordinates have to be reconstructed before doing the projection. Of course, the projected DFT series has to be converted back to time series form after projection.
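The real-coordinate trick can be illustrated as follows. The helper below is our own (and assumes the unitary normalization of the DFT); it packs the conjugate-symmetric DFT of a real series into real coordinates and checks that real dot products match unitary ones:

```python
import numpy as np

def real_dft_coords(x):
    """Pack the conjugate-symmetric DFT of real x into real coordinates."""
    X = np.fft.fft(x) / np.sqrt(len(x))        # unitary DFT
    n = len(x)
    out = [X[0].real]                          # DC component is real
    for k in range(1, (n + 1) // 2):
        # Re(a_k)*sqrt(2) replaces a_k; Im(a_k)*sqrt(2) replaces a_{N-k}.
        out += [X[k].real * np.sqrt(2), X[k].imag * np.sqrt(2)]
    if n % 2 == 0:
        out.append(X[n // 2].real)             # Nyquist component is real
    return np.array(out)

rng = np.random.default_rng(2)
x, y = rng.standard_normal(9), rng.standard_normal(9)
X, Y = np.fft.fft(x) / 3.0, np.fft.fft(y) / 3.0   # unitary DFT, 1/sqrt(9)

# Real dot product of the packed coordinates = unitary dot product of DFT's,
# which in turn equals the time-domain dot product (Parseval).
assert np.isclose(real_dft_coords(x) @ real_dft_coords(y),
                  (X @ Y.conj()).real)
assert np.isclose(X @ Y.conj(), x @ y)
```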
The real form of root projection can also be obtained using QR algorithms such as those given in LINPACK [4]. However, the implementation of the QR algorithms in LINPACK consumes considerable CPU time and core for large N and does not easily allow storage of the orthogonal matrix. These disadvantages are overcome by the Gram-Schmidt method given above. Employing a form of the Gram-Schmidt algorithm suggested by G. W. Stewart [5,6], a dynamically dimensioned Fortran 77 code was constructed allowing efficient use of a user-selected buffer, with the option of saving the orthogonalized basis vectors on disc for rapid repeated projection [7]. For y_α with modulus greater than one, projection was carried out using the reciprocal root to avoid overflow. In this case the geometric series starts at y^{−N} and goes up to 1, a procedure equivalent to reversing the series to be projected.
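The reciprocal-root identity underlying this overflow trick is elementary (our own demonstration): a(y) = y^N times the reversed series evaluated at 1/y, so the geometric vector of 1/y, whose entries stay bounded, carries the same constraint.

```python
import numpy as np

N = 6
y = 3.0 + 4.0j                                 # modulus 5 > 1
a = np.arange(N + 1, dtype=complex)            # an arbitrary test series

# Direct evaluation a(y) = sum_n a_n y^n (powers of y grow with N) ...
lhs = (y ** np.arange(N + 1)) @ a
# ... equals y^N times the reversed series evaluated at the reciprocal root,
# whose geometric vector (1, 1/y, ..., 1/y^N) stays bounded.
rhs = y ** N * (((1 / y) ** np.arange(N + 1)) @ a[::-1])

assert np.isclose(lhs, rhs)
```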

Examples
We have principally used root projection for deconvolution; that topic is discussed in the third paper in this series [2]. Here we give four illustrative examples of root projection. The fourth example, applying root projection to Prony's method, may have some practical use; however, a detailed study has not been conducted.
1. Figure 1 shows a normalized 101 point Gaussian with 60 dB dynamic range (ratio of center to edge points). The vector shown in figure 2, which is one of the basis elements, is orthogonal to all of the 799 geometric root vectors. If we use the geometric root vectors built from the Gaussian of figure 1 and apply root projection to the rounded-off series built from the series in figure 3, the noise of the resultant series is reduced in magnitude, since the result is orthogonal to the 100 geometric root vectors formed from the roots of the Z transform of the Gaussian. The standard deviation of that noise agrees almost exactly with the theoretical estimate; note that the remaining error is proportional to the original round-off noise.
2. The use of an extensive causal boundary root set for front end filtering is shown in figures 5a and 5b.
To the Gaussian filtered curve of figure 2.3a in [1] we add the −40 dB noise distribution shown in figure 5a and apply root filtering using only the causal boundary roots from figure 2.1b of [1]. The noise after filtering, showing extensive front end reduction, is shown in figure 5b.
3. A more general example of root projection, employing both time and frequency weighting, is afforded by constructing an approximation to an optimal maximal-ripple low-pass filter. Figure 6 shows such a filter of 151 elements designed using the Remez exchange algorithm of McClellan et al. [8]. In this case the passband was set for 20% of the Nyquist frequency and the stopband for frequencies over 33% of the Nyquist frequency. We start with a delta series shifted so that the value 1.0 lies at position 76, in the center of the time interval. Unweighted projection with 101 roots evenly spaced on the unit circle from +60° around to −60°, that is, through the stopband, would produce the familiar series obtained by simple truncation; time weighting, with two extra pairs of roots placed near the edge of the band, yields the weighted filter. To remove the passband ripple from the weighted filter we subtract from it the shifted delta series. The resulting series should be close to zero in the passband region. We place 31 roots evenly in the passband and again place two extra pairs of roots near the edge of the band. Applying frequency weighted projection with a simple square frequency weight to hold the stopband attenuation in place, we obtain (after re-adding the shifted delta series) the doubly root projected filter shown in figure 8, where it is compared with the unweighted doubly projected and optimal filters. The weighted series closely resembles the optimal series of figure 6, with far smaller leakage at the ends of the time interval than the unweighted filter, though still larger than that of the optimal filter.
The attenuation in the stopband is 70 dB for the weighted filter, as against 36 dB for the unweighted filter and 116 dB for the optimal filter; the passband ripples of the three filters stand in the same relative order. The spectra for the three filters are compared in figure 9.
The root patterns in the stopband for the optimal filter are shown in figure 10. As can be seen, they also cluster near the edge of the stopband, but they are not evenly spaced throughout the stopband. Similarly, the roots of the optimal filter minus the shifted delta series, shown in figure 11, exhibit the same uneven spacing and clustering in the passband region.
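A crude, unweighted version of the first stage of this construction can be sketched as follows (our own sizes and root placement, far coarser than the 151-point design above): projecting a centered delta series against roots placed on the unit circle in the stopband produces a low-pass filter whose frequency response vanishes exactly at the roots.

```python
import numpy as np

n_taps = 41
delta = np.zeros(n_taps)
delta[n_taps // 2] = 1.0                       # shifted delta series

# Stopband roots: unit-circle points from 60 deg around to 300 deg,
# closed under conjugation.
angles = np.deg2rad(np.linspace(60, 300, 25))
roots = np.exp(1j * angles)
Y = np.array([y ** np.arange(n_taps) for y in roots])

Qm, _ = np.linalg.qr(Y.conj().T)               # orthonormal basis (G = I)
b = delta - Qm @ (Qm.conj().T @ delta)         # root-projected filter taps

freq = lambda w: np.exp(1j * w * np.arange(n_taps)) @ b
assert abs(freq(np.deg2rad(120))) < 1e-8       # exact zero at a placed root
assert abs(freq(0.0)) > 0.5                    # the passband (DC) survives
```

Weighting, as in the text, is what turns this crude projection into a useful approximation of the optimal filter.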
4. Our final example applies root projection to Prony's method, in which a time series is modeled as a sum of damped exponentials, so that its Z transform is

F(y) = Σ_{k=0}^{N} a_k y^k,  with a_k = Σ_{j=1}^{n} c_j y_j^k.   (14)

Here y_j is thought of as a complex number of modulus less than 1, y_j = e^{λ_j T}, ℜ(λ_j) < 0, and T a sampling interval (see, e.g., [9]). Eq (14) can be rewritten, taking advantage of the cyclotomic polynomial expression, as

F(y) = [P(y) + y^{N+1} R(y)] / Q(y),   (15)

where Q(y) is a polynomial of degree n whose roots are the (y_j)^{-1}, and P(y) + y^{N+1} R(y) is a polynomial of degree N+n whose N+1−n coefficients, from that of y^n to that of y^N, are zero.
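The vanishing of the middle coefficients in eq (15) can be checked numerically (our own small example): multiplying the Z transform of a sum of exponentials by Q(y) = Π_j (1 − y_j y) leaves zero coefficients from y^n through y^N.

```python
import numpy as np

N = 20
ys_p = [0.9, 0.5 + 0.3j, 0.5 - 0.3j]           # poles y_j, |y_j| < 1
cs = [1.0, 2.0, 2.0]                           # conjugate-paired amplitudes
a = sum(c * np.asarray(y, complex) ** np.arange(N + 1)
        for c, y in zip(cs, ys_p))             # a_k = sum_j c_j y_j^k

q = np.array([1.0 + 0j])
for y in ys_p:                                 # Q(y) = prod_j (1 - y_j y),
    q = np.convolve(q, np.array([1.0, -y]))    # ascending coefficients

prod = np.convolve(a, q)                       # coefficients of F(y) Q(y)
n = len(ys_p)
assert np.allclose(prod[n:N + 1], 0)           # middle coefficients vanish
```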
The goal, then, is to solve the algebraic equation:

F(y) Q(y) = P(y) + y^{N+1} R(y),   (16)

where P, Q, and R are all unknown; only the integer n is given. This is generally a very ill-conditioned process, but when the y_j's are real or occur in conjugate pairs, the following root projection procedure is often successful:
a. Form an initial estimate of the coefficients, c, of the series P + y^{N+1}R.
b. Project that series, using the weight matrix W and the geometric row vectors made from the roots of F(y).
The projected polynomial will have negligible values for the coefficients between n+1 and N+1 (depending on the noise in the data) and will be divisible by F(y). Further, P(y) can be read off from the projection and Q(y) found by simple division (using FFT methods, for instance, as discussed in [2]). As long as: i) the projection is stable, ii) the projected vector is not zero, and iii) the values of the weights are large enough to produce an answer within the noise for similar cases with no data noise, then the values of K1, K2, A1, and A2 have very little effect on the calculated poles and amplitudes. In the ordinary Prony case there are as many degrees of freedom added in R as are restricted between P and R (one degree is an irrelevant multiplicative constant, since F is expressed as a ratio). This method has been tried for several examples with both real and complex poles, with maximum to minimum root amplitude ratios up to 60:1 and with minimum root separation down to 0.01. Using double precision data it works well up to about eight poles, after which the positions of the larger poles start to degenerate (due to the ill-conditioning of the problem). The representation of the time series coefficients of F remains accurate to a relative error of about 10⁻¹², which represents approximately the cumulative accuracy of the algorithms involved.
Using data given to 16 bit precision (around 5 place accuracy), the method is able to resolve about 3 poles, with the positions of the larger poles again beginning to degenerate first.

Summary
The idea of root projection (RP) was introduced to permit a least squares modification of a one-sided time series which makes a given set of complex numbers roots of the Z transform of the series. The general framework of the method was presented, and techniques were given to adapt the method to the Gram-Schmidt algorithm. For time and frequency least squares weighting, a dynamically dimensioned form of the Gram-Schmidt algorithm has been developed to carry out root projection. The code has been employed for root projection on time series with up to 1600 points and achieved better than 12 place accuracy. Illustrative examples of root projection were shown for noise reduction and filter construction, as well as for a least squares extension of Prony's method.