Two-sided bounds on some output-related quantities in linear stochastically excited vibration systems with application of the differential calculus of norms

The output equation x_S(t) = Sx(t) can be viewed as a transformation of the state vector x(t) that is mapped by the rectangular matrix S into the output vector x_S(t). It is known that, under certain conditions, the solution x(t) is a random vector that can be completely described by its mean vector m_x(t) := m_{x(t)} and its covariance matrix P_x(t) := P_{x(t)}. If matrix A is asymptotically stable, then m_x(t) → 0 (t → ∞) and P_{x_S}(t) → P_S (t → ∞), where P_S is a positive (semi-)definite matrix. Similar results will be derived for some output-related quantities. The obtained results are of special interest to applied mathematicians and engineers.

ABOUT THE AUTHOR Ludwig Kohaupt received the equivalent to the master's degree (Diplom-Mathematiker) in Mathematics in 1971 and the equivalent to the PhD (Dr.phil.nat.) in 1973 from the University of Frankfurt/Main. From 1974 until 1979, Kohaupt was a teacher in Mathematics and Physics at a Secondary School. During that time (from 1977 until 1979), he was also an auditor at the Technical University of Darmstadt in Engineering Subjects, such as Mechanics, and especially Dynamics. From 1979 until 1990, he was with the Mercedes-Benz car company in Stuttgart as a Computational Engineer, where he worked in areas such as Dynamics (vibration of car models), Cam Design, Gearing, and Engine Design. Then, in 1990, Dr. Kohaupt combined his preceding experiences by taking over a professorship at the Beuth University of Technology Berlin (formerly known as TFH Berlin). He retired on April 01, 2014.

PUBLIC INTEREST STATEMENT
When a dynamical system with solution vector x of length n describes an engineering problem, only a few components of x are needed, as a rule. But, nevertheless, the whole pertinent initial value problem must be solved. In order to obtain only the components of interest, one defines an output matrix, say S, that selects them from x by defining the new output vector x_S := Sx; this shows its importance. This equation is called the output equation. For example, if the engineer wants to apply only the first, second, and nth component, then one defines S as S = [e_1, e_2, e_n]^T, where e_i denotes the ith unit vector for i = 1, 2, n.
In the present paper, new two-sided estimates on the mean vector and covariance matrix pertinent to the output vector x_S in linear stochastically excited vibration systems are derived that parallel those associated with x obtained recently.

Introduction
In order to make the paper more easily readable for a large readership, we first introduce the notions of output vector and output equation common to engineers. When a dynamical system with solution vector x of length n describes an engineering problem, only a few components of x are needed, as a rule. But, nevertheless, the whole pertinent initial value problem must be solved. In order to obtain only the components of interest, one defines an output or transformation matrix, say S, that selects them from x by defining the output x_S := Sx. This equation is called the output equation. For example, if the engineer wants to use only the first, second, and nth component, then one defines S as S = [e_1, e_2, e_n]^T, where e_i denotes the ith unit vector for i = 1, 2, n. In other words, by employing the output equation x_S = Sx, a subset of components can be selected from the whole set of degrees of freedom, which is usually necessary in practice. Of course, one can also define S such that it forms linear combinations of components of x. Whereas, in the preceding paper Kohaupt (2015b), the whole vector x was analyzed, in the present paper, it is replaced by the output x_S. The given comments on x_S show why this is important.
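To make the construction of S concrete, here is a small sketch (our own illustration, not from the paper; the helper names unit_vector, output_matrix, and apply_output are hypothetical) that builds the selection matrix for the first, second, and nth component and applies it to a state vector:

```python
def unit_vector(i, n):
    """Return the i-th unit vector (1-based) of length n as a list."""
    return [1.0 if j == i - 1 else 0.0 for j in range(n)]

def output_matrix(indices, n):
    """Stack the selected unit vectors as rows: S = [e_i]^T for i in indices."""
    return [unit_vector(i, n) for i in indices]

def apply_output(S, x):
    """Compute the output vector x_S = S x."""
    return [sum(s_ij * x_j for s_ij, x_j in zip(row, x)) for row in S]

n = 5
x = [10.0, 20.0, 30.0, 40.0, 50.0]
S = output_matrix([1, 2, n], n)   # S = [e_1, e_2, e_n]^T
x_S = apply_output(S, x)          # selects components 1, 2, and n of x
```

Each row of S is one selected unit vector, so Sx simply copies the chosen components of x.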
In this paper, a linear stochastic vibration model of the form ẋ(t) = Ax(t) + b(t), x(0) = x_0, with output equation x_S(t) = Sx(t) is investigated, where A is a real system matrix, b(t) white noise excitation, and x_0 an initial vector that can be completely characterized by its mean vector m_0 and its covariance matrix P_0. Likewise, the solution x(t), also called the response, is a random vector that can be described by its mean vector m_x(t) := m_{x(t)} and its covariance matrix P_x(t) := P_{x(t)}. For asymptotically stable matrices A, it is known that m_x(t) → 0 (t → ∞) and P_x(t) → P (t → ∞), where P is a positive (semi-)definite matrix. Similarly, for the output or transformed quantity x_S(t), one has m_{x_S}(t) → 0 (t → ∞) and P_{x_S}(t) → P_S (t → ∞) with a positive (semi-)definite matrix P_S. The asymptotic behavior of m_x(t) and P_x(t) − P was studied in Kohaupt (2015b).
In this paper, we investigate the asymptotic behavior of m_{x_S}(t) and P_{x_S}(t) − P_S. As appropriate norms for the investigation of this problem, again the Euclidean norm for m_{x_S}(t) and the spectral norm for P_{x_S}(t) − P_S are the respective natural choices; both norms are denoted by ‖ ⋅ ‖_2.
The main new points of the paper are
• the determination of two-sided bounds on m_{x_S}(t) and P_{x_S}(t) − P_S,
• the derivation of formulas for the right norm derivatives D_+^k ‖P_{x_S}(t) − P_S‖_2, k = 0, 1, 2, and
• the application of these results to the computation of the best constants in the two-sided bounds.
Special attention is paid to conditions ensuring the positiveness of the constants in the lower bounds when S is only rectangular and not square regular.
The paper is structured as follows.
In Section 2, the linear stochastically excited vibration model with output equation is presented.
Then, in Section 3, the transformed quantities m_{x_S}(t) and P_{x_S}(t) − P_S are determined from m_x(t) and P_x(t) − P, respectively, by appropriate use of the output matrix S as transformation matrix. Section 4 derives two-sided bounds on x_S(t) = Sx(t) with ẋ(t) = Ax(t), x(0) = x_0, as a preparation for deriving two-sided bounds on m_{x_S}(t) in Section 6. Section 5 determines two-sided bounds on Φ_S(t) := SΦ(t) with Φ̇(t) = AΦ(t), Φ(0) = E, as a preparation for deriving two-sided bounds on P_{x_S}(t) − P_S in Section 7. Section 8 studies the local regularity of P_{x_S}(t) − P_S. Then, in Section 9, as the main result, formulas for the right norm derivatives D_+^k ‖P_{x_S}(t) − P_S‖_2, k = 0, 1, 2, are obtained. Section 10, for the specified data in the stochastically excited model, presents applications, where the differential calculus of norms is employed to compute the best constants in the new two-sided bounds on m_{x_S}(t) and P_{x_S}(t) − P_S.

The linear stochastically excited vibration system with output equation
In order to make the paper as far as possible self-contained, we summarize the known facts on linear stochastically excited systems. In the presentation, we closely follow the line of Müller and Schiehlen (1985, Sections 9.1 and 9.2).
So, let us depart from the deterministic model in state-space form with system matrix A ∈ IR^{n×n}, state vector x(t) ∈ IR^n, excitation vector b(t) ∈ IR^n, t ≥ 0, output matrix S ∈ IR^{l×n}, and output vector x_S(t). We call (2) the output equation. It can be understood as a transformation that maps x(t) into the transformed quantity x_S(t) by applying the transformation matrix S to x(t).
Now, we replace the deterministic excitation b(t) by a stochastic excitation in the form of white noise. Thus, b(t) can be completely described by the mean vector m_b(t) and the central correlation matrix N_b(t, τ), where Q = Q_b is the n × n intensity matrix of the excitation and δ(t − τ) the δ-function (more precisely, the δ-functional).
From the central correlation matrix, for τ = t, one obtains the positive semi-definite covariance matrix. At this point, we mention that the definition of a real positive semi-definite matrix includes its symmetry. When the excitation is white noise, the deterministic initial value problem (1) can be formally maintained, as the theory of linear stochastic differential equations shows. However, the initial state x_0 must be introduced as a Gaussian random vector, which is to be independent of the excitation; here, the sign ∼ means that the initial state x_0 is completely described by its mean vector m_0 and its covariance matrix P_0. More precisely: x_0 is a Gaussian random vector whose density function is completely determined by m_0 and P_0 alone.
The stochastic response of the system (1) is formally given by a variation-of-constants formula in which, besides the fundamental matrix Φ(t) = e^{At} and the initial vector x_0, a stochastic integral occurs.
It can be shown that the stochastic response x(t) is a non-stationary Gauss-Markov process that can be described by the mean vector m_x(t) := m_{x(t)} and the correlation matrix N_x(t, τ) := N_{(x(t), x(τ))}. For τ = t, we get the covariance matrix P_x(t) := P_{x(t)}.
If the system is asymptotically stable, the first- and second-order properties of the stochastic response x(t) that we need are given by

m_x(t) = Φ(t) m_0, (7)

P_x(t) = Φ(t) (P_0 − P) Φ^T(t) + P, (8)

where the positive semi-definite n × n matrix P satisfies the Lyapunov matrix equation A P + P A^T + Q = 0. This is a special case of the matrix equation AX + XB = C, whose solution can be obtained by a method of Ma (1966). For the special case of diagonalizable matrices A and B, this is shortly described in Kohaupt (2015b, Appendix A.1).
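To illustrate the role of the Lyapunov matrix equation, the following self-contained sketch (our own illustration, assuming the sign convention A P + P A^T + Q = 0; the solver and the data are made up for this example) computes the stationary covariance P of a small stable system by rewriting the equation as the linear system (I ⊗ A + A ⊗ I) vec(P) = −vec(Q):

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    n, m, p, q = len(A), len(A[0]), len(B), len(B[0])
    K = [[0.0] * (m * q) for _ in range(n * p)]
    for i in range(n):
        for j in range(m):
            for k in range(p):
                for l in range(q):
                    K[i * p + k][j * q + l] = A[i][j] * B[k][l]
    return K

def solve(M, rhs):
    """Gaussian elimination with partial pivoting for M v = rhs."""
    n = len(M)
    M = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (M[r][n] - sum(M[r][k] * v[k] for k in range(r + 1, n))) / M[r][r]
    return v

def lyapunov(A, Q):
    """Solve A P + P A^T = -Q via (I (x) A + A (x) I) vec(P) = -vec(Q)."""
    n = len(A)
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    M1, M2 = kron(I, A), kron(A, I)
    M = [[M1[i][j] + M2[i][j] for j in range(n * n)] for i in range(n * n)]
    # column-stacked vec: vec(Q)[j*n + i] = Q[i][j]
    rhs = [-Q[i][j] for j in range(n) for i in range(n)]
    v = solve(M, rhs)
    return [[v[j * n + i] for j in range(n)] for i in range(n)]

A = [[0.0, 1.0], [-2.0, -3.0]]   # asymptotically stable (eigenvalues -1, -2)
Q = [[0.0, 0.0], [0.0, 1.0]]     # intensity matrix of the white-noise excitation
P = lyapunov(A, Q)
```

For this A and Q, one obtains the symmetric solution P = diag(1/12, 1/6), and the residual A P + P A^T + Q vanishes up to rounding.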
For asymptotically stable matrices A, one has lim_{t→∞} Φ(t) = 0 and thus, by (7) and (8), m_x(t) → 0 (t → ∞) and P_x(t) → P (t → ∞). In Kohaupt (2015b), we have investigated the asymptotic behavior of m_x(t) and P_x(t) − P.
In this paper, we want to derive formulas for m_{x_S}(t) and P_{x_S}(t) corresponding to those of (7) and (8) and study their asymptotic behavior. This will be done in the next five sections, that is, in Sections 3-7.

The output-related quantities m_{x_S}(t) and P_{x_S}(t)
In this section, we determine the output-related quantities m_{x_S}(t) and P_{x_S}(t) from the corresponding quantities m_x(t) and P_x(t) by appropriate use of the output matrix S as the transformation matrix.
The results of this section are known to mechanical engineers, but are added for the sake of completeness, especially for mathematicians.
One obtains the following lemma.
Lemma 1 (Formulas for m_{x_S}(t) and P_{x_S}(t)) Let S ∈ IR^{l×n} and x(t) be the solution vector of (1). Further, let x_S(t) be given by (2), i.e. x_S(t) = Sx(t).
Then, one has

m_{x_S}(t) = Φ_S(t) m_0, (11)

P_{x_S}(t) = Φ_S(t) (P_0 − P) Φ_S^T(t) + P_S, (12)

with Φ_S(t) := SΦ(t) and

P_S := S P S^T. (13)

Proof (i) One has m_{x_S}(t) = E[x_S(t)] = E[Sx(t)] = S E[x(t)] = S m_x(t), where here E denotes the expectation of a random vector. Using (7), this leads to (11).
(ii) Next, we show that, for the central correlation matrices N_x(t, τ) and N_{x_S}(t, τ), the identity

N_{x_S}(t, τ) = S N_x(t, τ) S^T (15)

holds. This is because

N_{x_S}(t, τ) = E[(x_S(t) − m_{x_S}(t)) (x_S(τ) − m_{x_S}(τ))^T] = S E[(x(t) − m_x(t)) (x(τ) − m_x(τ))^T] S^T = S N_x(t, τ) S^T.

Thus, (15) is proven. Setting τ = t, this implies P_{x_S}(t) = S P_x(t) S^T. Taking into account (8), this leads to (12). □
Remark Let the system matrix A be asymptotically stable. Then, Φ(t) → 0 (t → ∞) and thus, from (12) and (13), P_{x_S}(t) → P_S (t → ∞).
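The covariance rule of Lemma 1, P_{x_S}(t) = S P_x(t) S^T, can be checked on a toy example (our own illustration with made-up numbers): if S selects the first and third component of a 3-dimensional state, then S P_x S^T just picks out the corresponding 2 × 2 submatrix of P_x.

```python
def mat_mul(X, Y):
    """Matrix product of two lists-of-lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def transpose(X):
    return [list(col) for col in zip(*X)]

# Output matrix selecting components 1 and 3 of a 3-dimensional state
S = [[1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0]]

# A symmetric covariance matrix P_x (illustrative values)
P_x = [[4.0, 1.0, 0.0],
       [1.0, 3.0, 2.0],
       [0.0, 2.0, 5.0]]

# Covariance of the output vector: P_{x_S} = S P_x S^T
P_xS = mat_mul(mat_mul(S, P_x), transpose(S))   # → [[4.0, 0.0], [0.0, 5.0]]
```

The result is again symmetric and positive semi-definite, as the lemma requires.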
Two-sided bounds on x_S(t) = Sx(t)
In this section, we derive two-sided bounds on x_S(t) = Sx(t) with ẋ(t) = Ax(t), x(0) = x_0, as a preparation for Section 6. There, two-sided bounds on m_{x_S}(t) will be given based on those for x_S(t) here. For the positiveness of the constants in the lower bounds, we discuss two cases: the special case when matrix A is diagonalizable and the case of a general square matrix A.

Let S ∈ ℂ^{l×n} and x_S(t) = Sx(t). We obtain the following theorem.

Theorem 1 (Two-sided bound on x_S(t)) Let ‖ ⋅ ‖ be any vector norm.
Then, there exists a constant X_{S,0} ≥ 0 and for every ε > 0 a constant X_{S,1}(ε) > 0 such that

X_{S,0} e^{ν[A]t} ≤ ‖x_S(t)‖ ≤ X_{S,1}(ε) e^{(ν[A]+ε)t}, t ≥ 0, (19)

where ν[A] is the spectral abscissa of A. If A is diagonalizable, then ε = 0 may be chosen, and we write X_{S,1} instead of X_{S,1}(ε = 0).
If S is square and regular, then X_{S,0} > 0. Further, if S is square and regular, then instead of (19) we obtain a corresponding two-sided bound with positive lower constant. An interesting and important question is under what conditions the constant X_{S,0} is positive when S is only rectangular, but not necessarily square and regular. To assert that X_{S,0} is positive, additional conditions have to be imposed. We consider two cases.
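The flavor of the two-sided exponential bound can be seen numerically (a sketch under our own assumptions: the matrix, the constants X_0 and X_1, and the truncated Taylor series for e^{At} are all illustrative, not the paper's data). For a diagonalizable stable A with spectral abscissa −1, the norm of the solution is squeezed between two multiples of e^{−t}:

```python
import math

def expm_times(A, x, t, terms=60):
    """Approximate e^{At} x by a truncated Taylor series (fine for small ||At||)."""
    n = len(A)
    result = x[:]          # k = 0 term
    term = x[:]
    for k in range(1, terms):
        term = [sum(A[i][j] * term[j] for j in range(n)) * t / k for i in range(n)]
        result = [r + s for r, s in zip(result, term)]
    return result

def norm2(v):
    return math.sqrt(sum(c * c for c in v))

A = [[0.0, 1.0], [-2.0, -3.0]]   # eigenvalues -1 and -2, so spectral abscissa -1
x0 = [1.0, 0.0]

# Check X0 * e^{-t} <= ||e^{At} x0|| <= X1 * e^{-t} for illustrative constants
X0, X1 = 0.1, 3.0
ok = all(X0 * math.exp(-t) <= norm2(expm_times(A, x0, t)) <= X1 * math.exp(-t)
         for t in [0.5 * k for k in range(1, 11)])
```

The constants here were picked by hand for this one matrix; the paper's point is precisely how to compute the best such constants.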
Case 1: Diagonalizable matrix A In this case, we need the following hypotheses on A from Kohaupt (2011, Section 3.1).
(HS) the eigenvectors p_1, … , p_n; p̄_1, … , p̄_n form a basis of ℂ^m.
As a preparation for the subsequent derivations, we collect some definitions resp. notations and representations for the solution vector x(t) from Kohaupt (2006, 2011).

Representation of the basis
Under the hypotheses (H1), (H2), and (HS), from Kohaupt (2011), we obtain the following real basis functions for the ODE ẋ = Ax, where the formulas involve the decompositions of λ_k and p_k into their real and imaginary parts. As in Kohaupt (2011), the indices are chosen such that λ_{n+k} = λ̄_k, p_{n+k} = p̄_k, k = 1, … , n.
The spectral abscissa of A with respect to the initial vector x_0 ∈ IR^n
Let u_k^*, k = 1, … , m = 2n, be the eigenvectors of A^* corresponding to the eigenvalues λ_k, k = 1, … , m = 2n. Under (H1), (H2), and (HS), the solution x(t) of (1) has a series representation with uniquely determined coefficients c_{1k}, k = 1, … , m = 2n. Using the pertinent relations (see Kohaupt, 2011, Section 3.1 for the last relation), then according to Kohaupt (2011), the spectral abscissa ν_{x_0}[A] of A with respect to the initial vector x_0 ∈ IR^n is obtained. In the sequel, we need appropriate index sets.
Appropriate representation of x(t)
We have a representation of x(t) (cf. Kohaupt, 2011). Thus, due to (28) and (22), a corresponding representation with the functions f_k(t), k = 1, ⋯ , n, follows.

Appropriate representation of y(t) and ẏ(t) (needed in the Appendix)
Herewith, one obtains a corresponding estimate on x(t); compare Kohaupt (2011, (10)).
Thus, we obtain

Theorem 2 (Positiveness of the constant X_{S,0} in lower bound if A diagonalizable)
Let the hypotheses (H1), (H2), and (HS) for A be fulfilled, 0 ≠ x_0 ∈ IR^m, S ∈ IR^{l×m}, A be diagonalizable, as well as condition (37) be satisfied.
Then, there exists a positive constant X_{S,0} such that the lower bound holds for sufficiently large t_1 ≥ t_0.
• We mention that the quantities f_k(t) depend on the initial vector x_0 through their coefficients c_k^{(r)}, c_k^{(i)} (Kohaupt, 2011, (8)). To stress this fact, one can write f_k(t) = f_{k,x_0}(t).
Case 2: General square matrix A
In this case, we need the following hypotheses on A from Kohaupt (2011, Section 3.2).
(H1′) m = 2n and A ∈ IR^{m×m}.
In the case of a general square matrix A, we also have to collect some definitions resp. notations and representations of x(t) from Kohaupt (2006, 2011).

Representation of the basis
Under the hypotheses (H1′), (H2′), and (HS′), from Kohaupt (2011) we obtain the following real basis functions for the ODE ẋ = Ax, where p_k^{(l)} is decomposed into its real and imaginary part.

Appropriate representation of y(t) and ẏ(t) (needed in the Appendix)
Then, from (48) and similarly as in Kohaupt (2011, Section 3.2), there exists a constant X_{S,0} > 0 such that the lower bound holds for sufficiently large t_1 ≥ t_0.
Thus, we obtain

Theorem 3 (Positiveness of the constant X_{S,0} in lower bound if A general square)
Let the hypotheses (H1′), (H2′), and (HS′) for A be fulfilled, 0 ≠ x_0 ∈ IR^m, S ∈ IR^{l×m}, A be a general square matrix, as well as condition (55) be satisfied.
Then, there exists a positive constant X_{S,0} such that the lower bound holds for sufficiently large t_1 ≥ t_0.
Remark Sufficient algebraic conditions for (37) resp. (55) will be given in the Appendix; they are independent of the initial vector x 0 and the time t.
Moreover, for the positiveness of the constant in the lower bound, we discuss two cases: the special case when matrix A is diagonalizable and the case when A is general square.
We obtain

Theorem 4 (Two-sided bound on P_{x_S}(t) − P_S based on ‖Φ_S(t)‖_2^2)
Let A ∈ IR n×n , let Φ(t) = e At be the associated fundamental matrix with Φ(0) = E where E is the identity matrix. Further, let P 0 , P ∈ IR n×n be the covariance matrices from Section 2.
The two-sided bounds on Φ_S(t) = SΦ(t) can be derived in any norm. Let the matrix norm ‖ ⋅ ‖_∞ be given in the usual way. Thus, the two-sided bound on Φ_S(t) has been reduced to two-sided bounds on ‖S_j(t)‖_∞, j = 1, … , n.
Similarly to Theorem 1, we obtain

Theorem 5 (Two-sided bound on Φ_S(t) = SΦ(t) by e^{ν[A]t}) Let
A ∈ ℂ^{n×n} and Φ(t) be the fundamental matrix of A with Φ(0) = E, i.e. let Φ(t) be the solution of the initial value problem Φ̇(t) = AΦ(t), Φ(0) = E, and let Φ_S(t) = SΦ(t). Then, there exists a constant Φ_{S,0} ≥ 0 and for every ε > 0 a constant Φ_{S,1}(ε) > 0 such that

Φ_{S,0} e^{ν[A]t} ≤ ‖Φ_S(t)‖ ≤ Φ_{S,1}(ε) e^{(ν[A]+ε)t}, t ≥ 0, (65)

where ν[A] is the spectral abscissa of A.
If A is diagonalizable, then ε = 0 may be chosen, and we write Φ_{S,1} instead of Φ_{S,1}(ε = 0).
If S is square and regular, then Φ_{S,0} > 0.
Proof From (18) and (63), there exist constants Φ_{S,0,j} ≥ 0 and for every ε > 0 constants Φ_{S,1,j}(ε) > 0 such that corresponding two-sided bounds hold for ‖S_j(t)‖_∞, j = 1, … , n. Define Φ_{S,0} and Φ_{S,1}(ε) from these constants. Then, taking into account (64) and the relation in Kohaupt (2006, Proof of Theorem 7) as well as the equivalence of norms in finite-dimensional spaces, the two-sided bound (65) follows. The rest is clear from Theorem 1. □
Corresponding to Theorems 2 and 3, we obtain the following two theorems.

Theorem 6 (Positiveness of the constant Φ_{S,0} in lower bound if A diagonalizable) Let the hypotheses (H1), (H2), and (HS) for A be fulfilled, S ∈ IR^{l×m}, A be diagonalizable, as well as condition (37) be satisfied with f_k(t) = f_{k,e_j}(t), j = 1, … , m.
Then, there exists a positive constant Φ_{S,0} such that the lower bound holds for sufficiently large t_1 ≥ t_0.

Two-sided bounds on m_{x_S}(t)
According to Equation (11), we have m_{x_S}(t) = Φ_S(t) m_0, so that the two-sided bounds on x_S(t) = SΦ(t)x_0 from Section 4, with x_0 replaced by m_0, deliver two-sided bounds on m_{x_S}(t).

Two-sided bounds on P_{x_S}(t) − P_S = Φ_S(t) (P_0 − P) Φ_S^T(t)
Based on Theorems 4 and 5, we obtain

Corollary 1 (Two-sided bounds on P_{x_S}(t) − P_S based on e^{2ν[A]t}) Let A ∈ IR^{n×n}, let Φ(t) = e^{At} be the associated fundamental matrix with Φ(0) = E, where E is the identity matrix, as well as S ∈ IR^{l×n} and Φ_S(t) = SΦ(t). Further, let P_0, P ∈ IR^{n×n} be the covariance matrices from Section 2.
Then, there exists a constant p_{S,0} ≥ 0 and for every ε > 0 a constant p_{S,1}(ε) > 0 such that

p_{S,0} e^{2ν[A]t} ≤ ‖P_{x_S}(t) − P_S‖_2 ≤ p_{S,1}(ε) e^{2(ν[A]+ε)t}, t ≥ 0.

If P_0 − P and S are regular, then p_{S,0} > 0.
Remark If S ∈ IR^{l×n} is not square regular, under additional conditions stated in Theorems 6 and 7, it can also be asserted that p_{S,0} > 0.

Applications
In this section, we apply the new two-sided bounds on ‖P_{x_S}(t) − P_S‖_2 obtained in Section 7 as well as the differential calculus of norms developed in Sections 8 and 9 to a linear stochastic vibration model with output equation for an asymptotically stable system matrix and a white noise excitation vector.
In Subsection 10.1, the stochastic vibration model as well as its state-space form is given; in Subsection 10.2, the transformation matrix S is chosen; and in Subsection 10.3, the data are specified. In Subsection 10.4, the positiveness of the constants X_{S,0} and Φ_{S,0} in the lower bounds is verified. In Subsection 10.5, computations with the chosen data are carried out, such as the computation of P and P_0 − P as well as of the curves y = D_+^k ‖P_{x_S}(t) − P_S‖_2, k = 0, 1, 2, and of the curve y = ‖P_{x_S}(t) − P_S‖_2 along with its best upper and lower bounds for the two ranges t ∈ [0, 5] and t ∈ [5, 26]. In Subsection 10.6, computational aspects are shortly discussed.

The stochastic vibration model and its state-space form
Consider the multi-mass vibration model in Figure 1.
The associated initial-value problem is given by the equation of motion Mÿ + Bẏ + Ky = f(t), y(0) = y_0, ẏ(0) = ẏ_0, where y = [y_1, … , y_n]^T and f(t) = [f_1(t), … , f_n(t)]^T, as well as the matrices M, B, and K specified below.

Figure 1. Multi-mass vibration model.
Here, y is the displacement vector, f(t) the applied force, and M, B, and K are the mass, damping, and stiffness matrices, as the case may be.
In the state-space description, one obtains ẋ(t) = Ax(t) + b(t), x(0) = x_0, with x = [y^T, z^T]^T, z = ẏ, and x_0 = [y_0^T, z_0^T]^T, z_0 = ẏ_0, where the initial vector x_0 = [y_0^T, z_0^T]^T is characterized by the mean vector m_0 and the covariance matrix P_0.
The system matrix A and the excitation vector b(t) are given by

A = [ 0 E ; −M^{-1}K −M^{-1}B ], b(t) = [ 0 ; M^{-1}f(t) ],

respectively. The vector x(t) is called the state vector.
The (symmetric positive semi-definite) intensity matrix Q = Q_b is obtained from the (symmetric positive semi-definite) intensity matrix Q_f by

Q = Q_b = [ 0 0 ; 0 M^{-1} Q_f (M^{-1})^T ].

See Müller and Schiehlen (1985, (9.65)) and the derivation of this relation in Kohaupt (2015b, Appendix A.5).

The transformation matrix S and the output equation x S (t) = Sx(t)
We depart from the equation of motion in vector form, namely Mÿ + Bẏ + Ky = f(t), and rewrite it as ÿ = −M^{-1}Ky − M^{-1}Bẏ + M^{-1}f(t). Following Müller and Schiehlen (1985, (9.56), (9.57)), for a one-mass model with base excitation, we call ÿ_a the absolute acceleration of our vibration system; it can be written as ÿ_a = Sx with the transformation matrix

S = [ −M^{-1}K, −M^{-1}B ].

Our output equation therefore is x_S(t) = Sx(t), where S ∈ IR^{n×2n} is a rectangular, but not a square regular, matrix.
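As a consistency check of this construction (our own sketch with a made-up stiffness matrix K, proportional damping B, and M = E; the paper's actual data differ), the output Sx must reproduce the acceleration rows of Ax when f(t) = 0:

```python
n = 3  # illustrative size (the paper uses n = 5)

# Made-up symmetric stiffness matrix and proportional damping, M = E (assumption)
K = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]]
B = [[0.2 * v for v in row] for row in K]

def hstack(X, Y):
    return [rx + ry for rx, ry in zip(X, Y)]

def neg(X):
    return [[-v for v in row] for row in X]

I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
Z = [[0.0] * n for _ in range(n)]

# State-space matrix A = [[0, E], [-K, -B]] (M = E) and output matrix S = [-K, -B]
A = hstack(Z, I) + hstack(neg(K), neg(B))
S = hstack(neg(K), neg(B))

def mat_vec(M_, v):
    return [sum(m * c for m, c in zip(row, v)) for row in M_]

x = [0.5, -1.0, 2.0, 0.1, 0.0, -0.3]  # arbitrary state [y; y_dot]
accel_from_A = mat_vec(A, x)[n:]      # last n rows of A x give y_ddot
accel_from_S = mat_vec(S, x)          # output x_S = S x
```

The two results agree by construction, which is exactly the statement ÿ_a = Sx for f(t) = 0.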

Data for the model
As of now, we specify the data for M, B, and K; one of the formulas distinguishes the case when n is even. We choose n = 5 in this paper so that the state-space vector x(t) has the dimension m = 2n = 10.
The mean vector m_0 of the initial state is specified for the output equation x_S(t) = Sx(t) similarly as in Kohaupt (2002) for y_0 and ẏ_0. For the 10 × 10 matrix P_0, a specific choice is made. The white-noise force vector f(t) is specified so that its intensity matrix Q_f ∈ IR^{n×n} with q_{f,nn} =: q takes the stated form.

We choose
With M = E, this leads to (see Kohaupt, 2015b, Appendix A.5)

Positiveness of the constants
Therefore, q_5 and q̄_5 are linearly independent. Thus, by Lemma A.1 and Theorem 2 resp. Theorem 6, the constants X_{S,0} and Φ_{S,0} are positive. Therefore, also the constant p_{S,0} is positive.

Computations with the specified data
(i) Bounds on y = Φ_S(t)m_0 in the vector norm ‖ ⋅ ‖_2 Upper bounds on y = Φ(t)m_0 in the vector norm ‖ ⋅ ‖_2 for the two cases (I) and (II) of m_0 are given in Kohaupt (2002, Figures 2 and 3). There, we had a deterministic problem with f(t) = 0 and the solution vector x(t) = Φ(t)x_0, where x_0 there had the same data as m_0 here. We mention that, for the specified data, ν_{m_0}[A] = ν[A] in both cases; see Kohaupt (2006, p. 154) for a method to prove this. For the sake of brevity, we do not compute or plot the lower or upper bounds and thus the two-sided bounds on y = Φ_S(t)m_0, but leave this to the reader.
(ii) Computation of P and P_0 − P The computation of these matrices was already done in Kohaupt (2015b, Subsection 3). There, we saw that P is symmetric and P_0 − P symmetric and regular (but not positive definite). Matrix P_0 − P is needed to compute the curve y = ‖P_{x_S}(t) − P_S‖_2.
(iii) Computation of the right norm derivatives The computation of the curves y = D_+^k ‖P_{x_S}(t) − P_S‖_2, k = 0, 1, 2 for the given data is done according to Section 9 with C = P_0 − P. The pertinent curves are illustrated in Figures 2-4.
We have checked the results numerically by difference quotients. More precisely, we have investigated the approximations of D_+^k ‖P_{x_S}(t) − P_S‖_2, k = 0, 1, 2, at t = 2.5 by appropriate difference quotients, so that the computational results are well underpinned by the difference quotients. As we see, the approximation of D_+^2 g(t) = D_+^2 ‖P_{x_S}(t) − P_S‖_2 by the difference quotient of D_+ g(t) is much better than by the second difference quotient of g(t), which was to be expected, of course.
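The difference-quotient check can be mimicked on a scalar model function (purely illustrative; g below is a stand-in for ‖P_{x_S}(t) − P_S‖_2, whose actual values are not reproduced here):

```python
import math

def g(t):
    """Scalar stand-in for the norm curve: g(t) = e^{-2t}."""
    return math.exp(-2.0 * t)

def d_plus_g(t):
    """Exact right derivative of g."""
    return -2.0 * math.exp(-2.0 * t)

t, h = 2.5, 1e-5

# Forward difference quotients approximating D_+ g(t) and D_+^2 g(t)
dq1 = (g(t + h) - g(t)) / h
dq2 = (g(t + 2 * h) - 2 * g(t + h) + g(t)) / (h * h)
```

For smooth g, dq1 and dq2 agree with the exact first and second derivatives up to O(h), which is the kind of agreement used above to underpin the computed norm derivatives.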
(iv) Bounds on y = P_{x_S}(t) − P_S = Φ_S(t) (P_0 − P) Φ_S^T(t) in the spectral norm ‖ ⋅ ‖_2 Let ν := ν[A] be the spectral abscissa of the system matrix A. With the given data, we obtain ν < 0, so that the system matrix A is asymptotically stable.
The upper bound on y = ‖P_{x_S}(t) − P_S‖_2 = ‖Φ_S(t) (P_0 − P) Φ_S^T(t)‖_2 is given by y = p_{S,1}(ε) e^{2(ν+ε)t}, t ≥ 0. Here, ε = 0 can be chosen since matrix A is diagonalizable. But, in the programs, we have chosen the machine precision ε = eps = 2^{−52} ≐ 2.2204 × 10^{−16} of MATLAB in order not to be bothered by this question. With y_{1,ε}(t) := p_{S,1}(ε) e^{2(ν+ε)t}, t ≥ 0, the optimal constant p_{S,1}(ε) in the upper bound is obtained by the two conditions y_{1,ε}(t_c) = ‖P_{x_S}(t_c) − P_S‖_2 and y′_{1,ε}(t_c) = D_+ ‖P_{x_S}(t_c) − P_S‖_2, where t_c is the place of contact between the curves. This is a system of two nonlinear equations in the two unknowns t_c and p_{S,1}(ε). By eliminating p_{S,1}(ε), this system is reduced to the determination of the zero of h(t) := D_+ ‖P_{x_S}(t) − P_S‖_2 − 2(ν + ε) ‖P_{x_S}(t) − P_S‖_2. In Figure 6, the curve y = ‖P_{x_S}(t) − P_S‖_2 along with the best upper and lower bounds is illustrated with stepsize Δt = 0.01. The upper bound is valid for t ≥ t_1 ≐ 6.031.
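The contact-point idea behind the optimal constant can be mimicked on a scalar example (our own sketch; the curve g and the rates are made up). For g(t) = (1 + t)^2 e^{−2t} with ν = −1 and ε = 0.5, the optimal constant in g(t) ≤ c(ε) e^{2(ν+ε)t} is c(ε) = max_t g(t) e^{−2(ν+ε)t} = 4/e, attained at the contact point t_c = 1:

```python
import math

nu, eps = -1.0, 0.5

def g(t):
    """Illustrative norm-like curve with decay rate 2*nu and a polynomial factor."""
    return (1.0 + t) ** 2 * math.exp(2.0 * nu * t)

def ratio(t):
    """g(t) divided by the bound shape e^{2(nu+eps)t}; its maximum is c(eps)."""
    return g(t) * math.exp(-2.0 * (nu + eps) * t)

# Coarse grid search for the contact point; a root finder on the tangency
# condition would refine it, as in the paper's approach
ts = [k * 0.001 for k in range(0, 20001)]   # t in [0, 20]
t_c = max(ts, key=ratio)
c_eps = ratio(t_c)
```

A grid search suffices for this smooth scalar example; in the paper, the analogous contact point t_c is found as the zero of a scalar function of t.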

Computational aspects
In this subsection, we say something about the computer equipment and the computation time for some operations.
(i) As to the computer equipment, the following hardware was available: an Intel Pentium D (3.20 GHz, 800 MHz front-side bus, 2 × 2 MB DDR2-SDRAM with 533 MHz high-speed memory). As software package, we used MATLAB, Version 6.5.
(ii) The computation time t of an operation was determined by the command sequence t_i = clock; operation; t = etime(clock, t_i); it is output in seconds, rounded to two decimal places, by MATLAB. For the computation of the eigenvalues of matrix A, we used the command [XA, DA] = eig(A); the pertinent computation time is less than 0.01 s. To determine Φ(t) = e^{At}, we employed the MATLAB routine expm. For the computation of the 501 values t, y, yu, yl in Figure 5, it took t(Table for Figure 5) = 1.17 s. Here, t stands for the time value running from t_0 = 0 to t_e = 25 with stepsize Δt = 0.05; y stands for the value of ‖P_{x_S}(t) − P_S‖_2, yu for the value of the best upper bound p_{S,1}(ε) e^{2(ν+ε)t}, and yl for the value of the lower bound p_{S,0} e^{2νt}. For the computation of the 2501 values t, y, yu, yl in Figure 6, it took t(Table for Figure 6) = 6.35 s.

Conclusion
In the present paper, a linear stochastic vibration system of the form ẋ(t) = Ax(t) + b(t), x(0) = x_0, with output equation x_S(t) = Sx(t) was investigated, where A is the system matrix and b(t) white noise excitation. The output equation x_S(t) = Sx(t) is viewed as a transformation of the state vector x(t) mapped by the rectangular matrix S into the output vector x_S(t). If the system matrix A is asymptotically stable, then the mean vector m_{x_S}(t) and the covariance matrix P_{x_S}(t) both converge with m_{x_S}(t) → 0 (t → ∞) and P_{x_S}(t) → P_S (t → ∞) for some symmetric positive (semi-)definite matrix P_S. This raises the question of the asymptotic behavior of both quantities. The pertinent investigations are made in the Euclidean norm ‖ ⋅ ‖_2 for m_{x_S}(t) and in the spectral norm, also denoted by ‖ ⋅ ‖_2, for P_{x_S}(t) − P_S. The main new points are the derivation of two-sided bounds on both quantities, the determination of the right norm derivatives D_+^k ‖P_{x_S}(t) − P_S‖_2, k = 0, 1, 2, and, as application, the computation of the best constants in the bounds. In the presentation, the author exhibits the relations between the quantities m_x(t), P_x(t) − P, and the formulas for D_+^k ‖P_x(t) − P‖_2, on the one hand, and the corresponding output-related quantities m_{x_S}(t), P_{x_S}(t) − P_S, and D_+^k ‖P_{x_S}(t) − P_S‖_2, on the other hand. As a result, we obtain that there is a close relationship between these quantities. Special attention is paid to the positiveness of the constants in the lower bounds if the transformation matrix is only rectangular and not necessarily square and regular. In the Appendix, a sufficient algebraic condition for the positiveness of the constants in the lower bounds is derived that is independent of the initial vector and the time variable. To make sure that the (new) formulas for D_+^k ‖P_{x_S}(t) − P_S‖_2 are correct, we have checked them by various difference quotients.
They underpin the correctness of the numerical values for the specified data.
The computation time to generate the last figure with a 10 × 10 matrix A is about 6 seconds. Of course, in engineering practice, much larger models occur. As in earlier papers, we mention that in this case engineers usually employ a method called condensation to reduce the size of the matrices.
We have shifted the details of the positiveness of the constants in the lower bounds to the Appendix in order to make the paper easier to comprehend.
The numerical values were given in order that the reader can check the results.
Altogether, the results of the paper should be of interest to applied mathematicians and particularly to engineers.