Two-sided bounds on the mean vector and covariance matrix in linear stochastically excited vibration systems with application of the differential calculus of norms

For the linear stochastic vibration model ẋ(t) = Ax(t) + b(t), x(0) = x_0, with system matrix A and white noise excitation b(t), under certain conditions the solution x(t) is a random vector that can be completely described by its mean vector m_x(t) := m_{x(t)} and its covariance matrix P_x(t) := P_{x(t)}. If the matrix A is asymptotically stable, then m_x(t) → 0 (t → ∞) and P_x(t) → P (t → ∞), where P is a positive (semi-)definite matrix. As the main new points, in this paper we derive two-sided bounds on ‖m_x(t)‖_2 and ‖P_x(t) − P‖_2 as well as formulas for the right norm derivatives D_+^k‖P_x(t) − P‖_2, k = 0, 1, 2, and apply these results to the computation of the best constants in the two-sided bounds. The obtained results are of special interest to applied mathematicians and engineers.


PUBLIC INTEREST STATEMENT
In recent years, the author has developed a differential calculus for norms of vector and matrix functions. More precisely, differentiability properties of these quantities were derived for various vector and matrix norms, and formulas for the pertinent (right-hand, resp. left-hand) derivatives were obtained. These results have been applied to a number of linear and non-linear problems by computing the best constants in two-sided bounds on the solution of the pertinent initial value problems. In the present paper, the application area is extended to stochastically excited vibration systems. Specifically, new two-sided estimates on the mean vector and the covariance matrix are derived, and the optimal constants in these bounds are computed in a numerical example employing the differential calculus of norms.

Introduction
In this paper, linear stochastic vibration models of the form ẋ(t) = Ax(t) + b(t), x(0) = x_0, with real system matrix A and white noise excitation b(t) are investigated, in which the initial vector x_0 can be completely characterized by its mean vector m_0 and its covariance matrix P_0. Likewise, the solution x(t), also called the response, is a random vector that can be described by its mean vector m_x(t) := m_{x(t)} and its covariance matrix P_x(t) := P_{x(t)}. For asymptotically stable matrices A, it is known that m_x(t) → 0 (t → ∞) and P_x(t) → P (t → ∞), where P is a positive (semi-)definite matrix. This leads to the question of the asymptotic behavior of m_x(t) and P_x(t) − P. As appropriate norms for the investigation of this problem, the Euclidean norm for m_x(t) and the spectral norm for P_x(t) − P are the natural choices; both norms are denoted by ‖ ⋅ ‖_2.
The main new points of the paper are
• the determination of two-sided bounds on ‖m_x(t)‖_2 and ‖P_x(t) − P‖_2,
• the derivation of formulas for the right norm derivatives D_+^k‖P_x(t) − P‖_2, k = 0, 1, 2, and
• the application of these results to the computation of the best constants in the two-sided bounds.
The paper is structured as follows.
In Section 2, the linear stochastically excited vibration model in state-space form is presented.
Then, in Section 3, new two-sided bounds on ‖m_x(t)‖_2 are determined. In Section 4, preliminary work for two-sided bounds on ‖P_x(t) − P‖_2 is done that is employed in Section 5 to derive new two-sided bounds on ‖P_x(t) − P‖_2 itself. In Section 6, the local regularity of ‖P_x(t) − P‖_2 is studied. In Section 7, as a new result, formulas for the right norm derivatives D_+^k‖P_x(t) − P‖_2, k = 0, 1, 2 are obtained. In Section 8, for the specified data in the stochastically excited model, the differential calculus of norms is applied to compute the best constants in the new two-sided bounds on ‖m_x(t)‖_2 and ‖P_x(t) − P‖_2.
In Section 9, conclusions are drawn. Finally, in Appendix A, more details on some items are given.

The linear stochastically excited vibration system
In order to make the paper as far as possible self-contained, we summarize the known facts on linear stochastically excited systems. In the presentation, we follow closely the line of Müller and Schiehlen (1976, Sections 9.1 and 9.2).
So, let us depart from the deterministic model in state-space form ẋ(t) = Ax(t) + b(t), x(0) = x_0, with system matrix A ∈ ℝ^{n×n}, the state vector x(t) ∈ ℝ^n, and the excitation vector b(t) ∈ ℝ^n, t ≥ 0. Now, we replace the deterministic excitation b(t) by a stochastic excitation in the form of white noise. Thus, b(t) can be completely described by the mean vector m_b(t) = 0 and the central correlation matrix N_b(t, τ) = Q δ(t − τ), where Q = Q_b is the n × n intensity matrix of the excitation and δ(t − τ) the δ-function (more precisely, the δ-functional).
From the central correlation matrix, one obtains for τ = t the positive semi-definite covariance matrix. At this point, we mention that the definition of a real positive semi-definite matrix includes its symmetry.
When the excitation is white noise, the deterministic initial value problem (1) can be formally maintained, as the theory of linear stochastic differential equations shows. However, the initial state x_0 must be introduced as a Gaussian random vector x_0 ∼ (m_0, P_0), independent of the excitation; here, the sign ∼ means that the initial state x_0 is completely described by its mean vector m_0 and its covariance matrix P_0. More precisely: x_0 is a Gaussian random vector whose density function is completely determined by m_0 and P_0 alone.
The stochastic response of the system (1) is formally given by the variation-of-constants formula x(t) = Φ(t)x_0 + ∫_0^t Φ(t − s) b(s) ds, where, besides the fundamental matrix Φ(t) = e^{At} and the initial vector x_0, a stochastic integral occurs.
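The formal solution can be illustrated numerically: instead of evaluating the stochastic integral, one simulates sample paths with the Euler–Maruyama scheme and compares the sample covariance at a large time with the stationary covariance P obtained from the Lyapunov equation of this section. The following is only a sketch with a hypothetical 2 × 2 system; A, Q, the seed, and all step sizes are illustrative, not the paper's data:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)

# Hypothetical damped oscillator in state-space form (not the paper's data).
A = np.array([[0.0, 1.0], [-4.0, -1.0]])   # asymptotically stable
Q = np.array([[0.0, 0.0], [0.0, 1.0]])     # intensity matrix of the white noise

# Stationary covariance P solves A P + P A^T + Q = 0.
P = solve_continuous_lyapunov(A, -Q)

# Euler-Maruyama: x_{k+1} = x_k + A x_k dt + sqrt(Q) dW_k with dW_k ~ N(0, dt I).
dt, n_steps, n_paths = 1e-3, 20_000, 2_000
L = np.linalg.cholesky(Q + 1e-12 * np.eye(2))   # regularized square root of Q
x = np.zeros((n_paths, 2))
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, 2))
    x = x + x @ A.T * dt + dW @ L.T

P_hat = np.cov(x.T)                        # sample covariance at t = 20
print(np.allclose(P_hat, P, atol=0.1))     # Monte-Carlo estimate matches P
```

With the fixed seed, the Monte-Carlo estimate agrees with P to within the statistical error of a few per cent, illustrating the convergence P_x(t) → P.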
It can be shown that the stochastic response x(t) is a non-stationary Gauss–Markov process that can be described by the mean vector m_x(t) := m_{x(t)} and the correlation matrix N_x(t, τ) := N(x(t), x(τ)). For τ = t, we get the covariance matrix P_x(t) := P_{x(t)}.
If the system is asymptotically stable, the properties of first and second order of the stochastic response x(t) that we need are m_x(t) = Φ(t) m_0 and P_x(t) = Φ(t)(P_0 − P)Φ^T(t) + P, where the positive semi-definite n × n matrix P satisfies the Lyapunov matrix equation A P + P A^T + Q = 0. This is a special case of the matrix equation AX + XB = C, whose solution can be obtained by a method of Ma (1966). For the special case of diagonalizable matrices A and B, this is briefly described in Appendix A1.
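The Lyapunov matrix equation for the limiting covariance can be solved directly with SciPy. A minimal sketch under hypothetical data (note that `solve_continuous_lyapunov` solves A X + X A^H = Q, so the stationary covariance requires the right-hand side −Q):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical data, only to illustrate the call (not the paper's model).
A = np.array([[0.0, 1.0], [-2.0, -0.4]])   # asymptotically stable
Q = np.array([[0.0, 0.0], [0.0, 1.0]])     # intensity matrix

# SciPy solves A X + X A^H = Q; the stationary covariance needs A P + P A^T = -Q.
P = solve_continuous_lyapunov(A, -Q)

print(np.allclose(A @ P + P @ A.T, -Q))    # Lyapunov residual vanishes
print(np.all(np.linalg.eigvalsh(P) >= 0))  # P is positive semi-definite
```

Both checks hold, confirming that the computed P is a valid limiting covariance matrix.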
For an asymptotically stable matrix A, one has lim_{t→∞} Φ(t) = 0 and thus, from (6), m_x(t) → 0 (t → ∞) and P_x(t) → P (t → ∞). Therefore, it is of interest to investigate the asymptotic behavior of m_x(t) and P_x(t) − P. This investigation will be done in the next sections by giving two-sided bounds on both quantities in appropriate norms.
Even though the two-sided bounds on m_x(t) can be obtained by just applying known estimates, they will be stated for the sake of completeness in Section 3.
As opposed to this, the determination of two-sided bounds on P_x(t) − P leads to a new and interesting problem and will be solved in two steps described in Sections 4 and 5.

Two-sided bounds on m_x(t)
According to Equation (6), we have m_x(t) = Φ(t) m_0. From Kohaupt (2006, Theorem 8), one obtains two-sided bounds on m_x(t).
To see this, let m_0 ≠ 0 and let ‖ ⋅ ‖_2 be the Euclidean norm in ℝ^n. Then, there exist a constant X_0 > 0 and, for every ε > 0, a constant X_{1,ε} > 0 such that X_0 e^{ν[A]t} ≤ ‖m_x(t)‖_2 ≤ X_{1,ε} e^{(ν[A]+ε)t}, t ≥ 0, where ν[A] is the spectral abscissa of A.
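These two-sided exponential bounds can be checked, and grid estimates of the best constants obtained, with a few lines of NumPy/SciPy; the matrix A and the mean m_0 below are illustrative stand-ins, not the paper's data:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2x2 stable matrix and initial mean (not the paper's data).
A = np.array([[0.0, 1.0], [-2.0, -0.4]])
m0 = np.array([1.0, 0.0])
nu = max(np.linalg.eigvals(A).real)      # spectral abscissa nu[A]
eps = 0.05

ts = np.linspace(0.0, 30.0, 301)
norms = np.array([np.linalg.norm(expm(A * t) @ m0) for t in ts])

# Grid estimates of the best constants X0 and X_{1,eps} in
# X0 e^{nu t} <= ||m_x(t)||_2 <= X_{1,eps} e^{(nu+eps) t}.
X0 = (norms / np.exp(nu * ts)).min()
X1 = (norms / np.exp((nu + eps) * ts)).max()
print(np.all(X0 * np.exp(nu * ts) <= norms * (1 + 1e-12)))
print(np.all(norms <= X1 * np.exp((nu + eps) * ts) * (1 + 1e-12)))
```

On the chosen time grid, both inequalities hold by construction; Section 8 computes the truly optimal constants via contact conditions instead of a grid.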

Preliminary work for two-sided bounds on P_x(t) − P
In this section, we derive two-sided bounds that are of general interest beyond their application in Section 5. Therefore, more general assumptions than needed there will be made. We obtain the following lemma.
Lemma 1 (Two-sided bounds on ‖Ψ*CΨ‖_2) Let C ∈ ℂ^{n×n} with C* = C, where C* is the adjoint of C, and let Ψ ∈ ℂ^{n×n}. Further, let ‖ ⋅ ‖_2 be the spectral norm of a matrix.

Then, the two-sided bound c_0 ‖Ψ‖_2^2 ≤ ‖Ψ*CΨ‖_2 ≤ c_1 ‖Ψ‖_2^2 is valid, where c_0 = inf_{‖v‖_2=1} |(Cv, v)| and c_1 = ‖C‖_2.
Proof A decisive tool is the fact that for A ∈ ℂ^{n×n} with A* = A one has the two representations ‖A‖_2 = sup_{‖v‖_2=1} |(Av, v)| = max_{j=1,⋯,n} |λ_j(A)|. In the following, this will be applied to Ψ*CΨ.

(ii) Lower bound:
One has, for Ψ ≠ 0, ‖Ψ*CΨ‖_2 = sup_{‖v‖_2=1} |(CΨv, Ψv)| ≥ c_0 sup_{‖v‖_2=1} ‖Ψv‖_2^2 = c_0 ‖Ψ‖_2^2. For Ψ = 0, this lower bound remains valid.
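Lemma 1 can be sanity-checked numerically in the form c_0‖Ψ‖_2^2 ≤ ‖Ψ*CΨ‖_2 ≤ c_1‖Ψ‖_2^2 with c_1 = ‖C‖_2. The matrices below are arbitrary test data (here a positive definite C, so that c_0 is non-trivial), not related to the vibration model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
C = M @ M.T + np.eye(n)             # symmetric positive definite test matrix (C* = C)
Psi = rng.standard_normal((n, n))

lam = np.linalg.eigvalsh(C)         # real eigenvalues, ascending
# c0 = inf_{|v|=1} |(Cv, v)|: zero iff 0 lies in [lam_min, lam_max], else min |lam_j|.
c0 = 0.0 if lam[0] <= 0.0 <= lam[-1] else np.abs(lam).min()
c1 = np.abs(lam).max()              # = ||C||_2 for Hermitian C

s = np.linalg.norm(Psi, 2)          # spectral norm of Psi
val = np.linalg.norm(Psi.T @ C @ Psi, 2)
print(c0 * s**2 <= val + 1e-10)
print(val <= c1 * s**2 + 1e-10)
```

Both inequalities hold, as Lemma 1 guarantees; for an indefinite C, the branch c_0 = 0 applies and the lower bound becomes trivial.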

Remark In Lemma 1, it is known that c_1 = ‖C‖_2 = sup_{‖v‖_2=1} |(Cv, v)| = max_{j=1,⋯,n} |λ_j(C)|.
Similarly, one can derive a chain of relations for c_0 = inf_{‖v‖_2=1} |(Cv, v)|, as the next lemma shows.

Lemma 2 (Chain of relations for c_0) Let C ∈ ℂ^{n×n} with C* = C.
Then, a chain of relations analogous to (14) holds for c_0 = inf_{‖v‖_2=1} |(Cv, v)|. Now, let C additionally be regular, and let ‖ ⋅ ‖ denote any vector norm as well as the associated subordinate matrix norm.
Then, relation (15) holds; this is especially valid also for the spectral norm ‖ ⋅ ‖_2.
The proof will be given in Appendix A2.

Two-sided bounds on ‖P_x(t) − P‖_2
In this section, the results of Section 4 are employed to estimate ‖P_x(t) − P‖_2 from above and below by ‖Φ(t)‖_2^2 as well as by e^{2(ν[A]+ε)t} resp. e^{2ν[A]t}, where ν[A] is the spectral abscissa of the matrix A. New is the quadratic asymptotic behavior of ‖P_x(t) − P‖_2.
We show the following theorem.
Theorem 3 (Two-sided bounds on ‖P_x(t) − P‖_2) Let A ∈ ℝ^{n×n}, and let Φ(t) = e^{At} be the associated fundamental matrix with Φ(0) = E, where E is the identity matrix. Further, let P_0, P ∈ ℝ^{n×n} be the covariance matrices from Section 2. Then, c_0 ‖Φ(t)‖_2^2 ≤ ‖P_x(t) − P‖_2 ≤ c_1 ‖Φ(t)‖_2^2, t ≥ 0, with c_0 and c_1 from Lemma 1 applied to C = P_0 − P. Proof We obtain Theorem 3 by applying Lemmas 1 and 2 with Ψ = Φ*(t) = Φ^T(t) = e^{A^T t}, Ψ*(t) = Φ(t) = e^{At}, and C = P_0 − P.
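The representation P_x(t) − P = Φ(t)(P_0 − P)Φ^T(t) underlying this theorem can be verified numerically by integrating the standard covariance differential equation Ṗ_x = AP_x + P_xA^T + Q and comparing with the closed form; the data below are hypothetical stand-ins:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -0.4]])   # hypothetical stable matrix
Q = np.array([[0.0, 0.0], [0.0, 1.0]])
P0 = np.eye(2)                             # initial covariance
P = solve_continuous_lyapunov(A, -Q)       # stationary covariance

# Integrate dP_x/dt = A P_x + P_x A^T + Q with classical RK4 ...
def rhs(Px):
    return A @ Px + Px @ A.T + Q

t_end, n = 2.0, 2000
dt = t_end / n
Px = P0.copy()
for _ in range(n):
    k1 = rhs(Px); k2 = rhs(Px + 0.5*dt*k1)
    k3 = rhs(Px + 0.5*dt*k2); k4 = rhs(Px + dt*k3)
    Px = Px + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# ... and compare with the closed form P_x(t) - P = Phi(t) (P0 - P) Phi(t)^T.
Phi = expm(A * t_end)
print(np.allclose(Px - P, Phi @ (P0 - P) @ Phi.T, atol=1e-8))
```

The agreement of the integrated covariance with the closed-form expression illustrates why the bounds on ‖P_x(t) − P‖_2 reduce to bounds on ‖Φ(t)(P_0 − P)Φ^T(t)‖_2.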
Further, two-sided bounds can be derived by using Kohaupt (2006, Theorem 8). Thus, there exist a constant φ_0 > 0 and, for every ε > 0, a constant φ_{1,ε} > 0 such that φ_0 e^{ν[A]t} ≤ ‖Φ(t)‖_2 ≤ φ_{1,ε} e^{(ν[A]+ε)t}, t ≥ 0. This leads to the following corollary.
Corollary 4 (Two-sided bounds on ‖P_x(t) − P‖_2) Let A ∈ ℝ^{n×n}, and let Φ(t) = e^{At} be the associated fundamental matrix with Φ(0) = E, where E is the identity matrix. Further, let P_0, P ∈ ℝ^{n×n} be the covariance matrices from Section 2.
Then, there exist a constant p_0 ≥ 0 and, for every ε > 0, a constant p_1(ε) ≥ 0 such that p_0 e^{2ν[A]t} ≤ ‖P_x(t) − P‖_2 ≤ p_1(ε) e^{2(ν[A]+ε)t}, t ≥ 0. Remark Due to the equivalence of norms in finite-dimensional spaces, corresponding bounds as in Theorem 3 and Corollary 4 are valid also in all other (not necessarily multiplicative) matrix norms. Of course, besides the spectral norm ‖ ⋅ ‖_2, also the Frobenius norm ‖ ⋅ ‖_F (cf. Kohaupt, 2003) is of special interest in the context of stochastically excited systems.

Local regularity of the function ‖P_x(t) − P‖_2
We have the following lemma, which states, loosely speaking, that for every t_0 ≥ 0, the function t ↦ ‖P_x(t) − P‖_2 is sufficiently regular near t_0 for the right norm derivatives to exist.

Formulas for the norm derivatives D_+^k‖P_x(t) − P‖_2, k = 0, 1, 2

In this section, in a first step, for complex matrices A and C with C* = C, we define a matrix function Ψ(t) := Φ(t)CΦ*(t) and derive formulas for the right norm derivatives D_+^k‖Ψ(t)‖_2, k = 0, 1, 2. Even though the second representation below is also valid for C* ≠ C, the first one leads to simpler formulas. In a second step, the obtained formulas are employed for C = P_0 − P ∈ ℝ^{n×n} and A ∈ ℝ^{n×n} to deliver the formulas for D_+^k‖P_x(t) − P‖_2.
Let C ∈ ℂ^{n×n} with C* = C. Then, the eigenvalues λ_j(C), j = 1, ⋯, n of C are real, and for the spectral norm of C, one has the formula ‖C‖_2 = max_{j=1,⋯,n} |λ_j(C)|. Now, let A ∈ ℂ^{n×n}, Φ(t) = e^{At} its fundamental matrix, and define Ψ(t) := Φ(t)CΦ*(t). Then, Ψ(t) ∈ ℂ^{n×n} with Ψ*(t) = Ψ(t) and thus ‖Ψ(t)‖_2 = max_{j=1,⋯,n} |λ_j(Ψ(t))|; cf. Achieser and Glasman (1968, Chapter II.2, p. 62) or Kantorowitsch and Akilow (1964).
We mention that without C* = C, one would have the formula ‖Ψ(t)‖_2 = [λ_max(Ψ*(t)Ψ(t))]^{1/2}. Of course, this formula remains valid for C* = C, but it is more complicated and probably numerically inferior to the first representation of ‖Ψ(t)‖_2. The computation of D_+^k‖Ψ(t)‖_2 by the last formula would be similar to that in Kohaupt (2001). In order to get a formula for D_+^k‖Ψ(t)‖_2 in terms of the given matrices A and C, at the beginning we follow a similar line as in Kohaupt (2001, Section 3, pp. 6–7), however.
If one replaces t_0 by t, then one gets the functions t ↦ D_+^k‖Ψ(t)‖_2, k = 0, 1, 2. The norm derivatives D_+^k‖P_x(t) − P‖_2, k = 0, 1, 2 are obtained as the special case C = P_0 − P ∈ ℝ^{n×n} and A ∈ ℝ^{n×n}.
These formulas are needed in Section 8.

Applications
In this section, we apply the new two-sided bounds on ‖P x (t) − P‖ 2 obtained in Section 5 as well as the differential calculus of norms developed in Sections 6 and 7 to a linear stochastic vibration model with asymptotically stable system matrix and white noise excitation vector.
In Section 8.1, the stochastic vibration model as well as its state-space form is given, and in Section 8.2, the data are chosen. In Section 8.3, computations with the specified data are carried out, such as the computation of P and P_0 − P as well as of the curves y = D_+^k‖P_x(t) − P‖_2, k = 0, 1, 2 and of the curve y = ‖P_x(t) − P‖_2 along with its best upper and lower bounds. In Section 8.4, computational aspects are briefly discussed.

The stochastic vibration model and its state-space form
Consider the multi-mass vibration model in Figure 1.
The associated initial-value problem is given by M ÿ(t) + B ẏ(t) + K y(t) = f(t), y(0) = y_0, ẏ(0) = ẏ_0, t ≥ 0. Here, y is the displacement vector, f(t) the applied force vector, and M, B, and K are the mass, damping, and stiffness matrices, respectively. Matrix M is regular.
In the state-space description, one obtains ẋ(t) = Ax(t) + b(t), x(0) = x_0, where the initial vector x_0 = [y_0^T, z_0^T]^T is characterized by the mean vector m_0 and the covariance matrix P_0.
The system matrix A and the excitation vector b(t) are given by A = [[0, E], [−M^{-1}K, −M^{-1}B]] and b(t) = [0^T, (M^{-1}f(t))^T]^T, respectively. The vector x(t) is called the state vector.
The (symmetric positive semi-definite) intensity matrix Q = Q_b is obtained from the (symmetric positive semi-definite) intensity matrix Q_f by Q_b = [[0, 0], [0, M^{-1}Q_f (M^{-1})^T]]; see Müller and Schiehlen (1976, Formula (9.65)) and the derivation of this relation in Appendix A5.

Data for the model
From now on, we specify the data of the model. We choose n = 5 in this paper so that the state-space vector x(t) has the dimension m = 2n = 10.
Remark In Sections 2-7, we have denoted the dimension of x(t) by n. From the context, the actual dimension should be clear.
For m_0, we take values chosen similarly as in Kohaupt (2002) for y_0 and ẏ_0, and for the 10 × 10 matrix P_0, an appropriate covariance matrix is chosen. The white-noise force vector f(t) is specified so that its intensity matrix Q_f ∈ ℝ^{n×n} is determined by the single entry q_{f,nn} =: q.

We choose a value for the intensity parameter q. With M = E, this leads to the intensity matrix Q = Q_b (see Appendix A5). In the Lyapunov equation BX + XA = C of Section 2, we employ the replacements B → A, A → A^T, C → −Q to obtain the limiting covariance matrix X = P = lim_{t→∞} P_x(t).

Computations with the specified data

(i) Computation of bounds on ‖m_x(t)‖_2 Since m_x(t) = Φ(t) m_0, the situation is the same as in Kohaupt (2006, Figures 2 and 3). There, we had a deterministic problem with f(t) = 0 and the solution vector x(t) = Φ(t) x_0, where x_0 there had the same data as m_0 here. We mention that for the specified data, ν_{m_0}[A] = ν[A] in both cases; cf. Kohaupt (2006, p. 154) for a method to prove this. For the sake of brevity, we do not compute or plot the lower bounds and thus the two-sided bounds, but leave this to the reader.
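The state-space assembly and the computation of the limiting covariance described above can be sketched as follows. The 2-mass data (M, B, K, q) are hypothetical placeholders for the paper's n = 5 model, and the form of Q_b assumes b(t) = [0, M^{-1}f(t)]^T:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical 2-mass chain (the paper's actual n = 5 data are not reproduced here).
n = 2
M = np.eye(n)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])        # stiffness matrix
B = 0.2 * K                                     # proportional damping
Minv = np.linalg.inv(M)

# State-space matrix A = [[0, E], [-M^{-1}K, -M^{-1}B]].
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K,        -Minv @ B]])

nu = max(np.linalg.eigvals(A).real)             # spectral abscissa nu[A]

q = 0.05
Qf = np.zeros((n, n)); Qf[-1, -1] = q           # force intensity, single entry q
Qb = np.block([[np.zeros((n, n)), np.zeros((n, n))],
               [np.zeros((n, n)), Minv @ Qf @ Minv.T]])

P = solve_continuous_lyapunov(A, -Qb)           # limiting covariance
print(nu < 0, np.all(np.linalg.eigvalsh(P) >= -1e-12))
```

The first value confirms asymptotic stability, the second that the limiting covariance is (numerically) positive semi-definite.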
(ii) Computation of P and P_0 − P as well as of their associated eigenproblems With the data of Section 8.2, we obtain P and P_0 − P. The column vector of eigenvalues Λ_P and the modal matrix X_P, that is, the matrix whose columns are made up of the eigenvectors, are computed, showing that P is positive definite. Correspondingly, the eigenvalues and eigenvectors of P_0 − P are obtained.
(iii) Computation of the curves y = D_+^k‖P_x(t) − P‖_2, k = 0, 1, 2 The computation for the given data is done according to Section 7 with C = P_0 − P. The pertinent curves are illustrated in Figures 2–4. By inspection, there are no kinks. We have checked the results numerically by difference quotients. More precisely, setting g(t) := ‖P_x(t) − P‖_2, we have investigated the approximations of D_+g(t) and D_+^2g(t) by the difference quotients Δ_h g(t), Δ_h D_+g(t), and Δ_h^2 g(t). The computed curves are well underpinned by the difference quotients. As we see, the approximation of D_+^2 g(t) = D_+^2‖P_x(t) − P‖_2 by Δ_h D_+g(t) is much better than by Δ_h^2 g(t), which was to be expected, of course.
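The difference-quotient check just mentioned can be reproduced in a few lines: forward difference quotients of g(t) = ‖Φ(t)CΦ^T(t)‖_2 stabilize as the step size is halved, supporting the computed right derivative. A and C are illustrative stand-ins for the paper's data:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical data standing in for A and C = P0 - P.
A = np.array([[0.0, 1.0], [-2.0, -0.4]])
C = np.diag([1.0, -0.5])

def g(t):
    """g(t) = ||Phi(t) C Phi(t)^T||_2 with Phi(t) = e^{At}."""
    Phi = expm(A * t)
    return np.linalg.norm(Phi @ C @ Phi.T, 2)

t0 = 1.0
d = lambda h: (g(t0 + h) - g(t0)) / h     # forward difference quotient
# Successive quotients agree, supporting the value of D_+ g(t0).
err = abs(d(1e-4) - d(5e-5))
print(err < 1e-2)
```

The same idea with the second difference quotient (g(t_0 + 2h) − 2g(t_0 + h) + g(t_0))/h^2 underpins the second right norm derivative.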
(iv) Bounds on y = P_x(t) − P = Φ(t)(P_0 − P)Φ^T(t) in the spectral norm ‖ ⋅ ‖_2 Let ν := ν[A] be the spectral abscissa of the system matrix A. With the given data, we obtain ν < 0, so that the system matrix A is asymptotically stable.
With u_{1,ε}(t) := p_1(ε) e^{2(ν+ε)t}, t ≥ 0, the optimal constant p_1(ε) in the upper bound is obtained from the two conditions ‖P_x(t_c) − P‖_2 = u_{1,ε}(t_c) and D_+‖P_x(t) − P‖_2 |_{t=t_c} = u'_{1,ε}(t_c), where t_c is the place of contact between the curves. This is a system of two non-linear equations in the two unknowns t_c and p_1(ε).
After t_c has been computed from the above equations, the best constant p_1(ε) is obtained from p_1(ε) = ‖P_x(t_c) − P‖_2 e^{−2(ν+ε)t_c}. From the initial guess t_{c,0} = 3.0, the computations deliver the values of t_c and p_1(ε). In a similar way, in the lower bound y = p_0 e^{2νt}, we compute the best constant p_0 and the place of contact t_s; for the initial guess t_{s,0} = 6.0, the results are obtained analogously. The curve y = ‖P_x(t) − P‖_2 along with the best upper and lower bounds is illustrated in Figure 5.
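The best constants can also be estimated by a simple grid search instead of solving the non-linear contact equations: maximizing g(t)e^{−2(ν+ε)t} over a time grid yields the contact point t_c and p_1(ε), and minimizing g(t)e^{−2νt} yields p_0. A sketch with hypothetical data:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical data standing in for A and C = P0 - P.
A = np.array([[0.0, 1.0], [-2.0, -0.4]])
C = np.diag([1.0, -0.5])
nu = max(np.linalg.eigvals(A).real)          # spectral abscissa
eps = 0.02

def g(t):
    Phi = expm(A * t)
    return np.linalg.norm(Phi @ C @ Phi.T, 2)   # stands in for ||P_x(t) - P||_2

# The contact conditions for the upper bound p1(eps) e^{2(nu+eps)t} are
# equivalent to maximizing F(t) = g(t) e^{-2(nu+eps)t}; here done on a grid.
ts = np.linspace(0.0, 30.0, 601)
gs = np.array([g(t) for t in ts])
F = gs * np.exp(-2 * (nu + eps) * ts)
t_c, p1 = ts[F.argmax()], F.max()            # place of contact, best constant
p0 = (gs * np.exp(-2 * nu * ts)).min()       # best constant in the lower bound
print(np.all(p0 * np.exp(2*nu*ts) <= gs * (1 + 1e-12)))
print(np.all(gs <= p1 * np.exp(2*(nu+eps)*ts) * (1 + 1e-12)))
```

On the grid, both bounds hold by construction; the grid maximizer t_c can serve as the initial guess for the Newton-type solution of the exact contact equations.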
(v) Applicability of the first and second norm derivatives The first norm derivative D_+‖P_x(t) − P‖_2 was employed in Point (iv). Apart from this, it can be applied to determine the relative extrema of the curve y = ‖P_x(t) − P‖_2.
The second norm derivative D_+^2‖P_x(t) − P‖_2 can be used to compute the inflexion points. The details are left to the reader.

Computational aspects
In this subsection, we say something about the computer equipment and the computation time for some operations.
(i) As to the computer equipment, the following hardware was available: an Intel Pentium D (3.20 GHz, 800 MHz Front-Side-Bus, 2x2MB DDR2-SDRAM with 533 MHz high-speed memories). As software package, we used MATLAB, Version 6.5.
(ii) The computation time t of an operation was determined by the command sequence ti = clock; operation; t = etime(clock, ti); it is output in seconds, rounded to two decimal places, by MATLAB. For the computation of the eigenvalues of matrix A, we used the command [XA, DA] = eig(A); the pertinent computation time is less than 0.01 s. To determine Φ(t) = e^{At}, we employed the MATLAB routine expm. For the computation of the 251 values t, y, yu, yl in Figure 5, it took t(Table for Figure 5) = 0.83 s. Here, t stands for the time value running from t_0 = 0 to t_e = 25 with stepsize Δt = 0.1; y stands for the value of ‖P_x(t) − P‖_2, yu for the value of the best upper bound p_1(ε)e^{2(ν+ε)t}, and yl for the value of the best lower bound p_0 e^{2νt}.

Conclusion
In the present paper, linear stochastic vibration systems of the form ẋ(t) = Ax(t) + b(t), x(0) = x_0, driven by white noise b(t) are investigated. If the system matrix A is asymptotically stable, then the mean vector m_x(t) and the covariance matrix P_x(t) converge: m_x(t) → 0 (t → ∞) and P_x(t) → P (t → ∞) for some symmetric positive (semi-)definite matrix P. This raises the question of the asymptotic behavior of both quantities. The pertinent investigations are made in the Euclidean norm ‖ ⋅ ‖_2 for m_x(t) and in the spectral norm, also denoted by ‖ ⋅ ‖_2, for P_x(t) − P. The main new points are the derivation of two-sided bounds on both quantities, the derivation of the right norm derivatives D_+^k‖P_x(t) − P‖_2, k = 0, 1, 2 and, as an application, the computation of the best constants in the bounds. Since we have used a new way to determine the norm derivatives, we have checked the obtained formulas by various difference quotients; these underpin the correctness of the numerical values for the specified data.
Recall that the original system consists of a multi-mass vibration model with damping and white-noise force excitation. By a standard method, it is cast into state-space form.
As illustration of the results, the curves y = D k + ‖P x (t) − P‖ 2 , k = 0, 1, 2 are plotted as well as the curve y = ‖P x (t) − P‖ 2 together with the best two-sided bounds.
The computation time to generate the last figure with a 10 × 10 matrix A is less than a second. Of course, in engineering practice, much larger models occur. As in earlier papers, we mention that in this case engineers usually employ a method called condensation to reduce the size of the matrices.
We have added an Appendix to exhibit more details on some items in order to make the paper easier to comprehend.
The numerical values were given in order that the reader can check the results.
Altogether, the results of the paper should be of interest to applied mathematicians and particularly to engineers.

Appendix A

A1. Solution of the matrix equation BX + XA = C

For diagonalizable matrices A and B, write B = U D_B U^{-1} and A = V D_A V^{-1}; the transformed unknown Y = U^{-1}XV then satisfies a diagonal equation that can be solved entrywise. From this, we obtain the solution of the original matrix equation BX + XA = C by the relation X = UYV^{-1}. Remarks (i) The solution X could depend on the transformation matrices U and V. But it is actually independent of these matrices since the mapping L(X) := BX + XA : ℂ^{m×n} → ℂ^{m×n} is injective. Therefore, the solution X is uniquely determined.
(ii) If B = A * and C = C * , then X = X * and X is even positive semi-definite. The proof is left to the reader.
(iii) If A ∈ ℝ^{n×n} and B = A^T as well as C ∈ ℝ^{n×n} with C = C^T, then X = X^T and X is positive semi-definite.
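The diagonalization method of Appendix A1 can be sketched as follows; the function name and the random test data are ours, and the method presumes that A and B are diagonalizable with β_i + α_j ≠ 0:

```python
import numpy as np

def sylvester_diag(B, A, C):
    # With B = U diag(beta) U^{-1} and A = V diag(alpha) V^{-1}, the transformed
    # unknown Y = U^{-1} X V satisfies beta_i Y_ij + Y_ij alpha_j = (U^{-1} C V)_ij,
    # which is solved entrywise; then X = U Y V^{-1}.
    bet, U = np.linalg.eig(B)
    alp, V = np.linalg.eig(A)
    Ct = np.linalg.solve(U, C @ V)             # U^{-1} C V
    Y = Ct / (bet[:, None] + alp[None, :])     # elementwise division
    return U @ Y @ np.linalg.inv(V)

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4))
X = sylvester_diag(B, A, C)
print(np.allclose(B @ X + X @ A, C))           # residual of B X + X A = C
```

For the Lyapunov case B = A^T and symmetric C, the (generally complex) result has a negligible imaginary part and its real part is the symmetric solution.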
A2. Proof of Lemma 2

Let ε > 0 be chosen arbitrarily. Then, an identity for (Cx, x) holds which is proven by simplifying its right member. This entails the corresponding estimate if x + (1/ε)Cx ≠ 0 and x − (1/ε)Cx ≠ 0, ‖x‖ = 1. Thus, with y = x + (1/ε)Cx and z = x − (1/ε)Cx, the estimate follows if y ≠ 0 and z ≠ 0.

(S2)
This case is treated similarly as (S1).
Relation (22), with ≤ instead of ≥, is trivial. On the whole, the chain of relations (14) is proven.
Now, let C ∈ ℂ^{n×n} be regular, and let ‖ ⋅ ‖ denote any vector norm and the associated subordinate matrix norm. Because for the range of C one has R(C) = ℂ^n, one obtains inf_{‖v‖=1} ‖Cv‖ = 1/‖C^{-1}‖, thus showing relation (15). On the whole, the proof of Lemma 2 is completed.
Remark We have seen that the method described by Heuser (1975) to derive the sup relation for C ∈ ℂ^{n×n} with C = C* can be carried over to prove the corresponding inf relation. As opposed to this, using Taylor's method in Taylor (1958, pp. 322–323), it seems to be impossible to prove the inf relations in a similar way as the sup relations.
Since Heuser's book is written in German, we think that it is worthwhile to make Heuser's proof idea accessible to a broad readership. Of course, the author cannot rule out that the above inf formulas have been derived before. But he has not found such a derivation in the literature.

A3. Series expansion of ‖Ψ(t)‖_2
We determine the coefficients λ_j, j = 0, 1, 2 in the series expansion of Section 7, where we have set λ_j := λ_{j,max}, j = 0, 1, 2. The derivation follows a line similar to that of Kohaupt (2001, pp. 6–7). We note, however, that the operators Ψ(t) and T^{(0)}, T^{(1)}, T^{(2)} are defined in a way different from that in Kohaupt (2001); this is the first difference.

Remark
In the formula for λ_1, exactly those eigenvectors v^{(1)} lie in M_0 for which rank P_0 = rank[P_0, v^{(1)}]. (In Kohaupt (2001, p. 7, Remark), instead of P_0 it reads P_1 there, which is a typo.) Similarly, one proceeds in the formula for λ_2.
Theorem 6 (p = ∞, real vector function) Let s: ℝ_0^+ → ℝ^n be an n-dimensional real-valued vector function that is m times continuously differentiable, and let t_0 ∈ ℝ_0^+. Suppose additionally that any two components of s are either identical or intersect each other at most finitely often near t_0. Further, let I_{−1} = {1, ⋯, n}, and let I_k be the set of all indices i_k ∈ I_{k−1} at which S_i^{(k)} from (23) attains its maximum, k = 1, ⋯, m. Then, the right derivatives of t ↦ ‖s(t)‖_∞ at t = t_0 ≥ 0 exist and can be computed via the index sets I_k. Remark In our case, in Section 7, one has n = 2 and m = 2. Further, s(t), t_0 ≤ t ≤ t_0 + Δt_0, is analytic so that the additional condition is automatically fulfilled.