
Algorithms for polynomial spectral factorization and bounded-real balanced state space representations

  • Original Article
  • Mathematics of Control, Signals, and Systems

Abstract

We present an algorithm that, starting from an image representation of a strictly bounded-real system, computes a minimal balanced state variable, from which a minimal balanced state-space realization is readily obtained. The algorithm stems from an iterative procedure for computing a storage function, based on a technique for solving a generalization of the Nevanlinna interpolation problem.


References

  1. Ball JA, Gohberg I, Rodman L (1990) Interpolation of rational matrix functions. Birkhäuser, Berlin

  2. Callier FM (1985) On polynomial matrix spectral factorization by symmetric extraction. IEEE Trans Autom Control 30:453–464

  3. Coppel WA (1972) Linear systems. In: Notes on pure mathematics, vol 6. Australian National University, Canberra

  4. Cotroneo T, Willems JC (2003) The simulation problem for high order differential equations. Appl Math Comput 145:821–851

  5. Desai UB, Pal D (1984) A transformation approach to stochastic model reduction. IEEE Trans Autom Control 29:1097–1100

  6. Dym H (1989) J-Contractive matrix functions, reproducing kernel Hilbert spaces and interpolation. In: CBMS regional conference series in mathematics, vol 71. American Mathematical Society, Providence

  7. Fuhrmann PA (1976) Algebraic system theory: an analyst’s point of view. J Franklin Inst 301:521–540

  8. Fuhrmann PA, Rapisarda P, Yamamoto Y (2007) On the state of behaviors. Linear Algebra Appl 424(2–3):570–614

  9. Georgiou TT, Khargonekar PP (1987) Spectral factorization and Nevanlinna-Pick interpolation. SIAM J Control Optim 25(3):754–766

  10. Georgiou TT (1989) Computational aspects of spectral factorization and the tangential Schur algorithm. IEEE Trans Circuits Syst 36(1):103–108

  11. Georgiou TT, Khargonekar PP (1989) Spectral factorization of matrix valued functions using interpolation theory. IEEE Trans Circuits Syst 36(4):568–574

  12. Hoffmann J, Fuhrmann PA (1996) On balanced realizations for the classes of positive real and bounded real transfer functions. Linear Algebra Appl 245:107–146

  13. Fuhrmann PA, Ober R (1993) A functional approach to LQG balancing, model reduction and robust control. Int J Control 57:627–741

  14. Glover K (1984) All optimal Hankel-norm approximations of linear multivariable systems and their \(L_{\infty }\)-error bounds. Int J Control 39(6):1115–1195

  15. Ha BM (2009) Model reduction in a behavioral framework. PhD thesis, University of Groningen, The Netherlands

  16. Jonckheere EA, Silverman LM (1983) A new set of invariants for linear systems: application to reduced order compensator design. IEEE Trans Autom Control 28:953–964

  17. Kailath T (1980) Linear systems. Prentice-Hall, Englewood Cliffs

  18. Kaneko O, Rapisarda P (2007) On the Takagi interpolation problem. Linear Algebra Appl 425(2–3):453–470

  19. Moore BC (1981) Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans Autom Control 26:17–32

  20. Nevanlinna R (1919) Über beschränkte Funktionen, die in gegebenen Punkten vorgeschriebene Werte annehmen. Ann Acad Sci Fenn Ser A 1 Mat Dissertationes 13

  21. Opdenacker PC, Jonckheere EA (1985) LQG balancing and reduced LQG compensation of symmetric passive systems. Int J Control 41:73–109

  22. Opdenacker PC, Jonckheere EA (1988) A contraction mapping preserving balanced reduction scheme and its infinity norm error bounds. IEEE Trans Circuits Syst 35(2):184–189

  23. Pendharkar I, Pillai HK, Rapisarda P (2005) Vector-exponential time-series modeling for polynomial \(J\)-spectral factorization. In: Proceedings of the 44th IEEE conference on decision and control, and the European control conference 2005, Seville, Spain, December 12–15, 2005

  24. Pick G (1916) Über die Beschränkungen analytischer Funktionen, welche durch vorgegebene Funktionswerte bewirkt werden. Math Ann 77:7–23

  25. Polderman JW, Willems JC (1997) Introduction to mathematical system theory: a behavioral approach. Springer, Berlin

  26. Potapov VP (1960) The multiplicative structure of J-contractive matrix functions. Am Math Soc Transl (2) 15:131–243

  27. Rapisarda P (1998) Linear differential behaviors. PhD thesis, University of Groningen

  28. Rapisarda P, Willems JC (1997) State maps for linear systems. SIAM J Control Optim 35(3):1053–1091

  29. Rapisarda P, Willems JC (1997) The subspace Nevanlinna interpolation problem and the most powerful unfalsified model. Syst Control Lett 32(5):291–300

  30. Rosenthal J, Schumacher JM (1997) Realization by inspection. IEEE Trans Autom Control 42:1257–1263

  31. Trentelman HL, Rapisarda P (2000) Pick matrix conditions for sign-definite solutions of the Algebraic Riccati equation. SIAM J Control Optim 40(3):969–991

  32. Trentelman HL, Rapisarda P (1999) New algorithms for polynomial \(J\)-spectral factorization. Math Control Signals Syst 12:24–61

  33. Trentelman HL, Willems JC (1997) Every storage function is a state function. Syst Control Lett 32:249–260

  34. Uhlig F (1973) Simultaneous block-diagonalization of two real symmetric matrices. Linear Algebra Appl 7:281–289

  35. Willems JC (2007) The behavioral approach to open and interconnected systems. IEEE Control Syst Mag 27(6):46–99

  36. Willems JC, Rapisarda P (2002) Balanced state representations with polynomial algebra. In: Rantzer A, Byrnes CI (eds) Directions in mathematical systems theory and optimization. Springer lecture notes in control and information sciences, vol 286, pp 345–357

  37. Willems JC, Trentelman HL (1998) On quadratic differential forms. SIAM J Control Optim 36(5):1703–1749

Author information

Corresponding author

Correspondence to P. Rapisarda.

Appendices

Appendix A: Notation and background material

1.1 A.1 Notation

The space of \(\mathtt n\) dimensional real, respectively, complex, vectors is denoted by \(\mathbb R ^\mathtt{n}\), respectively, \(\mathbb C ^\mathtt{n}\), and the space of \(\mathtt {m}\times \mathtt {n}\) real, respectively, complex, matrices, by \(\mathbb R ^\mathtt{{m}\times \mathtt {n}}\), respectively, \(\mathbb C ^\mathtt{{m}\times \mathtt {n}}\). Whenever one of the two dimensions is not specified, a bullet \(\bullet \) is used; for example, \(\mathbb R ^{\bullet \times \mathtt{{w}}}\) denotes the set of matrices with \(\mathtt{{w}}\) columns and with an arbitrary finite number of rows. Given two column vectors \(x\) and \(y\), we denote with col\((x,y)\) the vector obtained by stacking \(x\) over \(y\); a similar convention holds for the stacking of matrices with the same number of columns. If \(A\in \mathbb C ^{{\mathtt{{p}}}\times {\mathtt{{m}}}}\), then \(A^*\in \mathbb C ^{{\mathtt{{m}}}\times {\mathtt{{p}}}}\) denotes its complex conjugate transpose. If \(S=S^\top \), then we denote with \(\sigma _+(S)\) the number of positive eigenvalues of \(S\).

The ring of polynomials with real coefficients in the indeterminate \(\xi \) is denoted by \(\mathbb R [\xi ]\); the ring of two-variable polynomials with real coefficients in the indeterminates \(\zeta \) and \(\eta \) is denoted by \(\mathbb R [\zeta ,\eta ]\). The space of all \(\mathtt {n}\times \mathtt {m}\) polynomial matrices in the indeterminate \(\xi \) is denoted by \(\mathbb R ^\mathtt{{n}\times \mathtt {m}}[\xi ]\), and that consisting of all \(\mathtt {n}\times \mathtt {m}\) polynomial matrices in the indeterminates \(\zeta \) and \(\eta \) by \(\mathbb R ^\mathtt{{n}\times \mathtt {m}}[\zeta ,\eta ]\). To a polynomial matrix \(P(\xi ) = \sum _{k \in \mathbb Z_{+} } P_k \xi ^k\), we associate its coefficient matrix, defined as the block-row matrix \(\mathrm{mat}(P):=\begin{bmatrix}P_0&P_1&\dots&P_N&\dots \end{bmatrix}\). Observe that \(\mathrm{mat}(P)\) has only a finite number of nonzero entries; moreover, \(P(\xi ) = \mathrm{mat}(P)\text{ col}(I_{\mathtt{{w}}},I_{\mathtt{{w}}}\xi ,\ldots )\). If \(F\in \mathbb C ^{\bullet \times \bullet }[\xi ]\), we define \(F^\sim (\xi ):=F(-\xi )^*\).
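The identity \(P(\xi ) = \mathrm{mat}(P)\,\mathrm{col}(I_{\mathtt{{w}}},I_{\mathtt{{w}}}\xi ,\ldots )\) can be checked numerically; the following sketch (hypothetical helper names, a finite truncation of the infinite block-column, not part of the paper's algorithm) illustrates the coefficient-matrix representation.

```python
import numpy as np

def poly_mat_coeffs(P_coeffs):
    """Stack the coefficient matrices [P_0, P_1, ..., P_N]
    into the (finite part of the) block-row matrix mat(P)."""
    return np.hstack(P_coeffs)

def eval_poly_mat(P_coeffs, xi):
    """Evaluate P(xi) = sum_k P_k xi^k."""
    return sum(Pk * xi**k for k, Pk in enumerate(P_coeffs))

# Illustrative example: P(xi) = P_0 + P_1 xi with 2x2 blocks
P0 = np.array([[1.0, 0.0], [0.0, 2.0]])
P1 = np.array([[0.0, 1.0], [1.0, 0.0]])
matP = poly_mat_coeffs([P0, P1])          # 2 x 4 block-row matrix

# Check P(xi) = mat(P) @ col(I, I*xi) at xi = 3
xi = 3.0
col = np.vstack([np.eye(2), np.eye(2) * xi])
assert np.allclose(matP @ col, eval_poly_mat([P0, P1], xi))
```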

We denote with \(\mathfrak{C }^{\infty }(\mathbb R ,\mathbb R ^\mathtt{{w}})\) the set of infinitely often differentiable functions from \(\mathbb R \) to \(\mathbb R ^\mathtt{{w}}\). The set of infinitely differentiable functions with compact support is denoted with \(\mathfrak D (\mathbb R ,\mathbb R ^{\mathtt{{w}}})\). The exponential function whose value at \(t\) is \(e^{\lambda t}\) is denoted with \(\exp _{\lambda }\).

1.2 A.2 Linear differential systems and their representations

A subspace \(\mathfrak{B }\) of \(\mathfrak{C }^{\infty }(\mathbb R ,\mathbb R ^{\mathtt{{w}}})\) is a linear differential behavior if it consists of the solutions of a system of linear, constant-coefficient differential equations; equivalently, if there exists a polynomial matrix \(R \in \mathbb R ^{\bullet \times \mathtt{{w}}}[\xi ]\) such that

$$\begin{aligned} \mathfrak{B }=\left\{ w\in \mathfrak{C }^{\infty }(\mathbb R ,\mathbb R ^{\mathtt{{w}}})\ | \ R\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)w =0\right\} . \end{aligned}$$

We denote with \(\mathfrak{L }^\mathtt{{w}}\) the set of linear differential systems with \({\mathtt{{w}}}\) external variables. In this paper, we also consider complex behaviors, i.e. subspaces of \(\mathfrak{C }^{\infty }(\mathbb R ,\mathbb C ^{\mathtt{{w}}})\) described by polynomial matrices with complex coefficients; the definitions and results that follow can be adapted with obvious modifications to this case.

The representation \(\mathfrak{B }=\mathrm{ker}~R\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\) is called a kernel representation of \(\mathfrak{B }\). If \(\mathfrak{B }\) is controllable (for a definition, see [25]), then it also admits an image representation, i.e. \(\mathfrak{B }=\mathrm{im}~M\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\), where \(M\in \mathbb R ^{\mathtt{{w}}\times \mathtt{l}}[\xi ]\); equivalently,

$$\begin{aligned} \mathfrak{B }=\left\{ w\in \mathfrak{C }^{\infty }(\mathbb R ,\mathbb R ^\mathtt{{w}}) \mid \exists \ell \in \mathfrak{C }^{\infty }(\mathbb R , \mathbb R ^\mathtt{{l}}) \text{ s.t. } w= M\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\ell \right\} . \end{aligned}$$
(15)

The variable \(\ell \) is called the latent variable of the system. In the following we denote the set of controllable behaviors with \(\mathtt{{w}}\) external variables by \(\mathfrak{L }^\mathtt{{w}}_\mathrm{cont}\). Given an image representation induced by a polynomial matrix \(M\), there exists a permutation matrix \(\Pi \) such that \(\Pi M=\mathrm{col}(D,N)\) with \(D\) nonsingular and \(ND^{-1}\) proper. The partition of the external variables associated with the permutation \(\Pi \) is then called an input–output partition for \(\mathfrak{B }=\mathrm{im}~M\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\) (see [25]).
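In the scalar latent-variable case, selecting an input–output partition reduces to picking an entry of \(M\) of maximal degree as \(D\), since then \(ND^{-1}\) is proper. A small symbolic sketch (the example \(M\) is purely illustrative, not from the paper):

```python
import sympy as sp

xi = sp.symbols('xi')

# Hypothetical image representation with two external variables:
# M = col(xi + 1, xi**2 + 3*xi + 2) acting on a scalar latent variable.
entries = [sp.Poly(xi + 1, xi),
           sp.Poly(xi**2 + 3*xi + 2, xi)]

# Choose as D an entry of maximal degree; the induced permutation Pi
# gives an input-output partition, since in the scalar case N*D**(-1)
# is proper iff deg N <= deg D.
d_idx = max(range(len(entries)), key=lambda i: entries[i].degree())
D, N = entries[d_idx], entries[1 - d_idx]
assert N.degree() <= D.degree()
```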

The representation (15) is a special case of a hybrid or latent variable representation

$$\begin{aligned} \mathfrak{B }=\left\{ w\in \mathfrak{C }^{\infty }(\mathbb R ,\mathbb R ^\mathtt{{w}}) \mid \exists \ell \in \mathfrak{C }^{\infty }(\mathbb R , \mathbb R ^\mathtt{{l}}) \text{ s.t. } R\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)w= M\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\ell \right\} \end{aligned}$$
(16)

where \(R\in \mathbb R ^{\bullet \times \mathtt{{w}}}[\xi ]\), \(M\in \mathbb R ^{\bullet \times \mathtt{l}}[\xi ]\). We call the behavior

$$\begin{aligned} \mathfrak{B }_\mathrm{full}=\left\{ (w,\ell )\in \mathfrak{C }^{\infty }(\mathbb R ,\mathbb R ^\mathtt{{w}+\mathtt {l}}) \mid R\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)w= M\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\ell \right\} \end{aligned}$$

the full behavior of the hybrid representation.

A state system is a special type of latent variable system, in which the latent variable, typically denoted with \(x\), satisfies the axiom of state, stated as follows. Given full trajectories \((w_{i},x_{i})\), \(i=1,2\), define their concatenation at zero as the trajectory

$$\begin{aligned} \left((w_{1},x_{1})\wedge (w_{2},x_{2})\right)(t):=\left\{ \begin{array}{ll} (w_{1},x_{1})(t)&\quad \text{ for } t<0\\ (w_{2},x_{2})(t)&\quad \text{ for } t\ge 0 \end{array} \right. \end{aligned}$$

Then \(x\) is a state variable (and \(\mathfrak{B }_\mathrm{full}\) a state system) if

$$\begin{aligned}&\left[ (w_{i},x_{i})\in \mathfrak{B }_\mathrm{full},\ i=1,2\right] \text{ and } \left[x_{1},x_{2} \text{ continuous at } 0\right] \text{ and } \left[x_{1}(0)=x_{2}(0)\right] \\&\quad \Longrightarrow \left[(w_{1},x_{1})\wedge (w_{2},x_{2})\in \overline{\mathfrak{B }_\mathrm{full}}\right] \end{aligned}$$

with \(\overline{\mathfrak{B }_\mathrm{full}}\) being the closure (in the topology of \(\mathfrak{L }_{1}^{\mathrm{loc}}\)) of \(\mathfrak{B }_\mathrm{full}\).

A state system is said to be minimal if the state variable has the minimal number of components among all state representations that have the same manifest behavior.

In [28] it was shown that a state variable (and in particular, a minimal one) for \(\mathfrak{B }\) can be obtained from the external or full trajectories by applying to them a state map, defined as follows. Let \(X \in \mathbb R ^{\mathtt{{n}}\times \mathtt {w}}[\xi ]\) be such that the subspace \(\left\{ (w,X\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)w)\mid w\in \mathfrak{B }\right\} \) of \(\mathfrak{C }^\infty (\mathbb R ,\mathbb R ^{\mathtt{{w}}+\mathtt{{n}}})\) is a state system; then \(X(\frac{\mathrm{d}}{\mathrm{d}t})\) is called a state map for \(\mathfrak{B }\), and \(X\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)w\) is a state variable for \(\mathfrak{B }\). In this paper, we consider state maps for systems in image form; in this case it can be shown (see [28]) that a state map can be chosen acting on the latent variable \(\ell \) alone, and we consider state systems \(w=M\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\ell \), \(x=X\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\ell \), with \(x\) a state variable. The definition of minimal state map follows in a straightforward manner. In [28], algorithms are given to construct a state map from the equations describing the system.
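The general state-map algorithms are those of [28]; purely as an illustration, in the single-input single-output case with \(M=\mathrm{col}(D,N)\), \(D\) monic of degree \(\mathtt{n}\) and \(ND^{-1}\) strictly proper, the choice \(X(\xi )=\mathrm{col}(1,\xi ,\ldots ,\xi ^{\mathtt{n}-1})\) acting on \(\ell \) is a state map, and yields the classical controller-canonical realization. A sketch under these assumptions (all names hypothetical):

```python
import numpy as np

def controller_canonical(d_coeffs, n_coeffs):
    """State realization induced by the state map
    X(xi) = col(1, xi, ..., xi^{n-1}) acting on the latent variable ell
    of the image representation u = D(d/dt) ell, y = N(d/dt) ell.
    d_coeffs: [d_0, ..., d_n] with D monic; N strictly proper."""
    n = len(d_coeffs) - 1
    assert d_coeffs[-1] == 1.0 and len(n_coeffs) <= n
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)           # x_k' = x_{k+1}, x_k = ell^{(k-1)}
    A[-1, :] = -np.array(d_coeffs[:-1])  # ell^{(n)} = u - sum_k d_k ell^{(k)}
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.zeros((1, n)); C[0, :len(n_coeffs)] = n_coeffs
    return A, B, C

# Hypothetical example: D(xi) = xi^2 + 3 xi + 2, N(xi) = xi + 1
A, B, C = controller_canonical([2.0, 3.0, 1.0], [1.0, 1.0])
s = 2.0   # compare C (sI - A)^{-1} B with N(s)/D(s) at a test point
tf = (C @ np.linalg.solve(s * np.eye(2) - A, B))[0, 0]
assert np.isclose(tf, (s + 1) / (s**2 + 3*s + 2))
```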

There are a number of important integer invariants associated with a behavior \(\mathfrak{B }\in \mathfrak{L }^\mathtt{{w}}\): the input cardinality denoted \(\mathtt{m}(\mathfrak{B })\); the output cardinality, denoted \(\mathtt{p}(\mathfrak{B })\); and the dimension of any minimal state variable for \(\mathfrak{B }\), also called the McMillan degree of \(\mathfrak{B }\), and denoted with \(\mathtt{n}(\mathfrak{B })\). Observe that the number of external variables \(\mathtt{{w}}\) equals \(\mathtt{{m}}(\mathfrak{B })+\mathtt{{p}}(\mathfrak{B })\). If \(\mathtt{{m}}(\mathfrak{B })=0\), the behavior is said to be autonomous; it can be proved that in this case \(\mathfrak{B }\) is finite-dimensional, and consists of vector polynomial-exponential trajectories, see [25]. Moreover, it can be shown that \(\mathtt{{m}}(\mathfrak{B })\) is the number of columns of the matrix \(M\) in any observable image representation \(\mathfrak{B }=\mathrm{im}~M\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\), i.e. one such that \(M(\lambda )\) has full column rank for all \(\lambda \in \mathbb C \). It can also be shown (see, for example sections 8–9 of [28]) that if \(M=\mathrm{col}(D,N)\) with \(D\) nonsingular and of maximal determinantal degree, then \(\deg (\det (D))=\mathtt{{n}}(\mathfrak{B })\), the McMillan degree of \(\mathfrak{B }\).
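The characterization \(\deg (\det (D))=\mathtt{{n}}(\mathfrak{B })\) is directly computable. A symbolic sketch with an illustrative \(D\) (assuming, as in the statement above, that \(D\) is nonsingular and of maximal determinantal degree):

```python
import sympy as sp

xi = sp.symbols('xi')

# Hypothetical image representation M = col(D, N) with D 2x2 nonsingular;
# the McMillan degree of the behavior is deg det D.
D = sp.Matrix([[xi + 1, 1],
               [0, xi**2 + 2*xi + 2]])
mcmillan_degree = sp.Poly(D.det(), xi).degree()
assert mcmillan_degree == 3   # det D = (xi+1)(xi^2+2xi+2)
```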

1.3 A.3 Quadratic differential forms

Let \(\Phi \in \mathbb R ^{\mathtt{w} \times \mathtt{w}}[\zeta ,\eta ]\) be written out in terms of its coefficient matrices \(\Phi _{k,\ell }\) as the (finite) sum \( \Phi (\zeta , \eta ) = \sum _{k,\ell \in \mathbb Z_{+} } \Phi _{k,\ell }\zeta ^k\eta ^\ell \). It induces the map \( Q_{\Phi }: \mathfrak{C }^\infty \left(\mathbb R ,\mathbb R ^\mathtt{w}\right)\rightarrow \mathfrak{C }^\infty \left(\mathbb R ,\mathbb R \right)\), defined by \(Q_{\Phi }(w) = \sum _{k,\ell \in \mathbb Z_{+} }\left(\frac{\mathrm{d}^k}{\mathrm{d}t^k}w\right)^\top \Phi _{k,\ell }\left(\frac{\mathrm{d}^\ell }{\mathrm{d}t^\ell } w\right)\). This map is called the quadratic differential form (QDF) induced by \(\Phi \). When considering QDFs, we can assume without loss of generality that \(\Phi \) is symmetric, i.e. \(\Phi (\zeta ,\eta ) = \Phi (\eta ,\zeta )^{\top }\). We denote the set of real symmetric \(\mathtt{{w}}\)-dimensional two-variable polynomial matrices with \(\mathbb R _{s}^{\mathtt{{w}}\times \mathtt{{w}}}[\zeta ,\eta ]\).

We associate with \(\Phi (\zeta ,\eta ) = \sum _{k,\ell \in \mathbb Z_{+} } \Phi _{k,\ell }\zeta ^k\eta ^\ell \in \mathbb R ^{\mathtt{{w}}\times \mathtt{{w}}}[\zeta ,\eta ]\) its coefficient matrix, defined as the infinite block-matrix:

$$\begin{aligned} {\mathrm{mat}(\Phi )}:= \begin{bmatrix} \Phi _{0,0}&\cdots&\Phi _{0,N}&\cdots \\ \vdots&\vdots&\vdots&\vdots \\ \Phi _{N,0}&\cdots&\Phi _{N,N}&\cdots \\ \vdots&\vdots&\vdots&\vdots \end{bmatrix}. \end{aligned}$$

Observe that \({\mathrm{mat}(\Phi )}\) has only a finite number of nonzero entries, and that \(\Phi (\zeta ,\eta ) = \mathrm{col}(I_{\mathtt{{w}}} , I_{\mathtt{{w}}}\zeta ,\ldots ,I_{\mathtt{{w}}}\zeta ^{k}, \ldots )^\top {\mathrm{mat}(\Phi )}\mathrm{col}(I_{\mathtt{{w}}},I_{\mathtt{{w}}}\eta ,\ldots , I_{\mathtt{{w}}}\eta ^{k},\ldots )\).
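The identity \(\Phi (\zeta ,\eta ) = \mathrm{col}(I_{\mathtt{{w}}},I_{\mathtt{{w}}}\zeta ,\ldots )^\top {\mathrm{mat}(\Phi )}\,\mathrm{col}(I_{\mathtt{{w}}},I_{\mathtt{{w}}}\eta ,\ldots )\) is easy to check numerically in the scalar case (\(\mathtt{{w}}=1\), so the blocks are \(1\times 1\)). A minimal sketch with an illustrative \(\Phi \), hypothetical helper name:

```python
import numpy as np

# Scalar illustration: Phi(zeta,eta) = 1 + zeta*eta, which induces
# Q_Phi(w) = w^2 + (dw/dt)^2.  Its coefficient matrix is
# mat(Phi) = [[Phi_00, Phi_01], [Phi_10, Phi_11]] = [[1, 0], [0, 1]].
mat_phi = np.array([[1.0, 0.0],
                    [0.0, 1.0]])

def eval_phi(mat, zeta, eta):
    """Phi(zeta,eta) = col(1, zeta, ...)^T mat(Phi) col(1, eta, ...)."""
    n = mat.shape[0]
    zc = np.array([zeta**k for k in range(n)])
    ec = np.array([eta**k for k in range(n)])
    return zc @ mat @ ec

assert np.isclose(eval_phi(mat_phi, 2.0, 3.0), 1.0 + 2.0 * 3.0)
```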

It is easy to see that \(\Phi \) is symmetric if and only if \(\mathrm{mat}(\Phi ) = (\mathrm{mat}(\Phi ))^{\top }\); in this case, we can factor \(\mathrm{mat}(\Phi ) = \tilde{M}^{\top } \Sigma _{\Phi } \tilde{M}\), with \(\tilde{M}\) a matrix having a finite number of rows, full row rank, and an infinite number of columns, and \(\Sigma _{\Phi }\) a signature matrix. This factorization leads to \(\Phi (\zeta ,\eta ) = M^{\top }(\zeta ) \Sigma _{\Phi } M(\eta )\), where \(M(\xi ):=\tilde{M}\mathrm{col}(I_\mathtt{{w}},I_\mathtt{{w}}\xi ,\ldots )\); such a factorization is called a canonical symmetric factorization of \(\Phi \). A canonical symmetric factorization is not unique; all of them can be obtained from a given one by replacing \(M(\xi )\) with \(UM(\xi )\), with \(U\in \mathbb R ^{\bullet \times \bullet }\) such that \(U^{\top } \Sigma _{\Phi } U = \Sigma _{\Phi }\).
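One possible numerical realization of a canonical symmetric factorization (not the construction used in the paper) is via an eigendecomposition of the finite part of \(\mathrm{mat}(\Phi )\); the sketch below, with hypothetical helper names, illustrates this for a scalar indefinite example.

```python
import numpy as np

def canonical_symmetric_factorization(mat_phi, tol=1e-10):
    """Factor a symmetric coefficient matrix as mat(Phi) = Mtilde^T Sigma Mtilde,
    with Mtilde full row rank and Sigma a signature matrix, by keeping the
    nonzero eigenvalues of an eigendecomposition."""
    lam, V = np.linalg.eigh(mat_phi)
    keep = np.abs(lam) > tol
    lam, V = lam[keep], V[:, keep]
    Mtilde = np.sqrt(np.abs(lam))[:, None] * V.T
    Sigma = np.diag(np.sign(lam))
    return Mtilde, Sigma

# Indefinite example: Phi(zeta,eta) = zeta*eta - 1, i.e. Q_Phi(w) = (w')^2 - w^2
mat_phi = np.array([[-1.0, 0.0],
                    [0.0, 1.0]])
Mtilde, Sigma = canonical_symmetric_factorization(mat_phi)
assert np.allclose(Mtilde.T @ Sigma @ Mtilde, mat_phi)
assert np.allclose(Sigma @ Sigma, np.eye(Sigma.shape[0]))  # signature matrix
```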

Some features of the calculus of QDFs used in this paper are the following. The first is the derivative of a QDF. The functional \(\frac{\mathrm{d}}{\mathrm{d}t}Q_{\Phi }\) defined by \((\frac{\mathrm{d}}{\mathrm{d}t}Q_{\Phi })(w):= \frac{\mathrm{d}}{\mathrm{d}t}(Q_{\Phi }(w))\) is again a QDF. It is easy to see that the two-variable polynomial matrix inducing it is \((\zeta + \eta )\Phi (\zeta ,\eta )\).
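At the coefficient level, multiplication by \(\zeta +\eta \) acts by shifting indices: \(((\zeta +\eta )\Phi )_{k,\ell }=\Phi _{k-1,\ell }+\Phi _{k,\ell -1}\) (with \(\Phi _{-1,\cdot }:=0\)). A scalar sketch of this rule (hypothetical helper name):

```python
import numpy as np

def qdf_derivative(mat_phi):
    """Coefficient matrix of (zeta + eta) * Phi(zeta, eta), scalar case:
    ((zeta+eta)Phi)_{k,l} = Phi_{k-1,l} + Phi_{k,l-1}."""
    n = mat_phi.shape[0]
    out = np.zeros((n + 1, n + 1))
    out[1:, :n] += mat_phi    # zeta * Phi : index k -> k+1
    out[:n, 1:] += mat_phi    # eta  * Phi : index l -> l+1
    return out

# Phi(zeta,eta) = 1 induces Q_Phi(w) = w^2; d/dt (w^2) = 2 w w',
# i.e. (zeta+eta)*1 = zeta + eta, with coefficients Phi'_{1,0} = Phi'_{0,1} = 1.
mat_phi = np.array([[1.0]])
d_mat = qdf_derivative(mat_phi)
assert np.allclose(d_mat, np.array([[0.0, 1.0], [1.0, 0.0]]))
```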

Next, we introduce the notion of integral of a QDF. In order to make sure that the integral exists, we assume that the QDF acts on \(\mathfrak{D }(\mathbb R ,\mathbb R ^\mathtt{{w}})\). The integral of \(Q_{\Phi }\) maps \(\mathfrak{D }(\mathbb R ,\mathbb R ^\mathtt{{w}}) \) to \(\mathbb R \) and is defined as \(\int Q_{\Phi } (w) := \int _{-\infty }^{\infty } Q_{\Phi } (w) dt\).

Finally, we show how to associate a QDF with a behavior \(\mathfrak{B }\in \mathfrak{L }^\mathtt{{w}}_\mathrm{cont}\). Let \(\mathfrak{B }=\mathrm{im}~M\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\), and let \(\Phi \in \mathbb R _{s}^{\mathtt{{w}}\times \mathtt{{w}}}[\zeta ,\eta ]\). Define \(\Phi ^{\prime } \in \mathbb R _s^\mathtt{{l} \times \mathtt {l}}[\zeta ,\eta ]\) as

$$\begin{aligned} \Phi ^{\prime }(\zeta ,\eta ) := M^{\top }(\zeta ) \Phi (\zeta ,\eta ) M(\eta ); \end{aligned}$$

if \(w\) and \(\ell \) satisfy \(w=M(\frac{\mathrm{d}}{\mathrm{d}t})\ell \), then \(Q_{\Phi }(w) = Q_{\Phi ^{\prime }}(\ell )\). The introduction of the two-variable polynomial matrix \(\Phi ^{\prime }\) allows us to study the behavior of \(Q_{\Phi }\) along \(\mathfrak{B }\) in terms of properties of the QDF \(Q_{\Phi ^\prime }\) acting on free trajectories of \(\mathfrak{C }^{\infty }(\mathbb R ,\mathbb R ^\mathtt{l})\).
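The computation of \(\Phi ^{\prime }(\zeta ,\eta )=M^{\top }(\zeta )\Phi (\zeta ,\eta )M(\eta )\) is a direct symbolic multiplication. A minimal sketch with an illustrative \(M\) and the constant supply \(\Phi =\Sigma =\mathrm{diag}(1,-1)\) (a bounded-real-type supply rate chosen purely for illustration):

```python
import sympy as sp

zeta, eta = sp.symbols('zeta eta')

# Hypothetical scalar image representation w = col(D, N)(d/dt) ell
# with D = xi + 1, N = 1, and supply rate Sigma = diag(1, -1).
M_zeta = sp.Matrix([[zeta + 1], [1]])
M_eta = sp.Matrix([[eta + 1], [1]])
Sigma = sp.diag(1, -1)

Phi_prime = sp.expand((M_zeta.T * Sigma * M_eta)[0, 0])
# (zeta+1)(eta+1) - 1 = zeta*eta + zeta + eta
assert Phi_prime == zeta * eta + zeta + eta
```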

1.4 A.4 Dissipative behaviors

 

Definition 6

Let \(\mathfrak{B }\in \mathfrak{L }^\mathtt{{w}}_\mathrm{cont}\) and \(\Sigma =\Sigma ^{\top }\in \mathbb R ^{{\mathtt{{w}}}\times {\mathtt{{w}}}}\). \(\mathfrak{B }\) is called \(\Sigma \)-dissipative if \(\int _\mathbb{R }Q_{\Sigma }(w) dt\ge 0\) for all \(w \in \mathfrak{B }\cap \mathfrak{D }(\mathbb R ,\mathbb R ^\mathtt{{w}})\). \(\mathfrak{B }\) is called strictly \(\Sigma \)-dissipative if there exists \(\varepsilon >0\) such that \(\int _\mathbb{R }Q_{\Sigma }(w) dt \ge \varepsilon \int _\mathbb{R }w^\top wdt\) for all \(w \in \mathfrak{B }\cap \mathfrak{D }(\mathbb R ,\mathbb R ^\mathtt{{w}})\). \(\mathfrak{B }\) is called strictly \(\Sigma \)-dissipative on \(\mathbb R _-\) if there exists \(\varepsilon >0\) such that \(\int _\mathbb{R _-}Q_{\Sigma }(w) dt \ge \varepsilon \int _\mathbb{R _-}w^\top wdt\) for all \(w \in \mathfrak{B }\cap \mathfrak{D }(\mathbb R _-,\mathbb R ^\mathtt{{w}})\).

Note that strict half-line dissipativity implies strict dissipativity, which in turn implies dissipativity. Dissipativity is related to the concept of storage function.

Definition 7

Let \(\Sigma =\Sigma ^\top \in \mathbb R ^{\mathtt{{w}}\times \mathtt{{w}}}\) and \(\mathfrak{B }\in \mathfrak{L }^\mathtt{{w}}_\mathrm{cont}\). Assume that \(\mathfrak{B }\) is \(\Sigma \)-dissipative; then the QDF \(Q_{\Psi }\) is a storage function if \(\frac{\mathrm{d}}{\mathrm{d}t}Q_{\Psi }(w) \le Q_{\Sigma }(w)\) for all \(w \in \mathfrak{B }\). A QDF \(Q_{\Delta }\) is a dissipation function if \(Q_{\Delta }(w) \ge 0\) for all \(w \in \mathfrak{B }\), and \( \int _\mathbb{R } Q_{\Sigma }(w) = \int _\mathbb{R } Q_{\Delta }(w)\) for all \(w \in \mathfrak{B }\cap \mathfrak{D }(\mathbb R ,\mathbb R ^{\mathtt{{w}}})\).

The following proposition gives a characterization of dissipativity in terms of storage and dissipation functions.

Proposition 8

The following conditions are equivalent:

  1. \(\mathfrak{B }\) is \(\Sigma \)-dissipative;

  2. \(\mathfrak{B }\) admits a storage function;

  3. \(\mathfrak{B }\) admits a dissipation function.

Moreover, for every dissipation function \(Q_{\Delta }\) there exists a unique storage function \(Q_{\Psi }\), and for every storage function \(Q_{\Psi }\) there exists a unique dissipation function \(Q_{\Delta }\), such that for all \(w \in \mathfrak{B }\) the dissipation equality \( \frac{\mathrm{d}}{\mathrm{d}t}Q_{\Psi }(w) = Q_{\Sigma }(w) - Q_{\Delta }(w) \) holds.

 

Proof

See [37, Proposition 5.4]. \(\square \)

Every storage function is a quadratic function of the state, in the following sense.

Proposition 9

Let \(\Sigma =\Sigma ^\top \in \mathbb R ^{\mathtt{{w}}\times \mathtt{{w}}}\) and \(\mathfrak{B }\in \mathfrak{L }^\mathtt{{w}}_\mathrm{cont}\) be \(\Sigma \)-dissipative. Let \(Q_{\Psi }\) be a storage function. Then \(Q_{\Psi }\) is a state function, i.e. for every polynomial matrix \(X\) inducing a state map for \(\mathfrak{B }\), there exists a real symmetric matrix \(K\) such that \(Q_{\Psi }(w)=\left(X\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)w\right)^{\top }K\left(X\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)w\right)\).

 

Proof

See Theorem 5.5 of [37]. \(\square \)

Now assume that \(\mathfrak{B }\) is represented in image form \(w=M\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\ell \) and that it is \(\Sigma \)-dissipative. Then it is easy to show that if \(Q_{\Psi }\) is a storage function, then for every \(X\in \mathbb R ^{\mathtt{{n}}\times \mathtt{l}}[\xi ]\) inducing a state map for \(\mathfrak{B }\) acting on the latent variable, there exists a symmetric matrix \(K\) such that \(Q_{\Psi }(w)=\left(X\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\ell \right)^{\top }K\left(X\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\ell \right)\) for every \(w\) and \(\ell \) such that \(w=M\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\ell \).

In general, there exist infinitely many storage functions; however, they all lie between two extremal ones.

Proposition 10

Let \(\mathfrak{B }\) be \(\Sigma \)-dissipative; then there exist storage functions \(\Psi _-\) and \(\Psi _+\) such that any storage function \(\Psi \) satisfies \(Q_{\Psi _-} \le Q_{\Psi } \le Q_{\Psi _+}\) along \(\mathfrak{B }\).

 

Proof

See [37, Theorem 5.7]. \(\square \)

The extremal storage functions \(Q_{\Psi _+}\) and \(Q_{\Psi _-}\) can be computed from anti-Hurwitz, respectively, Hurwitz spectral factorizations.

Proposition 11

Let \(\mathfrak{B }=\mathrm{im}~M\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)\) be \(\Sigma \)-dissipative, with \(M\) observable. Assume that \(M(-i\omega )^\top \Sigma M(i\omega )>0\) for all \(\omega \in \mathbb R \). Then the smallest and the largest storage functions \(\Psi _{-}\) and \(\Psi _{+}\) of \(\mathfrak{B }\) can be constructed as follows: let \(H\) and \(A\) be Hurwitz, respectively, anti-Hurwitz polynomial spectral factors of \(M(-\xi )^\top \Sigma M(\xi )\). Then

$$\begin{aligned} \Psi _{+}(\zeta ,\eta )&= \frac{M(\zeta )^\top \Sigma M(\eta )-A(\zeta )^{\top }A(\eta )}{\zeta +\eta } \quad \text{ and }\quad \\ \Psi _{-}(\zeta ,\eta )&= \frac{M(\zeta )^\top \Sigma M(\eta )-H(\zeta )^{\top }H(\eta )}{\zeta +\eta }. \end{aligned}$$

 

Proof

See [37, Theorem 5.7]. \(\square \)
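As a worked scalar instance of Proposition 11 (the data are purely illustrative, not taken from the paper): take \(M(\xi )=\mathrm{col}(\xi +2,1)\) and \(\Sigma =\mathrm{diag}(1,-1)\), so that \(M(-\xi )^\top \Sigma M(\xi )=3-\xi ^2>0\) on the imaginary axis; the Hurwitz and anti-Hurwitz spectral factors and the two extremal storage functions then come out by exact polynomial division.

```python
import sympy as sp

xi, zeta, eta = sp.symbols('xi zeta eta')

# Hypothetical scalar bounded-real example: B = im M(d/dt),
# M(xi) = col(xi + 2, 1), Sigma = diag(1, -1).
M = lambda s: sp.Matrix([[s + 2], [1]])
Sigma = sp.diag(1, -1)

spec = sp.expand((M(-xi).T * Sigma * M(xi))[0, 0])   # 3 - xi**2
H = xi + sp.sqrt(3)      # Hurwitz factor, root -sqrt(3)
A = xi - sp.sqrt(3)      # anti-Hurwitz factor, root +sqrt(3)
assert sp.expand(H.subs(xi, -xi) * H - spec) == 0
assert sp.expand(A.subs(xi, -xi) * A - spec) == 0

# Extremal storage functions of Proposition 11 (division is exact):
num_plus = sp.expand((M(zeta).T * Sigma * M(eta))[0, 0]
                     - A.subs(xi, zeta) * A.subs(xi, eta))
num_minus = sp.expand((M(zeta).T * Sigma * M(eta))[0, 0]
                      - H.subs(xi, zeta) * H.subs(xi, eta))
Psi_plus = sp.cancel(num_plus / (zeta + eta))     # constant 2 + sqrt(3)
Psi_minus = sp.cancel(num_minus / (zeta + eta))   # constant 2 - sqrt(3)
assert sp.simplify(Psi_plus - (2 + sp.sqrt(3))) == 0
assert sp.simplify(Psi_minus - (2 - sp.sqrt(3))) == 0
```

Both extremal storage functions are positive constants here (the state dimension is one, with state \(x=\ell \)), consistent with half-line dissipativity as in Proposition 12.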

If \(\mathtt{{m}}(\mathfrak{B })=\sigma _{+}(\Sigma )\), then the nonnegativity of all storage functions is equivalent to the half-line \(\Sigma \)-dissipativity of \(\mathfrak{B }\), as the following result shows.

Proposition 12

Let \(\mathfrak{B }\in \mathfrak{L }^\mathtt{{w}}_\mathrm{cont}\) and \(\Sigma =\Sigma ^\top \in \mathbb R ^{{\mathtt{{w}}}\times {\mathtt{{w}}}}\) be nonsingular. Let \(X\) be a minimal state map for \(\mathfrak{B }\) acting on the external variable \(w\). Assume that \(\mathtt{{m}}(\mathfrak{B })=\sigma _+(\Sigma )\). Then the following statements are equivalent.

  1. \(\mathfrak{B }\) is \(\Sigma \)-dissipative on \(\mathbb R _-\);

  2. there exists a nonnegative storage function of \(\mathfrak{B }\);

  3. all storage functions of \(\mathfrak{B }\) are nonnegative;

  4. there exists a real \(K=K^\top >0\) such that \(Q_{K}(w):=\left(X\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)w \right)^{\top } K \left(X\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)w \right)\) is a storage function of \(\mathfrak{B }\);

  5. there exists a storage function of \(\mathfrak{B }\), and every real symmetric matrix \(K\) such that \(Q_{K}(w):=\left(X\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)w \right)^{\top } K \left(X\left(\frac{\mathrm{d}}{\mathrm{d}t}\right)w \right)\) is a storage function of \(\mathfrak{B }\) satisfies \(K>0\).

 

 

Proof

See [37, Proposition 6.4]. \(\square \)

Appendix B

It was pointed out by an anonymous reviewer that the results of Sect. 4, in particular Theorem 2, allow an alternative, somewhat shorter and more streamlined proof using results from interpolation theory and the theory of reproducing kernel Hilbert spaces. Essentially, as was indicated by the reviewer, the proof of Theorem 2 can be subdivided into a number of steps that lead to a spectral factorization of the polynomial matrix \(M(-\xi )^\top \Sigma M(\xi )\), even in the more general case that the transfer matrix \(ND^{-1}\) is proper (instead of strictly proper). Some of these steps can be obtained in a straightforward way from results published before in [6]. Important ingredients in the steps mentioned above are well-known results on reproducing kernel Hilbert spaces, the notion of Potapov factors (see also [26]), and results from the theory on Schur and Nevanlinna interpolation problems.

About this article

Cite this article

Rapisarda, P., Trentelman, H.L. & Minh, H.B. Algorithms for polynomial spectral factorization and bounded-real balanced state space representations. Math. Control Signals Syst. 25, 231–255 (2013). https://doi.org/10.1007/s00498-012-0095-x
