Model Reduction of Parametric Differential-Algebraic Systems by Balanced Truncation

Abstract: We derive a procedure to apply balanced truncation to parameter-dependent differential-algebraic systems. To this end, we solve multiple projected Lyapunov equations for different parameter values to compute the Gramians that are required for the truncation procedure. As this process would lead to high computational costs if we performed it for a large number of parameters, we combine this approach with the reduced basis method, which determines a reduced representation of the Lyapunov equation solutions for the parameters of interest. Residual-based error estimators are then used to evaluate the quality of the approximations. After introducing the procedure for a general class of differential-algebraic systems, we turn our focus to systems with a specific structure, for which the method can be applied particularly efficiently. We illustrate the efficiency of our approach on several models from fluid dynamics and mechanics.


1 Introduction
In the modeling of various industrial processes one often obtains systems of differential-algebraic equations (DAEs) of high dimension. Typical examples are electrical circuits, thermal and diffusion processes, multibody systems, and certain discretized partial differential equations [17,18]. Often, these systems additionally depend on parameters that describe physical properties such as material constants and that are often subject to optimization. The resulting parameter-dependent differential-algebraic systems have the form

    E(µ) (d/dt) z(t) = A(µ) z(t) + B(µ) u(t),    y(t) = C(µ) z(t),      (1)

where E(µ), A(µ) ∈ R^{N,N}, B(µ) ∈ R^{N,m}, and C(µ) ∈ R^{p,N} depend on parameters µ ∈ D, where D ⊂ R^d. The input, the state, and the output are denoted by u(t) ∈ R^m, z(t) ∈ R^N, and y(t) ∈ R^p for t ∈ [0, ∞). The matrix E(µ) can be singular. However, throughout this paper, we assume uniform regularity, i. e., the pencil sE(µ) − A(µ), which is a matrix polynomial in the variable s, is assumed to be regular for all µ ∈ D, that is, det(sE(µ) − A(µ)) is not the zero polynomial for any µ ∈ D. Moreover, we assume that the system under consideration is uniformly asymptotically stable, i. e., the system (or the pencil sE(µ) − A(µ)) is asymptotically stable for all µ ∈ D, meaning that all finite eigenvalues of the pencil sE(µ) − A(µ) have negative real part for all µ ∈ D. Throughout the derivations provided in this work we assume that E(·), A(·), B(·), and C(·) are affine in the parameter µ, i. e.,

    E(µ) = E_0 + Σ_{k=1}^{n_E} Θ_k^E(µ) E_k,    A(µ) = A_0 + Σ_{k=1}^{n_A} Θ_k^A(µ) A_k,
    B(µ) = B_0 + Σ_{k=1}^{n_B} Θ_k^B(µ) B_k,    C(µ) = C_0 + Σ_{k=1}^{n_C} Θ_k^C(µ) C_k,      (2)

where all Θ_k^E, Θ_k^A, Θ_k^B, Θ_k^C : D → R are continuous functions of the parameter and E_k, A_k, B_k, C_k are parameter-independent matrices. For reasons of computational efficiency, we always assume that n_E, n_A, n_B, n_C ≪ N, and we also assume that the number of inputs and outputs is small compared to the dimension of the state, i. e., m, p ≪ N. For ease of notation, it is sometimes convenient to move the matrices E_0, A_0, B_0, and C_0 into the sums in (2) and multiply them with the factors Θ_0^E = Θ_0^A = Θ_0^B = Θ_0^C ≡ 1. The model (1) is also referred to as the full-order model (FOM). Often, one also considers the input/output mapping of the system in the frequency domain. This relation is typically expressed by the transfer function G(µ, s) := C(µ)(sE(µ) − A(µ))^{-1} B(µ).
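As a concrete illustration of this input/output map, the following sketch evaluates such a transfer function for a hypothetical two-dimensional system with a single affine parameter term. All matrices below are made-up illustration data, not one of the models considered in this paper:

```python
import numpy as np

E0 = np.eye(2)
A0 = np.array([[-1.0, 0.0], [0.0, -2.0]])
A1 = np.array([[-1.0, 0.0], [0.0, 0.0]])     # affine parameter term
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

def transfer_function(mu, s):
    """Evaluate G(mu, s) = C (s E - A(mu))^{-1} B via one linear solve."""
    A_mu = A0 + mu * A1
    return C @ np.linalg.solve(s * E0 - A_mu, B)

# at mu = 1, s = 0: G = C (-A(1))^{-1} B = 1/2 + 1/2 = 1
print(transfer_function(1.0, 0.0))
```

Solving one linear system per evaluation avoids forming the inverse explicitly, which is the same principle exploited at large scale later in the paper.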
Because of the high state-space dimension N of (1) it is useful to apply reduction methods to extract the essential information of the system and its solution. More precisely, we want to determine a (uniformly regular and asymptotically stable) reduced-order model (ROM)

    E_R(µ) (d/dt) z_R(t) = A_R(µ) z_R(t) + B_R(µ) u(t),    y_R(t) = C_R(µ) z_R(t),

with E_R(µ), A_R(µ) ∈ R^{r,r}, B_R(µ) ∈ R^{r,m}, and C_R(µ) ∈ R^{p,r}, where r ≪ N. The reduced state and output are z_R(t) ∈ R^r and y_R(t) ∈ R^p. The aim is to determine the surrogate model in such a way that the output of the ROM well approximates the output of the FOM for all admissible inputs u(·) and all parameters µ ∈ D. In terms of the transfer function this means that the error G(µ, ·) − G_R(µ, ·) is sufficiently small in an appropriate norm for all µ ∈ D, where G_R(µ, s) denotes the transfer function of the ROM.
In this article we focus on balanced truncation, which is one of the most popular reduction techniques. This is mainly due to the guaranteed asymptotic stability of the ROM, the existence of an error bound, and appealing numerical techniques for the involved Lyapunov equations [12,20,48,51]. Within balanced truncation, Lyapunov equations corresponding to the original system need to be solved. The solutions of these equations are called Gramians. They describe the input-to-state and state-to-output behavior and are used to construct projection matrices for the reduction. Multiplying the system matrices with these projection matrices then results in a ROM.
The methods mentioned above focus on the case E = I_N and are, however, not directly applicable in the DAE case. Even if the problem is parameter-independent, there are several challenges that one has to face: a) Since the matrix E is typically singular, the transfer function G(s) is possibly improper, i. e., it may have a polynomial part which is unbounded for growing values of |s|. If the reduced transfer function G_R(s) does not match this polynomial part, then the error transfer function G(s) − G_R(s) is not an element of the rational Hardy spaces RH_∞^{p,m} or RH_2^{p,m} (see, e. g., [60] for a definition) and the output error cannot be bounded by the input norm. Thus, a model reduction scheme for DAEs must preserve the polynomial part of the transfer function. This is addressed in [28,39,52]. b) In balanced truncation, one has to solve two large-scale Lyapunov equations. In the DAE case, one has to consider a generalization of these, the so-called projected Lyapunov equations. Thereby, the Lyapunov equations are projected onto the left and right deflating subspaces corresponding to the finite or infinite eigenvalues of the matrix pencil sE − A. This involves specific projectors which are hard to form explicitly and which might destroy the sparsity structure of the coefficient matrices. However, for DAE systems of special structure, the projectors are of a particular form that can be exploited from the numerical point of view. For details, we refer to [13,30,47,54].
Besides the approaches mentioned above, some of the issues in model reduction of DAEs are also addressed in [1,4]. There, the authors introduce an index-aware model order reduction scheme that splits the DAE system into a differential and an algebraic part. However, in this method it is often difficult to maintain sparsity of the system matrices, and hence, it is in general not feasible for many systems of large dimension. Data-driven approaches have also recently been extended to differential-algebraic systems, see, e. g., [2,26]. Finally, we would like to mention [14], which provides an overview of model order reduction for DAE systems.
Different approaches for parametric model reduction have been developed as well. Among others, there exist two main classes of such methods. First, one can determine global bases for the projection spaces of the entire parametric model, e. g., by concatenating the local bases for different parameter values, see, e. g., [6]. Second, there exist approaches for constructing local bases for a specific parameter value by combining local basis information of several prereduced models at different parameter values, e. g., by interpolating between different models [21,23]. We also refer to [9] for an extensive survey of such methods.
However, to the best of our knowledge, none of the techniques presented in [9] has been extended to the case of DAE systems yet. Therefore, the motivation of this paper is to develop a first method for this purpose based on balanced truncation. If we aimed to reduce a parameter-dependent system by balanced truncation in a naive fashion, we would have to solve the Lyapunov equations for each individual parameter of interest, which is computationally prohibitive. To avoid these computational costs, for the case E = I_N, the authors in [50] apply interpolation techniques to approximate the Gramians for all admissible parameters. The authors in [51], on the other hand, apply the reduced basis method, which is a well-established method to reduce parameter-dependent partial differential equations [31,45]. Using this method, the authors determine the approximate subspaces in which the solutions of the corresponding Lyapunov equations are contained.
In this paper we extend the reduced basis method to the case of DAE systems, where the parametric projected Lyapunov equations are solved only for a few sampling parameters. Then, based on these solutions, a reduced subspace is constructed in which the Lyapunov equation solutions for all µ ∈ D approximately live. These steps form the computationally expensive offline phase. After that, using the reduced basis representation, the Lyapunov equations can be solved much more efficiently for all µ ∈ D in the online phase. A crucial question in the offline phase is the choice of the sample parameters. Usually, a grid of test parameters is selected. For these, the error is quantified using a posteriori error estimators. Then new samples are taken at those parameters at which the error estimator predicts the largest error.
In this paper we generalize the reduced basis balanced truncation method of [51] to differential-algebraic systems with a focus on structured systems from specific application areas. The main problems we solve in this paper are the following: a) We derive error estimators for parameter-dependent projected Lyapunov equations. These can be improved if the given system is uniformly strictly dissipative. Since this condition is not always satisfied, we discuss transformations to achieve it for systems of a particular index 3 structure.
b) We discuss in detail how the polynomial part of parameter-dependent transfer functions of DAE systems can be extracted by efficiently approximating their Markov parameters.
c) We apply this approach to several application problems and illustrate its effectiveness.
Our method also follows the paradigm of decomposing the procedure into an offline and an online phase. In the offline phase we approximate the image spaces of the controllability and observability Gramians, which are the solutions of projected Lyapunov equations. These approximate image spaces are then used in the online phase to determine approximations of the Gramians, and hence the reduced-order model, efficiently. The online phase can be performed for any parameter µ ∈ D; due to the reduced dimension obtained in the offline phase, this step is very fast and can be repeated for all required parameters. Moreover, in this work, we introduce new error estimators, which are used in the offline phase to assess the quality of the reduction.
This paper is organized as follows. In Section 2, two model problems are introduced that motivate the method presented in this paper. In Section 3, we review existing concepts and approaches that are used later in this paper. Thereby, in Subsection 3.1, we recall projection techniques to decouple the differential and algebraic equations in (1). Afterwards, in Subsection 3.2, we consider model order reduction by balanced truncation for DAE systems. We further address the numerical challenges that arise in computing the solutions of the required Lyapunov equations. Since solving Lyapunov equations for every requested parameter leads to high computational costs, in Section 4 the reduced basis method is presented, which was first applied to standard Lyapunov equations in [51]. We derive two error estimators for our method; one of them is motivated by the estimator from [51], the other is an adaptation of the estimator presented in [48]. Here we also discuss the treatment of the algebraic part of the system that may lead to polynomial parts in the transfer function. Finally, in Section 5, we evaluate the method of this paper by applying it to our model problems from Section 2.
2 Model problems

Problem 1: Stokes-like DAEs
The first example is the system

    [E(µ) 0; 0 0] (d/dt) [x(t); λ(t)] = [A(µ) G(µ); G(µ)^T 0] [x(t); λ(t)] + [B_1(µ); B_2(µ)] u(t),
    y(t) = C_1(µ) x(t) + C_2(µ) λ(t),

which arises, e. g., if we discretize the incompressible Navier-Stokes equations. The parameter-independent version is presented in [39,52]. The system matrices depend on a parameter µ ∈ D, where D ⊂ R^d is the parameter domain. For fluid-flow problems, the matrices E(µ), A(µ) ∈ R^{n,n} represent the masses and the discretized Laplace operator. Naturally, it holds that E(µ) = E(µ)^T > 0 and A(µ) = A(µ)^T < 0 for all µ ∈ D, where H > 0 (H ≥ 0) denotes a positive (semi)definite matrix H and, correspondingly, H < 0 (H ≤ 0) denotes a negative (semi)definite matrix. The discrete gradient is given by G(µ) ∈ R^{n,q}, which we assume to be of full column rank for all µ ∈ D. The matrices B_1(µ) ∈ R^{n,m}, B_2(µ) ∈ R^{q,m} and C_1(µ) ∈ R^{p,n}, C_2(µ) ∈ R^{p,q} are the input and the output matrices, respectively. The state of the system at time t is given by x(t) ∈ R^n and λ(t) ∈ R^q. The vectors u(t) ∈ R^m and y(t) ∈ R^p are the input and output of the system.
In view of (2) we assume that the symmetry and definiteness of E(·) and A(·) are represented in the affine parameter structure as

    E(µ) = Σ_{k=0}^{n_E} Θ_k^E(µ) E_k,    A(µ) = Σ_{k=0}^{n_A} Θ_k^A(µ) A_k,

where Θ_k^E(·), Θ_k^A(·) are positive parameter-dependent continuous functions with the convention Θ_0^E = Θ_0^A ≡ 1, and E_k = E_k^T ≥ 0, A_k = A_k^T ≤ 0 for k ≥ 1 are parameter-independent matrices. Moreover, for reasons of computational efficiency, we always assume that n_E, n_A ≪ n.

Problem 2: Mechanical systems
The second system we consider is of the form

    [I 0 0; 0 M(µ) 0; 0 0 0] (d/dt) [x_1(t); x_2(t); λ(t)]
        = [0 I 0; −K(µ) −D(µ) −G(µ); G(µ)^T 0 0] [x_1(t); x_2(t); λ(t)] + [0; B_x(µ); 0] u(t),
    y(t) = C_x(µ) x_1(t),

which results from a linearization of the spring-mass-damper model presented in [39]. The mass, damping, and stiffness matrices M(µ), D(µ), K(µ) ∈ R^{n_x,n_x} are assumed to be symmetric and positive definite for all µ ∈ D. The matrices B_x(µ) ∈ R^{n_x,m} and C_x(µ) ∈ R^{p,n_x} are the input and the output matrices. The matrix G(µ) ∈ R^{n_x,q} reflects algebraic constraints on the system and is of full column rank. In this example, the state includes the displacement x_1(t) ∈ R^{n_x}, the velocity x_2(t) ∈ R^{n_x}, and the Lagrange multiplier λ(t) ∈ R^q.
For convenience, we also define aggregated system matrices E(µ), A(µ), B(µ), and C(µ) collecting the blocks above. With this notation, we can write the mechanical system similarly as in (4), with the difference that the off-diagonal blocks of the state matrix are not the transposes of each other.
Similarly to the first model problem, we assume that M(·), D(·), and K(·) can be written in affine form as in (2), with positive continuous coefficient functions and symmetric parameter-independent coefficient matrices.

3.1 Decoupling of differential and algebraic equations
In this section we briefly describe the projection technique for decoupling the algebraic constraints in DAEs of the form (1) from the remaining differential equations for a fixed parameter µ ∈ D and under the assumptions of regularity and asymptotic stability. For simplicity of presentation, we omit µ in this section. The details can be found in [30,47,52,53].
As shown in [16], there exist regular matrices W, T ∈ R^{N,N} that transform the system matrices into quasi-Weierstraß form, that is,

    E = W [I_{N_f} 0; 0 N] T,    A = W [J 0; 0 I_{N_∞}] T,      (7)

where J ∈ R^{N_f,N_f} and N ∈ R^{N_∞,N_∞} is nilpotent with nilpotency index ν, which coincides with the (Kronecker) index of the DAE system (1). Here, N_f is the number of finite eigenvalues of the matrix pencil sE − A and N_∞ is the number of infinite ones. Accordingly, the spectral projectors onto the left and right deflating subspaces of sE − A corresponding to the N_f finite eigenvalues are defined as

    Π_l = W [I_{N_f} 0; 0 0] W^{-1},    Π_r = T^{-1} [I_{N_f} 0; 0 0] T.

We define the differential state z_p(t) and the algebraic state z_i(t) under the assumption that the input u(·) is sufficiently smooth and that the consistency conditions are satisfied, i. e., we assume that z_i(0) := Σ_{k=0}^{ν−1} F_k B u^{(k)}(0). They satisfy z(t) = z_p(t) + z_i(t) such that z(t) solves equation (1a). We refer to z_p(t) and z_i(t) as the differential and algebraic states. These states are associated with the proper and improper controllability Gramians, respectively, which span the reachable subspaces of the differential and algebraic states. Analogously, the proper and improper observability Gramians span the corresponding observability spaces. The proper Gramians P_p and Q_p are obtained by solving the projected continuous-time Lyapunov equations

    E P_p A^T + A P_p E^T = −Π_l B B^T Π_l^T,    P_p = Π_r P_p Π_r^T,      (9)
    E^T Q_p A + A^T Q_p E = −Π_r^T C^T C Π_r,    Q_p = Π_l^T Q_p Π_l,      (10)

and the improper Gramians P_i and Q_i by solving the projected discrete-time Lyapunov equations

    A P_i A^T − E P_i E^T = (I − Π_l) B B^T (I − Π_l)^T,    Π_r P_i Π_r^T = 0,      (11)
    A^T Q_i A − E^T Q_i E = (I − Π_r)^T C^T C (I − Π_r),    Π_l^T Q_i Π_l = 0.

In Subsection 3.2.2 we will describe how these projected Lyapunov equations can be solved.
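The construction above can be made concrete on a tiny synthetic pencil: the sketch below builds E and A from prescribed blocks J (finite part) and a nilpotent N (infinite part) via the quasi-Weierstraß form and checks the projector identities Π_l E = E Π_r and Π_l A = A Π_r numerically. The matrices W, T, J, N are made-up data, and the projector formulas follow one common convention, so this is a sketch under those assumptions rather than code from the paper:

```python
import numpy as np

J = np.array([[-1.0, 1.0], [0.0, -2.0]])     # finite eigenvalues -1, -2
Nn = np.array([[0.0, 1.0], [0.0, 0.0]])      # nilpotent, index nu = 2
nf = J.shape[0]
ninf = Nn.shape[0]

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # regular transformations
T = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)

Z = np.zeros((nf, ninf))
E = W @ np.block([[np.eye(nf), Z], [Z.T, Nn]]) @ T
A = W @ np.block([[J, Z], [Z.T, np.eye(ninf)]]) @ T

# spectral projectors onto the deflating subspaces of the finite eigenvalues
D = np.block([[np.eye(nf), Z], [Z.T, np.zeros((ninf, ninf))]])
Pi_l = W @ D @ np.linalg.inv(W)
Pi_r = np.linalg.inv(T) @ D @ T

print(np.allclose(Pi_l @ Pi_l, Pi_l), np.allclose(Pi_l @ E, E @ Pi_r),
      np.allclose(Pi_l @ A, A @ Pi_r))
```

In practice one never forms W and T like this for large systems; the point is only to verify the algebraic relations the projectors must satisfy.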
In practice, the projectors Π_l and Π_r are computationally expensive to form and even more expensive to compute with, as they are in general dense matrices of large dimension. To determine them, the derivations from [33] can be applied; however, these are not numerically stable since a Jordan canonical form needs to be computed. On the other hand, the methods presented in [3], [19], or [36] may be used to generate the differential and algebraic state spaces; however, they involve the determination of matrix kernels by means of matrix decompositions. Suitable parts of the matrices involved in these decompositions are then used to determine the projection matrices Π_l and Π_r. These are in general not of sparse structure and therefore not applicable if we consider matrices of large dimension. Hence, if possible, it is advantageous to determine the projection matrices analytically. In many application fields this is possible because the matrices E and A have a special structure that can be utilized for this purpose. The examples in Section 2 have such a numerically advantageous structure. Further examples can be found in [36,54].

3.2 Model reduction of differential-algebraic systems
The aim of this section is to review methods to reduce the DAE system (1) for a fixed parameter. We utilize balanced truncation modified for projected systems, which is presented in Subsection 3.2.1. Afterwards, in Subsection 3.2.2, we summarize the ADI and Smith methods to solve the involved projected Lyapunov equations. Finally, in Subsection 3.2.3, we present an alternative approach to approximate the algebraic parts of the system (potentially leading to polynomial parts in the transfer function), which is inspired by [42].

3.2.1 Balanced truncation
The aim of this subsection is to find a reduced system (13) of dimension r ≪ N that approximates the input/output behavior of the original system (1) (for a fixed parameter).
Since the different Gramians are positive semidefinite, there exist factorizations P_p = R_p R_p^T, Q_p = L_p L_p^T, P_i = R_i R_i^T, and Q_i = L_i L_i^T. We consider the singular value decompositions

    L_p^T E R_p = U_p Σ_p V_p^T,    L_i^T A R_i = U_i [Σ_{i,1} 0; 0 0] V_i^T,

where the matrix Σ_p = diag(σ_1, …, σ_N) is a diagonal matrix with nonincreasing nonnegative entries, the proper Hankel singular values, and Σ_{i,1} is a diagonal matrix containing the nonzero improper Hankel singular values. With the matrix Σ_{p,1}, which contains the r_p largest proper Hankel singular values, the corresponding submatrices U_{p,1}, V_{p,1}, U_{i,1}, V_{i,1}, and Σ_{i,1} ∈ R^{r_i,r_i}, we construct the left and right projection matrices

    W_R = [L_p U_{p,1} Σ_{p,1}^{−1/2},  L_i U_{i,1} Σ_{i,1}^{−1/2}]    and    T_R = [R_p V_{p,1} Σ_{p,1}^{−1/2},  R_i V_{i,1} Σ_{i,1}^{−1/2}]

that balance the system, reduce the differential subsystem, and generate a minimal realization of the algebraic subsystem.

Preprint (Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg). 2023-10-26
We obtain the reduced system (13) by setting

    E_R = W_R^T E T_R,    A_R = W_R^T A T_R,    B_R = W_R^T B,    C_R = C T_R.

The generated reduced system has decoupled differential and algebraic states.
If σ_{r_p} > σ_{r_p+1}, then the ROM is asymptotically stable, and one can estimate the output error of the reduced system by

    ‖G − G_R‖_∞ ≤ 2 (σ_{r_p+1} + · · · + σ_{N_f}),      (16)

where G and G_R are the transfer functions of the original and the reduced system [?, Theorem 7.9]. The H_∞-norm is defined as ‖G‖_∞ := sup_{ω∈R} σ_max(G(iω)), where σ_max(·) denotes the maximum singular value of its matrix argument.
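For the standard case E = I_N (no projectors and no improper part), the balancing procedure above reduces to the usual square-root method, which the following sketch implements on a synthetic stable system; the matrices are made-up illustration data, not the implementation used in this paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n, r = 6, 2
rng = np.random.default_rng(1)
A = -np.diag(np.arange(1.0, n + 1)) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

def symmetric_factor(X):
    """Return Z with X ~= Z Z^T for a symmetric positive semidefinite X."""
    w, V = np.linalg.eigh(X)
    return V * np.sqrt(np.clip(w, 0.0, None))

Zp, Zq = symmetric_factor(P), symmetric_factor(Q)

# Hankel singular values and balancing projection matrices
U, s, Vt = np.linalg.svd(Zq.T @ Zp)
W = Zq @ U[:, :r] / np.sqrt(s[:r])      # left projection
T = Zp @ Vt[:r, :].T / np.sqrt(s[:r])   # right projection, W^T T = I_r
Ar, Br, Cr = W.T @ A @ T, W.T @ B, C @ T

# the error at s = 0 respects the 2 * (truncated Hankel tail) bound
err0 = abs((C @ np.linalg.solve(-A, B) - Cr @ np.linalg.solve(-Ar, Br))[0, 0])
print(err0 <= 2 * s[r:].sum() + 1e-10)
```

The DAE variant replaces E by the singular mass matrix, uses the projected Gramians, and appends the minimal realization of the algebraic subsystem as described above.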

3.2.2 Solving projected Lyapunov equations
The aim of this subsection is to present numerical techniques to solve the projected Lyapunov equations (9) and (11) in order to approximate the Gramians of system (1). For standard systems with E = I_N, there are several methods to solve the corresponding Lyapunov equations. If small-scale matrices are considered, the Bartels-Stewart algorithm [5] or Hammarling's method [29] are used. These methods, however, are inefficient in the case of large matrices. In typical applications of practical relevance, the solution of a Lyapunov equation is often of very low numerical rank, so it is desirable to compute low-rank factors of these solutions directly. State-of-the-art methods are the ADI method [34,44], the sign function method [11], and Krylov subspace methods [49]. In the literature there are also several extensions to projected Lyapunov equations, such as [30,54].
In this subsection, we utilize the ADI method to solve the projected continuous-time Lyapunov equation (9) and the generalized Smith method to solve the discrete-time Lyapunov equation (11). Here we follow the ideas from [54].
We can use the following lemma from [54] to derive an equation that is equivalent to the projected continuous-time Lyapunov equation (9).

Lemma 3.1. Let the matrix pencil sE − A with E, A ∈ R^{N,N} be regular and asymptotically stable. Let further the matrix A be nonsingular and B ∈ R^{N,m}. Assume that the left and right spectral projectors onto the finite spectrum of sE − A are denoted by Π_l, Π_r ∈ R^{N,N}. If p ∈ C with Re(p) < 0 is not an eigenvalue of the pencil sA − E, then the projected discrete-time Lyapunov equation obtained from (9) by the Cayley-type transformation with shift p is equivalent to the projected continuous-time Lyapunov equation (9), i. e., their solution sets coincide.
Lemma 3.1 motivates the ADI iteration. With

    S(p) := (A − pE)^{-1}(A + p̄E),    R(p) := (A − pE)^{-1},      (18)

the iteration reads

    P_0 := 0,    P_k := S(p_k) P_{k−1} S(p_k)^H − 2 Re(p_k) R(p_k) Π_l B B^T Π_l^T R(p_k)^H,    k = 1, 2, …      (19)

As shown in [54], the following proposition provides the convergence of the iteration (19).
Proposition 3.2. Let the matrix pencil sE − A with E, A ∈ R^{N,N} be regular and asymptotically stable (i. e., all its finite eigenvalues have negative real part) and let B ∈ R^{N,m}. Assume that the left and right spectral projectors onto the finite spectrum of sE − A are denoted by Π_l, Π_r ∈ R^{N,N}. Suppose that we are given a sequence of shift parameters (p_k)_{k≥0} with Re(p_k) < 0 for all k ≥ 0 and with p_{k+ℓ} = p_k for some ℓ ≥ 1 and all k = 0, 1, 2, …. Then the iteration (19) converges to the solution P_p of the projected Lyapunov equation (9).
Remark 1. The periodicity of the shift parameters ensures that they fulfill a non-Blaschke condition that is sufficient for the convergence of the ADI iteration [37].
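For intuition, the following sketch runs the unprojected low-rank ADI iteration in the special case E = I_N, Π_l = I_N with real, periodically reused shifts (the projected variant additionally multiplies the right-hand side by Π_l and solves with the shifted pencil A − p_k E). The matrices and shifts below are made-up illustration data:

```python
import numpy as np

n = 8
rng = np.random.default_rng(2)
A = -np.diag(np.arange(1.0, n + 1)) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 2))

shifts = [-1.0, -3.0, -6.0, -1.0, -3.0, -6.0]    # periodic shift sequence
Z_cols, W = [], B.copy()
for p in shifts:
    V = np.linalg.solve(A + p * np.eye(n), W)    # one shifted solve per step
    W = W - 2.0 * p * V                          # updated low-rank residual factor
    Z_cols.append(np.sqrt(-2.0 * p) * V)
Z = np.hstack(Z_cols)

P_adi = Z @ Z.T
residual = np.linalg.norm(A @ P_adi + P_adi @ A.T + B @ B.T)
print(residual < 1e-2 * np.linalg.norm(B @ B.T))
```

Each step costs one sparse shifted solve, and the iterate is never formed densely; only the tall factor Z is stored, which is what makes the method attractive at large scale.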
To work with the ADI iteration more efficiently, our aim is to compute low-rank factors of P_p, i. e., we aim to compute a tall and skinny matrix Z such that P_p ≈ Z Z^H. We can represent the iteration (19) by the low-rank factors

    Z_0 := [] ∈ R^{N,0},    Z_k := [S(p_k) Z_{k−1},  √(2κ(p_k)) R(p_k) Π_l B],      (20)

where κ(p_k) := −Re(p_k), so that P_k = Z_k Z_k^H. We note that the factors S(p_j) and R(p_k) satisfy commutation relations for all j, k = 0, 1, 2, …, which allow reordering the factors in (20). We further define the products S(p_j) R(p_{j+1}), j = 1, …, k.
Using (20), we obtain a low-rank representation of the ADI iterates. It remains to solve the discrete-time Lyapunov equation (11). Under the assumption that A is nonsingular, (11) is equivalent to the transformed discrete-time Lyapunov equation obtained by multiplying with A^{-1} from the left and A^{-T} from the right. This equation can be solved with the Smith method [54]. Since the involved matrix (I − Π_r)A^{-1}E is nilpotent with the nilpotency index ν of the matrix N of the quasi-Weierstraß form of the pencil sE − A, the iteration stops after ν steps and yields the unique solution P_i. We note that we have to solve multiple linear systems with the matrix A to obtain the solution P_i; a sparse LU decomposition of A can be used to do this efficiently. Instead of computing the full matrix P_i, we can also generate low-rank factors directly.

Remark 2. In many relevant applications, the solution P_i can be calculated explicitly by making use of specific block structures in sE − A. Moreover, the improper Hankel singular values are often zero (namely, when the FOM's transfer function is strictly proper), in which case we do not have to solve the improper Lyapunov equations at all; see also [53].
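The termination argument can be illustrated on a transformed discrete-time Lyapunov equation X − F X F^T = B B^T with a nilpotent F, playing the role of the nilpotent operator above: the Smith iteration reproduces the finite Neumann sum and stops after ν steps. F and B below are made-up data:

```python
import numpy as np

F = np.diag(np.array([1.0, 2.0]), k=1)   # nilpotent with index nu = 3
B = np.array([[1.0], [1.0], [1.0]])
nu = 3

X, M = np.zeros((3, 3)), B @ B.T
for _ in range(nu):                       # X = sum_{k<nu} F^k B B^T (F^T)^k
    X += M
    M = F @ M @ F.T

print(np.allclose(X - F @ X @ F.T, B @ B.T))
```

Because F^ν = 0, no truncation error is incurred; the iteration is exact after ν steps regardless of the right-hand side.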

3.2.3 Alternative approach to determine the algebraic parts
Since the reduced basis method presented later in this paper is applied only to the differential part of the system, in this section we discuss ways to construct reduced representations of the algebraic part. One way is to use the reduction provided by the improper Gramians as explained in Subsection 3.2.1. This is a suitable choice for parameter-independent problems. For parameter-dependent problems, however, it is difficult to build a reduced representation if the improper part depends on the parameters: the discrete-time Lyapunov equations would have to be solved for each parameter value of interest in the online phase, which can be too expensive. For this reason, in this subsection, we present an alternative approach that is also viable in the parametric setting and allows an efficient evaluation in the online phase (see also Subsection 4.3).
Our method is inspired by [42], in which the polynomial part of the transfer functions of DAE systems of index at most two is approximated; the methodology can, however, also be extended to systems of higher index. Consider the transfer function G(s) of system (1) (for a fixed parameter), which can be decomposed into its strictly proper part and a polynomial part,

    G(s) = G_sp(s) + G_poly(s),

with T, W, and N with index of nilpotency ν as in (7); see [52] for the details. Hence, for W^{-1}B =: [B̃_1; B̃_2] and C T^{-1} =: [C̃_1, C̃_2], partitioned conforming to the partitioning in (7), the polynomial part can be rewritten as

    G_poly(s) = Σ_{k=0}^{ν−1} M_k s^k,    M_k := −C̃_2 N^k B̃_2,    k = 0, …, ν − 1,

where the M_k denote the k-th Markov parameters of the transfer function. As shown in [2], the strictly proper part G_sp(iω) converges to zero for ω → ∞. For this reason we can approximate the Markov parameters if we consider sufficiently large values of ω ∈ R. We exemplify the methodology for a few cases.
The index one case. In this case we simply have M_0 ≈ G(iω) for a sufficiently large value of ω ∈ R and, moreover, M_k = 0 for k ≥ 1.

The index two case. In this case, M_k = 0 for k ≥ 2, so that G(iω) ≈ M_0 + iω M_1 for large ω. Hence, for sufficiently large values ω ∈ R we can approximate

    M_1 ≈ Im(G(iω))/ω,    M_0 ≈ Re(G(iω)).

The index three case. In the index three case, we approximate the polynomial part of the form G_poly(s) = M_0 + M_1 s + M_2 s^2. Inserting iω ∈ iR into the polynomial part of the transfer function G_poly provides a real and an imaginary part,

    G_poly(iω) = (M_0 − ω^2 M_2) + iω M_1.

Then the Markov parameters are approximated by

    M_1 ≈ Im(G(iω))/ω,    M_2 ≈ −(Re(G(iω_1)) − Re(G(iω_2)))/(ω_1^2 − ω_2^2),    M_0 ≈ Re(G(iω_1)) + ω_1^2 M_2,

for sufficiently large ω, ω_1, and ω_2 with ω_1 ≠ ω_2.

System realization from Markov parameters. If the polynomial part G_poly(s) = Σ_{k=0}^{ν−1} M_k s^k has been extracted and all Markov parameters M_k, k = 0, …, ν − 1, have been determined, we can realize it by a DAE system as follows. Starting from a factorization of M_0, we define, for k = 1, …, ν − 1, corresponding block matrices that yield a nilpotent chain realizing the s^k terms. In this way we obtain an overall realization of the algebraic part of the system.
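The index-one rule above can be checked numerically: for a toy transfer function with a known constant polynomial part (the matrices and M_0 below are made-up data, not a model from this paper), a single sample at a high frequency recovers M_0:

```python
import numpy as np

A = np.array([[-1.0, 0.0], [0.0, -4.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[1.0, 1.0]])
M0 = np.array([[5.0]])                  # known polynomial part (index one)

def G(s):
    """Toy transfer function: strictly proper part plus constant M0."""
    return C @ np.linalg.solve(s * np.eye(2) - A, B) + M0

omega = 1e6
M0_approx = G(1j * omega).real          # strictly proper part decays as omega grows
print(abs(M0_approx[0, 0] - 5.0) < 1e-6)
```

Taking the real part discards the residual contribution of the strictly proper part, which for this example decays even faster than 1/ω.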

4 Reduced basis method
In this section, we return to the problem of reducing parameter-dependent systems. Since the solution of the Lyapunov equations is the most expensive part of balanced truncation, we aim to limit the number of Lyapunov equation solves. To achieve this, the reduced basis method presented by Son and Stykel in [51] can be applied. Besides the regularity and asymptotic stability of the matrix pencil sE(µ) − A(µ) for all µ ∈ D, we impose the following assumption on problem (1) for the rest of the paper. Assumption 4.1 ensures that the solutions of the parameter-dependent projected Lyapunov equations depend continuously on µ ∈ D. Then, for a fixed reduced order, also the ROM depends continuously on µ.
In the following we focus on the reduction of the differential part of the DAE system (1). The main idea consists of finding a reduced representation of the solutions of the Lyapunov equations (9) of the form

    P_p(µ) ≈ V(µ) Z(µ) Z(µ)^H V(µ)^T,      (21)

where V(µ) ∈ R^{N,r_V} has orthonormal columns with Π_r(µ)V(µ) = V(µ) and where Π_r(µ) denotes the projector onto the right deflating subspace of sE(µ) − A(µ) for the parameter value µ. Under the assumption that V(µ)^T E(µ)V(µ) is invertible and the pencil sV(µ)^T E(µ)V(µ) − V(µ)^T A(µ)V(µ) is regular and asymptotically stable (this condition can be guaranteed a priori by the strict dissipativity assumptions in Subsection 4.2), the low-rank factor Z(µ) solves the reduced Lyapunov equation

    (V(µ)^T E(µ)V(µ)) Z(µ)Z(µ)^H (V(µ)^T A(µ)V(µ))^T + (V(µ)^T A(µ)V(µ)) Z(µ)Z(µ)^H (V(µ)^T E(µ)V(µ))^T = −V(µ)^T Π_l(µ) B(µ) B(µ)^T Π_l(µ)^T V(µ).      (22)

The condition Π_r(µ)V(µ) = V(µ) replaces the condition P_p(µ) = Π_r(µ)P_p(µ)Π_r(µ)^T in (9). In practice, we aim to determine a matrix V_glob ∈ R^{N,r_glob} with N ≫ r_glob that contains the information of V(µ) for all µ ∈ D globally. When we have determined such a V_glob, we set

    V(µ) := orth(Π_r(µ) V_glob),      (23)

where orth(Π_r(µ)V_glob) denotes a matrix whose columns form an orthonormal basis of the range of Π_r(µ)V_glob. Once V_glob has been found, the ROM for a particular parameter µ ∈ D can be determined very efficiently in the so-called online phase, where one simply solves (22) with (21) and (23) to determine the two Gramians and then projects the system as in Subsection 3.2.1.
The computation of the reduced basis V_glob is done beforehand in the offline phase. There, we solve the full-order Lyapunov equations only for those parameters at which their solutions are currently approximated worst, and use them to enrich the current reduced basis. A posteriori error estimators are employed on a test parameter set to find these parameters efficiently.
We determine V_glob by considering a test parameter set D_Test ⊂ D. We begin by computing a low-rank factor Z(µ_1) at µ_1 ∈ D_Test such that P_p(µ_1) = Z(µ_1)Z(µ_1)^H solves the projected Lyapunov equation (24), i. e., equation (9) for the parameter µ_1. The first reduced basis is then given by V_glob := orth(Z(µ_1)). Next, we determine the test parameter µ_2 ∈ D_Test for which the solution P_p(µ_2) of the Lyapunov equation (24) is approximated worst by using (21), (22), and (23). For that, one of the error estimators ∆_V(µ) presented in Subsection 4.1 is utilized. For this parameter µ_2, we solve the projected Lyapunov equation (24) with µ = µ_2 to obtain the low-rank factor Z(µ_2). Then we define the new reduced basis by setting V_glob := orth([V_glob, Z(µ_2)]). This procedure is repeated until the error estimator ∆_V is smaller than a prescribed tolerance for every test parameter in D_Test. This method results in Algorithm 1.
Algorithm 1 Reduced basis method for projected Lyapunov equations (offline phase) Input: Dynamical system (1), test parameter set D Test , tolerance Tol.

11: Set µ := arg max_{µ ∈ D_Test \ M} ∆_V(µ).
12:
13: end while

Remark 3. Under the assumption that the projected Lyapunov equations (9), (10) are solved exactly for all parameters µ ∈ D using the reduced Lyapunov equations of the form (22), the H_∞ error bound (16) is also valid in the entire parameter domain. Unfortunately, this bound is no longer valid if numerical approximation errors affect the numerically computed Gramians, which is already the case in the parameter-independent setting. A general and in-depth analysis of the effect of such errors on the Hankel singular values is still an open problem. On the other hand, the numerically computed Hankel singular values can still be used to obtain an approximate error bound.
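To make the greedy offline loop of Algorithm 1 concrete, the following sketch implements it for the standard case E = I_N, Π_r = I_N, using the true Lyapunov residual norm in place of the estimator ∆_V; the parametric family A(µ) = A_0 + µA_1 and all data are synthetic assumptions for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 10
rng = np.random.default_rng(3)
A0 = -np.diag(np.arange(2.0, n + 2)) + 0.05 * rng.standard_normal((n, n))
A1 = -np.eye(n)
B = rng.standard_normal((n, 2))
A = lambda mu: A0 + mu * A1                    # affine parameter dependence

D_test = np.linspace(0.0, 2.0, 21)
tol, V = 1e-8, np.zeros((n, 0))

def residual_norm(mu, V):
    """Residual of the Lyapunov equation at mu for the reduced-basis solution."""
    if V.shape[1] == 0:
        return np.linalg.norm(B @ B.T)
    Ar, Br = V.T @ A(mu) @ V, V.T @ B          # reduced equation as in (22)
    Y = solve_continuous_lyapunov(Ar, -Br @ Br.T)
    P_approx = V @ Y @ V.T
    return np.linalg.norm(A(mu) @ P_approx + P_approx @ A(mu).T + B @ B.T)

while True:                                     # greedy offline loop
    errs = [residual_norm(mu, V) for mu in D_test]
    if max(errs) < tol:
        break
    mu_star = D_test[int(np.argmax(errs))]      # worst-approximated parameter
    P = solve_continuous_lyapunov(A(mu_star), -B @ B.T)   # one full-order solve
    V = np.linalg.qr(np.hstack([V, P]))[0]      # enrich and re-orthonormalize
print(V.shape[1] <= n, max(residual_norm(mu, V) for mu in D_test) < tol)
```

In this tiny example the basis saturates quickly; at large scale one enriches with low-rank factors instead of full solutions and replaces the exact residual by the cheap estimators of Subsection 4.1.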

4.1 Error estimation
We want to estimate the error 𝓔(µ) := P_p(µ) − Z(µ)Z(µ)^H with the help of the residual

    R(µ) := E(µ) Z(µ)Z(µ)^H A(µ)^T + A(µ) Z(µ)Z(µ)^H E(µ)^T + Π_l(µ) B(µ) B(µ)^T Π_l(µ)^T.

Lemma 4.2. Let µ ∈ D be given. Let the matrix pencil sE(µ) − A(µ) with E(µ), A(µ) ∈ R^{N,N} be regular and asymptotically stable. Assume that the left and right spectral projectors onto the finite spectrum of sE(µ) − A(µ) are denoted by Π_l(µ), Π_r(µ) ∈ R^{N,N}. Let the controllability Gramian P_p(µ) ∈ R^{N,N} be defined as in (8) and let it be approximated as described in equation (21). Then the error 𝓔(µ) := P_p(µ) − Z(µ)Z(µ)^H solves the projected error equation

    E(µ) 𝓔(µ) A(µ)^T + A(µ) 𝓔(µ) E(µ)^T = −R(µ),    𝓔(µ) = Π_r(µ) 𝓔(µ) Π_r(µ)^T.      (25)

Proof. The condition 𝓔(µ) = Π_r(µ)𝓔(µ)Π_r(µ)^T follows from P_p(µ) = Π_r(µ)P_p(µ)Π_r(µ)^T and Π_r(µ)Z(µ) = Z(µ). For the remaining equation we consider its left-hand side,

    E(µ)𝓔(µ)A(µ)^T + A(µ)𝓔(µ)E(µ)^T
        = E(µ)P_p(µ)A(µ)^T + A(µ)P_p(µ)E(µ)^T − (E(µ)Z(µ)Z(µ)^H A(µ)^T + A(µ)Z(µ)Z(µ)^H E(µ)^T)
        = −Π_l(µ)B(µ)B(µ)^T Π_l(µ)^T − (R(µ) − Π_l(µ)B(µ)B(µ)^T Π_l(µ)^T) = −R(µ),

which proves the statement.
Similarly to [51], we consider the linear system

    L(µ) vec(P_p(µ)) = −vec(Π_l(µ)B(µ)B(µ)^T Π_l(µ)^T),    L(µ) := A(µ) ⊗ E(µ) + E(µ) ⊗ A(µ),

where the operator ⊗ denotes the Kronecker product and vec the vectorization operator that stacks the columns of a matrix on top of one another. This linear system is equivalent to the projected Lyapunov equation (24). We note that L(µ) is a singular matrix and that unique solvability is enforced by the additional restriction given by the projection with Π_r^⊗(µ) := Π_r(µ) ⊗ Π_r(µ). Using the linear system representation and the abbreviations e(µ) := vec(𝓔(µ)) and r(µ) := vec(R(µ)), we can rewrite the error equation (25) as

    L(µ) e(µ) = −r(µ),    e(µ) = Π_r^⊗(µ) e(µ).

This equation is equivalent to

    L(µ) Π_r^⊗(µ) e(µ) = −r(µ),    e(µ) = Π_r^⊗(µ) e(µ).

Due to the unique solvability of the latter, we obtain

    e(µ) = −(L(µ) Π_r^⊗(µ))^† r(µ),

where (·)^† denotes the Moore-Penrose inverse. Then we can estimate

    ‖𝓔(µ)‖_F = ‖e(µ)‖_2 ≤ ‖r(µ)‖_2 / σ_min^+(L(µ) Π_r^⊗(µ)) ≤ ‖R(µ)‖_F / α(µ) =: ∆_V(µ).      (28)

Here, σ_min^+(·) denotes the smallest positive singular value of its matrix argument, and α(µ) is an easily computable lower bound of σ_min^+(L(µ)Π_r^⊗(µ)). ∆_V(µ) is the first possible error estimator; it provides a rather conservative bound in practice.
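In the standard case E = I_N (where L(µ) is nonsingular and Π_r^⊗ = I), the estimator reduces to ‖R‖_F / σ_min(L), which the following sketch verifies against the true error for a crude rank-one Gramian approximation; all data below are synthetic:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 5
rng = np.random.default_rng(4)
A = -np.diag(np.arange(1.0, n + 1)) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
P = solve_continuous_lyapunov(A, -B @ B.T)          # "exact" Gramian

# crude low-rank approximation: keep only the dominant eigenpair of P
w, U = np.linalg.eigh(P)
Z = U[:, -1:] * np.sqrt(w[-1])

R = A @ (Z @ Z.T) + (Z @ Z.T) @ A.T + B @ B.T       # residual as in (25)
L = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))   # vec form of the Lyapunov operator
sigma_min = np.linalg.svd(L, compute_uv=False)[-1]

true_error = np.linalg.norm(P - Z @ Z.T, 'fro')
Delta_V = np.linalg.norm(R, 'fro') / sigma_min      # residual-based upper bound
print(true_error <= Delta_V)
```

Forming L densely is of course only feasible for toy dimensions; this is why the paper works with a cheaply computable lower bound α(µ) instead of σ_min^+ itself.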
To improve the estimator (28), we consider the error bound presented in [48]. Assume that 𝓔̃(µ) ≈ 𝓔(µ) is an error estimate that is not necessarily an upper bound. An error bound ∆_V^{(2)}(µ) is then obtained from 𝓔̃(µ) together with the residual R̃(µ) that is associated with the correspondingly corrected approximation. It remains to obtain an error estimate 𝓔̃(µ) for all µ ∈ D_Test. To do so, we solve the error equation (25) approximately by modifying the projected ADI iteration from Subsection 3.2.2 and the reduced basis method in Algorithm 1.
The right-hand side of the error equation (25) can be written as the product B_l(µ)B_r(µ)^H, where For simplicity of notation we consider the modified ADI iteration for the parameter-independent case such that E(µ) ≡ E, B_l(µ) ≡ B_l, B_r(µ) ≡ B_r, and Π_l(µ) ≡ Π_l. Analogously to the derivation in Section 3.2.2 (and under the same conditions), it can be shown that the iteration converges to the solution E of (25), where S(p_k) and R(p_k) are defined as in (18) for a shift parameter p_k ∈ C with Re(p_k) < 0. The factors Z_{l,k} and Z_{r,k} can be written as We include the modified projected ADI iteration into the reduced basis method presented in Algorithm 1 by solving the error equation (25) in Steps 2 and 8 to obtain the two factors Z_l(µ), Z_r(µ) with Z_l(µ)Z_r(µ)^H ≈ E(µ). The orthonormal basis computation in Steps 4 and 10 is replaced by V_glob := orth([V_glob, Z_l(µ)]). Note that here, the columns of Z_l(µ) and Z_r(µ) span the same subspace. As error estimator we use ∆^(1)_V. After we have determined the orthonormal basis, we solve equation (25) on the corresponding subspace to get an approximate error Ẽ(µ), which we use for the error estimator ∆^(2)_V(µ). Remark 4. If the factor Z(µ) ∈ R^{N,n_Z} is already of large dimension n_Z, the right-hand side of the error equation (25) described by B_l(µ) and B_r(µ) is of even larger rank 2n_Z + m, and hence, the solution E(µ) cannot in general be well-represented by low-rank factors. This means that the error space basis might be large, which leads to a time-consuming error estimation.
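To make the structure of such a reduced basis loop concrete, here is a strongly simplified greedy sketch for parametric Lyapunov equations (unprojected, with a toy affine parameter dependence; all names are ours, and the paper's Algorithm 1 additionally uses the spectral projectors, the ADI iteration, and the estimators above):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, orth

rng = np.random.default_rng(2)
n = 20
A0 = -np.eye(n) - 0.05 * rng.standard_normal((n, n))
A1 = -np.diag(rng.random(n) + 0.1)          # toy affine term, cf. (2)
B = rng.standard_normal((n, 1))

def A_of(mu):
    return A0 + mu * A1                      # strictly dissipative for mu > 0

def residual_norm(mu, V):
    # Galerkin solve on span(V), then full-order residual of (the toy
    # analogue of) the Lyapunov equation
    Ar, Br = V.T @ A_of(mu) @ V, V.T @ B
    Pr = solve_continuous_lyapunov(Ar, -Br @ Br.T)
    P_tilde = V @ Pr @ V.T
    R = A_of(mu) @ P_tilde + P_tilde @ A_of(mu).T + B @ B.T
    return np.linalg.norm(R, "fro")

train = np.linspace(0.5, 1.5, 10)            # training parameter set
V = orth(np.linalg.solve(A_of(train[0]), B)) # initial basis
for _ in range(5):                           # greedy iterations
    res = [residual_norm(mu, V) for mu in train]
    if max(res) < 1e-8:
        break
    worst = train[int(np.argmax(res))]       # worst-approximated parameter
    P_full = solve_continuous_lyapunov(A_of(worst), -B @ B.T)
    U, s, _ = np.linalg.svd(P_full)          # enrich with dominant directions
    V = orth(np.hstack([V, U[:, s > 1e-12 * s[0]]]))

assert max(residual_norm(mu, V) for mu in train) < 1e-6
```

The residual-based indicator plays the role of ∆_V here; in the paper it is replaced by the cheaper estimators, and the snapshot solves are carried out by the projected ADI iteration instead of a dense Lyapunov solver.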

Strictly dissipative systems
For the model examples from Section 2 we can make use of their structure to derive the bound α(µ) in a computationally advantageous way. To this end, we impose an additional assumption on the matrix pencils sE(µ) − A(µ) in (4) and (6) that are formed from submatrices of E(µ) and A(µ).
We will consider this condition for our motivating examples from Section 2 and determine a corresponding lower bound α(µ).
As shown in [53], the projection matrices are given as where Even though for the Stokes-like system in (4) we have A_S(µ) = A(µ) < 0 for all µ ∈ D, in this section we already treat the more general case of a uniformly strictly dissipative set of matrix pencils sE(µ) − A(µ) with nonsymmetric A(µ), as this case will arise later in Subsubsection 4.2.2.
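The explicit projector formulas from [53] are given in the displayed equations above; as a hedged sketch (our notation, not necessarily the exact formula of the paper), the key properties can be checked numerically for the standard Stokes-type construction Π = I − G (G^T E^{-1} G)^{-1} G^T E^{-1}, assuming E symmetric positive definite and G of full column rank:

```python
import numpy as np

rng = np.random.default_rng(3)
n, q = 8, 3
M = rng.standard_normal((n, n))
E = M @ M.T + n * np.eye(n)          # SPD velocity-space block
G = rng.standard_normal((n, q))      # constraint matrix, full column rank

Einv_G = np.linalg.solve(E, G)
Pi = np.eye(n) - G @ np.linalg.solve(G.T @ Einv_G, Einv_G.T)

assert np.allclose(Pi @ Pi, Pi)      # Pi is an (oblique) projector
assert np.allclose(Pi @ G, 0)        # Pi annihilates the constraint directions
```

The second property, Π(µ)G(µ) = 0, is exactly the condition exploited in the proof of the eigenvalue bound below.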
We continue by finding a value for α(µ).
Proof. Let Π(µ) be as in (31). Making use of the condition Π(µ)G(µ) = 0, we can deduce Denoting the smallest positive eigenvalue of a symmetric matrix by λ^+_min(•), Corollary 3.1.5 from [32] and the eigenvalue properties of the Kronecker product given in [32,35] (see also [51]), as well as a Rayleigh quotient argument, lead to the estimate Since computing the eigenvalues of E(µ) and A(µ) for all requested parameters µ can be expensive, there are several ways to find cheap lower bounds of 2λ_min(E(µ))λ_min(−A_S(µ)), some of which are derived in [51]. Based on these bounds, we can, e. g., estimate =: α(µ), where μ is an arbitrary fixed parameter in D and where we make use of the convention that Here we compute the eigenvalues of E(μ) and A_S(μ) only once to find a lower bound α(µ) of σ^+_min(Π_l^⊗(µ)L(µ)Π_r^⊗(µ)) for every parameter µ ∈ D.
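The core of the estimate can be verified numerically in the unprojected case: for symmetric positive definite E and a matrix A whose symmetric part A_S is negative definite, the smallest singular value of L = E ⊗ A + A ⊗ E is bounded below by 2 λ_min(E) λ_min(−A_S). A sketch (our toy data):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
W = rng.standard_normal((n, n))
E = W @ W.T + n * np.eye(n)                              # SPD
A = -(W.T @ W + 2 * n * np.eye(n)) + rng.standard_normal((n, n))
A_S = 0.5 * (A + A.T)
assert np.max(np.linalg.eigvalsh(A_S)) < 0               # strictly dissipative

L = np.kron(E, A) + np.kron(A, E)
sigma_min = np.linalg.svd(L, compute_uv=False)[-1]
alpha = 2 * np.min(np.linalg.eigvalsh(E)) * np.min(np.linalg.eigvalsh(-A_S))
assert sigma_min >= alpha - 1e-10                        # the claimed lower bound
```

The argument behind the check: the symmetric part of L is E ⊗ A_S + A_S ⊗ E, so by Weyl's inequality λ_min(−L_S) ≥ 2 λ_min(E) λ_min(−A_S), and σ_min(L) ≥ λ_min(−L_S). The projected, parametric version in the paper additionally uses Π(µ)G(µ) = 0 and the cheap bounds from [51].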

Mechanical systems
Next we turn to the mechanical system from Subsection 2.2.There, the corresponding matrix pencil set is not uniformly strictly dissipative.However, it can be made uniformly strictly dissipative by appropriate generalized state-space transformations as we will show now.
Here we apply a transformation presented in [21,43]. We observe that, by adding productive zeros and for an auxiliary function γ : D → (0, ∞) yet to be chosen, the system (5) is equivalent to the system where x(t) is equal to x_1(t) and (d/dt)x(t) is equal to x_2(t) from system (5). The last equation is a direct consequence of the equation above and poses no extra restrictions. These equations can be written as a first-order system. By defining the matrices and replacing G(µ) by , we generate a system of the form (4), so that we can apply the methods from Subsection 4.2.1 to this kind of system in the following, if γ(µ) is chosen properly. This is possible because, even if A(µ) is nonsymmetric, the projector Π(µ) in (31) is the same in both the left and right spectral projectors Π_l(µ) and Π_r(µ), and hence the estimate in Theorem 4.4 is still valid. Only an estimate of the form (32) is not possible, since the matrices E(µ) and A_S(µ) cannot be written as affine decompositions in which all summands are positive or negative semidefinite, respectively. So in our numerical experiments we make use of Theorem 4.4 directly.
Proof. As shown in [43], the pencil set {sE(µ) − A(µ) | µ ∈ D} Using the estimate λ_min(XY) ≥ λ_min(X)λ_min(Y) for symmetric and positive definite matrices X and Y of conforming dimension [?, Theorem 4], we arrive at the choice for γ(µ) given by Because of the symmetry and positive definiteness of D(µ)K(µ)^{-1}D(µ) and the submultiplicativity of the norm, we obtain By Weyl's lemma [32,35], we also have Therefore, condition (34) is satisfied if we choose γ(µ) such that This can be achieved by the choice To avoid computing the eigenvalues of M(µ), D(µ), K(µ) for every parameter µ, we further use the estimates that are a consequence of Weyl's lemma. Similar estimates are also valid for M(µ) and K(µ).
Then we can further estimate γ(µ) as while ensuring that we can choose γ(µ) > 0 for all µ ∈ D.
The benefit of the above estimate is that the extremal eigenvalues of the matrices M_k, D_k, and K_k have to be computed only once in the beginning; afterwards, its evaluation is cheap, since

Treatment of the algebraic parts
The goal of this subsection is to discuss the reduction of the algebraic part of the system, which is potentially also parameter-dependent. To do so, we first precompute specific matrices in the offline phase, which will later be used to efficiently construct the Markov parameters in the online phase for a particular parameter configuration, and then determine a reduced representation of the algebraic subsystem using the construction from the last paragraph of Subsection 3.2.3. In order for this construction to be efficient, we assume that the parameter influence has an additional low-rank structure, i. e., with regard to the affine decomposition (2), we assume that E_k, A_k, B_k, and C_k are low-rank matrices for k ≥ 1, i. e., they admit low-rank factorizations To determine the Markov parameters, for a fixed set of sample points ω_i ∈ R, i = 1, …, n_S, we aim to evaluate G(µ, iω_i) efficiently for all needed parameters µ ∈ D. In other words, we aim for an efficient procedure to evaluate the transfer function G(µ, iω) for a fixed value ω ∈ R and varying µ. Due to (37), and since n_E and n_A are assumed to be very small, also the matrix
It holds that With the help of the Sherman–Morrison–Woodbury identity we obtain The matrices appearing in this expression, such as the terms of the form (iωE_0 − A_0)^{-1}B_j, are all of small dimension and can be precomputed in the offline phase.
In the online phase, the small matrix Ω(µ, iω)^{-1} + V^T(iωE_0 − A_0)^{-1}U can be formed (under the assumption that all necessary matrices are invertible) and used in small, dense system solves in order to calculate G(µ, iω) with little effort.
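The offline/online split can be sketched as follows (a toy instance with hypothetical names; the paper's Ω(µ, iω) collects the parameter functions of the affine decomposition, here we simply take A(µ) = A_0 + µ U V^T):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p, r = 30, 2, 2, 3
A0 = -np.eye(n) - 0.05 * rng.standard_normal((n, n))
E0 = np.eye(n)
U = 0.1 * rng.standard_normal((n, r))   # low-rank factors of the parametric part
V = 0.1 * rng.standard_normal((n, r))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

w, mu = 1.0, 0.7
S = 1j * w * E0 - A0                    # parameter-independent part
Omega = -mu * np.eye(r)                 # i*w*E(mu) - A(mu) = S + U Omega V^T

# offline: quantities involving S^{-1} (factor S once, reuse for all mu)
SinvB, SinvU = np.linalg.solve(S, B), np.linalg.solve(S, U)

# online: only an r x r system depends on mu (Sherman-Morrison-Woodbury)
core = np.linalg.inv(Omega) + V.T @ SinvU
G_smw = C @ (SinvB - SinvU @ np.linalg.solve(core, V.T @ SinvB))

# reference: direct full-order solve of (i*w*E(mu) - A(mu)) X = B
G_ref = C @ np.linalg.solve(1j * w * E0 - (A0 + mu * U @ V.T), B)
assert np.allclose(G_smw, G_ref)
```

For each new parameter only the r × r "core" matrix changes, which is exactly the reason why the low-rank assumption (37) makes the online evaluation of the Markov parameters cheap.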

Numerical results
In this section, we present the numerical results for the two differential-algebraic systems from Section 2. The computations have been done on a computer with two Intel Xeon Silver 4110 CPUs running at 2.1 GHz and equipped with 192 GB total main memory. The experiments use MATLAB® R2021a (9.3.0.713579) and examples and methods from M-M.E.S.S.-2.2 [46].

Problem 1: Stokes equation
We consider a system that describes the creeping flow in capillaries or porous media. It has the following structure with appropriate initial and boundary conditions. The position in the domain Ω ⊂ R^ℓ is described by ζ ∈ Ω, and t ≥ 0 is the time. For simplicity, we use a classical solution concept and assume that the external force f : Ω × [0, ∞) → R^ℓ is continuous and that the velocities v : Ω × [0, ∞) → R^ℓ and pressures p : Ω × [0, ∞) → R satisfy the necessary smoothness conditions. The parameter µ ∈ D represents the dynamic viscosity. We discretize system (38) by finite differences as shown in [39,53] and add an output equation. Then we obtain a discretized system of the form (4), which is of index 2. The matrix A(µ) = µ·A ∈ R^{n,n} depends on the viscosity parameter µ. The matrices are generated using the function stokes FVM, and we choose the dimensions n = 3280, q = 1681, m = p = 1, and the parameter set D = [1/2, 3/2]. The test parameter set D_Test consists of ten equidistant points within D, i. e., D_Test = {1/2 + (1/9)·k | k = 0, …, 9}. The matrices B_2 and C_2 are zero matrices. The reduced basis method from Section 4 produces bases of dimension 24 for both projected continuous-time Lyapunov equations. This basis generation is done in the offline phase, which takes 2.7·10^3 seconds for this example. For the first iteration of the reduced basis method, corresponding to the proper controllability Lyapunov equation, we evaluate the errors and their estimates from equations (28) and (29). They are presented in Figure 1, where the error is evaluated by where P_p^acc(µ) denotes an accurate approximation of the exact solution P_p(µ). For evaluating the error, the Lyapunov equations are also solved with the residual tolerance 10^{-15} to obtain an accurate estimate of the exact solution. After the first iteration step, we obtain an error that is already smaller than the tolerance tol = 10^{-4}. Additionally, we see that the error estimator ∆^(1)_V leads to more conservative error bounds.
After we have determined the bases, we approximate the proper controllability and observability Gramians to apply balanced truncation in the online phase for two different parameters, µ = 1.28 and µ = 0.65, where the first parameter is the eighth entry of the test parameter set D_Test. The reduced systems for both parameters are of dimension r = 10 and consist only of differential states. The online phases last 3.7·10^{-2} and 1.9·10^{-2} seconds, respectively, for this example. Figures 2a and 2b depict the transfer functions of the full and the reduced systems and the corresponding errors. We observe that the error is smaller than 10^{-8} for all ω in [10^{-4}, 10^4]. Additionally, in Figures 3a and 3b, we show the output y(t) of the full system and the output y_R(t) of the reduced system for an input u(t) = −2 − sin(t), t ∈ [0, 10], and the initial condition z(0) = 0. We also depict the error ∥y(t) − y_R(t)∥_2, which is smaller than 10^{-8} in the entire time range. Hence, we can observe that the system behavior is well-approximated by the reduced system. Now, to demonstrate that the methods presented in this paper are also suitable for systems with an improper transfer function, we modify the system in such a way that the matrices B_2 and C_2 are chosen as B_2 = [0 … 0 1]^T and C_2 = [1 0 … 0 1]. That way, we obtain more complex restrictions on G^T x(t) and also evaluate the pressure described by λ(t) to generate an output y(t). We additionally assume that the external force represented by the input function B_1(µ)u(t) varies its magnitude depending on the viscosity of the system, so that the input matrix B_1(µ) = (1 + µ)·B_1 is parameter-dependent, and hence the polynomial part of the transfer function is parameter-dependent as well.
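The balancing step applied in the online phase can be sketched, in a strongly simplified unconstrained (ODE) setting, by the classical square-root method; the paper's version operates on the projected proper Gramian factors produced by the ADI iteration instead, and all choices below are our own toy data:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(6)
n, r = 10, 4
A = -np.eye(n) - 0.1 * rng.standard_normal((n, n))   # stable
B = rng.standard_normal((n, 4))
C = rng.standard_normal((4, n))

P = solve_continuous_lyapunov(A, -B @ B.T)           # controllability Gramian
Q = solve_continuous_lyapunov(A.T, -C.T @ C)         # observability Gramian
S = cholesky(P, lower=True)                          # P = S S^T
R = cholesky(Q, lower=True)                          # Q = R R^T

U, hsv, Vt = svd(R.T @ S)                            # Hankel singular values
W = R @ U[:, :r] / np.sqrt(hsv[:r])                  # left projection matrix
T = S @ Vt[:r, :].T / np.sqrt(hsv[:r])               # right projection matrix
Ar, Br, Cr = W.T @ A @ T, W.T @ B, C @ T             # balanced, truncated ROM

def tf(Amat, Bmat, Cmat, s):
    return Cmat @ np.linalg.solve(s * np.eye(Amat.shape[0]) - Amat, Bmat)

# classical balanced-truncation error bound: 2 * (sum of truncated HSVs)
err = np.linalg.norm(tf(A, B, C, 1j) - tf(Ar, Br, Cr, 1j), 2)
assert err <= 2 * hsv[r:].sum() + 1e-10
```

In the DAE setting, the truncation additionally separates differential and algebraic states, which is why the reduced systems reported here consist only of differential states.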
Again, in the offline phase, we execute the reduced basis method to generate bases that approximate the proper controllability and observability spaces (i. e., the image spaces of the corresponding proper Gramians) of this system. This phase takes 9.3·10^3 seconds for this example. In Figures 4a and 4b, the errors and error estimates in the two stages of the reduced basis method corresponding to the proper controllability space are shown. For evaluating the error, the Lyapunov equations are again solved with the residual tolerance 10^{-15} to obtain an accurate estimate of the exact solution.
We can observe that after the first iteration step, the error is significantly larger than the tolerance 10^{-4}, as shown in Figure 4a. Additionally, we see that the error estimator ∆^(2)_V provides an almost sharp bound of the actual error, while ∆^(1)_V leads to more conservative error bounds. The second iteration of the reduced basis method and the corresponding errors are depicted in Figure 4b. In this case, the error as well as the estimators are smaller than the tolerance tol = 10^{-4}. The final basis dimensions for the projected continuous-time Lyapunov equations are 48 and 54.
The next step is to use the bases to generate approximations of the system Gramians and to apply balanced truncation, which leads to reduced systems of dimension 25, consisting of 23 differential states and 2 algebraic ones, for both parameters µ = 1.28 and µ = 0.65. The online phase lasts 3.2 and 3.1 seconds if we use balanced truncation via the improper Gramians to reduce the improper system components, and 2.4 seconds for both parameters if we extract the polynomial part of the transfer function. However, in this example, the parameter dependency does not have a low-rank structure. Thus, for the determination of the polynomial part, it is necessary to solve one linear system of equations of full order in the online phase for each parameter of interest. Explicit formulas for the improper Gramians are also described in [53]. For a fixed parameter, an explicit formula is also provided in (39); compare with (14). This formula replaces the computation of the low-rank factors of the improper Gramians in the online phase. In our setting, (39) can be evaluated efficiently since all involved matrices can be precomputed and then appropriately scaled with µ in the parametric setting.
We demonstrate the quality of the reduced systems by evaluating the error of the transfer functions.
In the following, we denote the reduced system (error system) with an algebraic part generated by balanced truncation as reduced_BT (error_BT), and the reduced system (error system) with an algebraic part obtained by approximating the polynomial part of the transfer function as reduced_TF (error_TF). We show the sigma plots of the original and reduced transfer functions as well as the error for two parameters. The first parameter, depicted in Figure 5a, is µ = 1.28 ∈ D_Test, which is an element of the test parameter set; hence we already know that the error estimators are smaller than the tolerance and the proper controllability space is well-approximated. The second parameter is µ = 0.65 ∈ D, and the corresponding results are shown in Figure 5b. The polynomial part of the transfer function is clearly observable, since it does not tend to zero for growing values of ω. We observe that for both parameters, the error is smaller than 10^{-5} in the entire frequency band [10^{-4}, 10^6], and that the parameter chosen from the test parameter set does not lead to significantly better approximations than an arbitrarily chosen parameter from the parameter set D. This allows the conclusion that the basis V_glob approximates the proper controllability space well for all parameters in D. Additionally, we can observe that both procedures to approximate the improper part of the system lead to satisfactory results.
Finally, to illustrate that the output y is well approximated by the reduced output y_R, in Figures 6a and 6b we evaluate the norm of the output error ∥y(t) − y_R(t)∥_2 for t ∈ [0, 100] for the test parameter µ = 1.28 and the parameter µ = 0.65. We choose the input function u(t) = 5 − 5·cos(t) and the initial condition z(0) = [x(0); λ(0)] = 0 in such a way that the consistency conditions are satisfied. We obtain an output error that is smaller than 10^{-5} for all t ∈ [0, 100] for both evaluated parameters. Again, we observe that both approximations of the algebraic part of the system lead to results of similar quality.

Problem 2: Mechanical system
As an example for system (5), we consider a constrained mass-spring-damper system, which is depicted in Figure 7 and which is similar to the one considered in [59]. It consists of three chains of masses that are connected by the mass m_{3ℓ+1}. The matrices are generated using the function triplechain MSD from the M-M.E.S.S.-2.2 toolbox, see [46], where the mass matrix M is built as The stiffness matrix is built as with We set ℓ = 200, so that the dimension of M and K is n = 601, and the matrix G ∈ R^{n,1} is a zero matrix except for the entries G_{1,1} = 1 and G_{n,1} = −1. Hence, the first and the last mass are additionally connected by a rigid structure, as shown in Figure 7. The first-order system is then of dimension 2n + 2 = 1204 after applying the transformation presented in Subsubsection 4.2.2. The input matrix B_x ∈ R^{n,1} is chosen as a zero matrix with one nonzero entry, (B_x)_{450,1} = 1, and the output matrix C_x ∈ R^{1,n} is given by C_x = B_x^T. Such systems occur in the field of damping optimization, see [15,57-59], where the damping matrix D ∈ R^{n,n} consists of an internal damping D_int and some external dampers that need to be optimized. In our example, we choose Rayleigh damping as a model for the internal damping, i. e., D_int = ηM + ρK, where we set η = 0.01 and ρ = 0.02. To describe the external damping, we assume that every mass is connected to a grounded damper: the masses m_1, …, m_ℓ are connected by dampers with viscosity µ_1, the masses m_{ℓ+1}, …, m_{2ℓ} by dampers with viscosity µ_2, and the masses m_{2ℓ+1}, …, m_{3ℓ} by dampers with viscosity µ_3, as depicted in Figure 7. The mass m_{3ℓ+1} is connected to a grounded damper with fixed viscosity 1. The external dampers can be described by the matrices where 0_{n1,n2} denotes the zero matrix in R^{n1,n2}. The overall damping matrix can then be constructed as where µ = [µ_1, µ_2, µ_3] ∈ D collects the viscosities of the dampers, which are optimized and thus variable in our model. This model leads to a differential-algebraic system of index 3. We generate the index-2 surrogate system as shown in Subsubsection 4.2.2, using the value γ(µ) = (1/2)γ*(µ) from (36).
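The affine structure of the damping matrix described above can be sketched in a small toy assembly (hypothetical dimensions and toy M, K; the grounded damper at the coupling mass is omitted for brevity):

```python
import numpy as np

ell = 4
n = 3 * ell + 1                                     # three chains plus coupling mass
M = np.eye(n)                                       # toy mass matrix
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # toy SPD stiffness matrix
eta, rho = 0.01, 0.02
D_int = eta * M + rho * K                           # Rayleigh damping

def C_block(i):
    # grounded dampers of chain i: a diagonal indicator matrix
    d = np.zeros(n)
    d[i * ell:(i + 1) * ell] = 1.0
    return np.diag(d)

def D_of(mu):
    # affine parameter dependence: D(mu) = D_int + sum_i mu_i * C_i
    return D_int + sum(m * C_block(i) for i, m in enumerate(mu))

D = D_of([0.05, 0.1, 0.2])
assert np.allclose(D, D.T)                          # symmetric
assert np.min(np.linalg.eigvalsh(D)) > 0            # SPD for positive viscosities
```

This is exactly the kind of affine decomposition (2) that the reduced basis method exploits: the blocks C_i are parameter-independent and only the scalar viscosities vary.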
We consider two parameter settings. In the first one, the system is damped more strongly, and in the second one more weakly. The latter setting is used to illustrate that the slightly damped system is more difficult to approximate by projection spaces of small dimension, which poses a challenge from the numerical point of view. For both parameter settings, the reduced models are obtained by truncating all state variables corresponding to proper Hankel singular values with σ_{p,k}/σ_{p,1} < 10^{-8}. The resulting reduced systems are of dimension 63, where all states are differential ones. The online phases take 6.4·10^{-1} and 1.3·10^{-1} seconds, respectively, for this example. In Figure 9, we depict the original, the reduced, and the error transfer functions. Additionally, we evaluate the outputs of the original and the reduced system, y and y_R. The results are shown in Figure 10, where we observe the norm of the output error ∥y(t) − y_R(t)∥_2 for t ∈ [0, 50]. We choose the input function u(t) = 1 − cos(t) and the initial condition z(0) = [x(0); λ(0)] = 0, so that the consistency conditions are satisfied. We observe that the output error is smaller than 10^{-2} for all t ∈ [0, 50]. Note that for damping optimization examples, the proper controllability and observability spaces are often hard to approximate by spaces of small dimension, see [10,56]. Hence, the tolerances are chosen rather large for this kind of example to achieve satisfying results.

Setting 2: Weaker external damping
Now, we consider a parameter set with viscosities of small magnitude, that is, D = [0.01, 0.1]^3. The test parameter set D_Test consists of 50 randomly selected parameters from D. To generate the test parameters, we again use the rand function from MATLAB, so that the k-th parameter is given by µ = 0.01 + 0.09·rand(3, 1) with a fixed seed.
First, we apply the offline phase of the reduced basis method, which takes 6.9·10^3 seconds. When computing the proper controllability space, two iterations are needed to generate a subspace of dimension 449 that leads to error estimates ∆^(2)_V(µ) of at most 5.6·10^{-4}. For the proper observability space, the largest value of ∆^(2)_V is equal to 5.8·10^{-4}, and hence sufficient accuracy is achieved. As for the first parameter setting, the error estimator ∆^(2)_V approximates the error E(•) better than the estimator ∆^(1)_V. Hence, we use the error estimator ∆^(2)_V within the reduced basis method. We apply the online phase for the parameters µ = [0.0886, 0.0972, 0.0882] ∈ D_Test ⊂ D (rounded to 4 digits) and µ = [0.01, 0.02, 0.01] ∈ D to derive a reduced-order model of dimension 78 for the first parameter and of dimension 178 for the second one. The online phases need 2.21 and 2.04 seconds, respectively. Figure 12 depicts the original, the reduced, and the error transfer functions. Additionally, we evaluate the norm of the output error ∥y(t) − y_R(t)∥_2 for t ∈ [0, 50] in Figure 13, where the input function u(t) = 1 − cos(t) and the initial condition z(0) = [x(0); λ(0)] = 0 are chosen in such a way that the consistency conditions are satisfied. We observe that the output error is smaller than 10^{-4} for all t ∈ [0, 50].

Conclusions
This paper has addressed the model reduction of parameter-dependent differential-algebraic systems by balanced truncation.To apply the balancing procedure we have utilized specific projectors to eliminate the algebraic constraints.This has enabled us to compute the necessary Gramians efficiently by solving projected Lyapunov equations.
To handle the parameter dependency, we have applied the reduced basis method, which is split into an offline phase and an online phase. In the offline phase, we have computed the basis of a subspace that approximates the solution space of the parametric Lyapunov equations. To assess the quality of this subspace, we have derived two error estimators. Afterwards, in the online phase, we have solved a reduced Lyapunov equation to obtain an approximation of the Gramian for a parameter value of interest efficiently. Therefore, balanced truncation for this parameter value can also be carried out fast.
This method has been illustrated by numerical examples of index two and three. In particular, we were often able to reduce the associated Lyapunov equations to very small dimensions. We have also evaluated our error estimators, of which the second one can often estimate the error almost exactly.

Assumption 4 . 1 .
We assume that D ⊂ R^d is a bounded and connected domain. We assume further that the matrix-valued functions E(•), A(•), B(•), C(•) in (1), as well as the left and right spectral projector functions Π_l(•) and Π_r(•) onto the deflating subspaces associated with the finite eigenvalues of (sE − A)(•), depend continuously on the parameter µ.

Figure 1 :
Figure 1: Error and error estimates of the approximated controllability Gramians for the Stokes system (4) after the first iteration of the reduced basis method.

Figure 2 :
Figure 2: Results for the reduction of the Stokes system (4).
Output plots of the original and reduced system and the corresponding error for the test parameter µ = 1.28.
Output plots of the original and reduced system and the corresponding error for the parameter µ = 0.65.

Figure 3 :
Figure 3: Output and output error of the original and reduced Stokes system (4).
First step of the reduced basis method. Second step of the reduced basis method.

Figure 4 :
Figure 4: Error and error estimates of the approximated Gramians for the Stokes system (4) with improper parts.

Figure 5 :
Figure 5: Results for the reduction of the Stokes system (4) with improper parts.
Output plots of the original and reduced system and the corresponding error for the test parameter µ = 1.28.Output plots of the original and reduced system and the corresponding error for the parameter µ = 0.65.

Figure 6 :
Figure 6: Output and output error of the original and reduced Stokes system (4) with improper parts.
Output plots of the original and reduced system and the corresponding error for the test parameter µ = [0.2622, 0.1175, 0.5169]. Output plots of the original and reduced system and the corresponding error for the parameter µ = [0.15, 0.15, 0.5].

Figure 10 :
Figure 10: Output and output error of the original and reduced system (5) with stronger external damping.

Figure 11 :
Preprint (Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg), 2023. Sigma plots of the original, reduced, and error transfer functions for the test parameter µ = [0.0886, 0.0972, 0.0882] and for the parameter µ = [0.01, 0.02, 0.01]. The magnifying glass illustrates that the reduced-order model captures the strong oscillations in the sigma plot very well.

Figure 12 :
Figure 12: Results for the reduction of the mechanical system (5) with weaker external damping.
Output plots of the original and reduced system and the corresponding error for the test parameter µ = 0.089, 0.097, 0.088 .
Output plots of the original and reduced system and the corresponding error for the parameter µ = 0.01, 0.02, 0.01 .

Figure 13 :
Figure 13: Output and output error of the original and reduced mechanical system (5) with weaker external damping.