Accelerated construction of projection-based reduced-order models via incremental approaches

We present an accelerated greedy strategy for the training of projection-based reduced-order models for parametric steady and unsteady partial differential equations. Our approach exploits the hierarchical approximate proper orthogonal decomposition to speed up the construction of the empirical test space for least-square Petrov-Galerkin formulations, a progressive construction of the empirical quadrature rule based on a warm start of the non-negative least-square algorithm, and a two-fidelity sampling strategy to reduce the number of expensive greedy iterations. We illustrate the performance of our method for two test cases: a two-dimensional compressible inviscid flow past a LS89 blade at moderate Mach number, and a three-dimensional nonlinear mechanics problem to predict the long-time structural response of the standard section of a nuclear containment building under external loading.


Projection-based model reduction of parametric systems
In the past few decades, several studies have shown the potential of model order reduction (MOR) techniques to speed up the solution to many-query and real-time problems, and ultimately to enable the use of physics-based three-dimensional models for design and optimization, uncertainty quantification, and real-time control and monitoring tasks. The distinctive feature of MOR methods is the offline/online computational decomposition: during the offline stage, high-fidelity (HF) simulations are employed to generate an empirical reduced-order approximation of the solution field and a parametric reduced-order model (ROM); during the online stage, the ROM is solved to estimate the solution field and relevant quantities of interest for several parameter values. Projection-based ROMs (PROMs) rely on the projection of the equations onto a suitable low-dimensional test space. Successful MOR techniques should hence achieve significant online speedups at acceptable offline training costs. This work addresses the reduction of the offline training costs of PROMs for parametric steady and unsteady partial differential equations (PDEs).
We are interested in the solution to steady parametric conservation laws. We denote by μ the vector of p model parameters in the compact parameter region P ⊂ R^p; given the domain Ω ⊂ R^d (d = 2 or d = 3), we introduce the Hilbert spaces (X, ‖·‖) and (Y, |||·|||) defined over Ω. Then, we consider problems of the form

find u_μ ∈ X : R_μ(u_μ, v) = 0, ∀ v ∈ Y, μ ∈ P,   (1)

where R : X × Y × P → R is the parametric residual associated with the PDE of interest. We here focus on linear approximations, that is, we consider reduced-order approximations of the form

û_μ = Z α_μ,   (2)

where Z : R^n → X is a linear operator, and n is much smaller than the size of the HF model; α : P → R^n is the vector of generalized coordinates. We here exploit the least-square Petrov-Galerkin (LSPG, [1]) ROM formulation proposed in [2], which is well-adapted to approximating advection-dominated problems: the approach relies on the definition of a low-dimensional empirical test space Ŷ ⊂ Y; furthermore, it relies on the approximation of the HF residual R through hyper-reduction [3] to enable fast online calculations of the solution to the ROM. In more detail, we consider an empirical quadrature (EQ) procedure [4,5] for hyper-reduction: EQ methods recast the problem of finding a sparse quadrature rule to approximate R as a sparse representation problem and then resort to optimization algorithms to find an approximate solution. Following [4], we resort to the non-negative least-square (NNLS) method to find the quadrature rule.
We further consider the application to unsteady problems: to ease the presentation, we consider one-step time discretizations based on the time grid {t^(k)}_{k=0}^K. Given μ ∈ P, we seek the sequence u_μ = {u^(k)_μ}_{k=1}^K such that the discrete residual equations (3) hold, for all μ ∈ P. As for the steady-state case, we consider linear ansatzes of the form u^(k)_μ ≈ Z α^(k)_μ, where Z : R^n → X is a linear time- and parameter-independent operator, and α^(0), ..., α^(K) : P → R^n are obtained by projecting Eq. (3) onto a low-dimensional test space. To speed up online costs, we also replace the HF residual in (3) with a rapidly-computable surrogate through the same hyper-reduction technique considered for steady-state problems.
Following the seminal work by Veroy [6], numerous authors have resorted to greedy methods to adaptively sample the parameter space, with the ultimate aim of reducing the number of HF simulations performed during the training phase. Algorithm 1 summarizes the general methodology. First, we initialize the reduced-order basis (ROB) Z and the ROM based on an a priori sampling of the parameter space; second, we repeatedly solve the ROM and we estimate the error over a range of parameters P_train ⊂ P; third, we compute the HF solution for the parameter that maximizes the error indicator; fourth, if the error is above a certain threshold, we update the ROB Z and the ROM, and we iterate; otherwise, we terminate. Note that the algorithm depends on an a posteriori error indicator: if the indicator is a rigorous a posteriori error estimator, we might apply the termination criterion directly to it (and hence save one HF solve). The methodology has been extended to unsteady problems in [7]: the method in [7] combines a greedy search driven by an a posteriori error indicator with proper orthogonal decomposition (POD, [8,9]) to compress the temporal trajectory.

Algorithm 1 Weak-greedy algorithm
1: Initialize the ROB Z and the ROM based on an a priori sample P* of the parameter space; initialize the snapshot set S*.
...
4: Compute the estimate û_μ and evaluate the error indicator Δ_μ for all μ ∈ P_train.
5: Compute u^hf_{μ*} for μ* = arg max_{μ ∈ P_train} Δ_μ; update P* and S*.
6: Update the ROB Z and the ROM.
...
end if
10: end for
As observed by several authors, greedy methods enable effective sampling of the parameter space [10]; however, they suffer from several limitations that might ultimately limit their effectiveness compared to standard a priori sampling. First, Algorithm 1 is inherently sequential and cannot hence benefit from parallel architectures. Second, Algorithm 1 requires the solution to the ROM and the evaluation of the error indicator for several parameter values at each iteration of the offline stage; similarly, it requires the update of the ROM, i.e., the trial and test bases, the reduced quadrature, and possibly the data structures employed to evaluate the error indicator. These observations motivate the development of more effective training strategies for MOR.

Contributions and relation to previous work
We propose an acceleration strategy for Algorithm 1 based on three ingredients: (i) a hierarchical construction of the empirical test space for LSPG ROMs based on the hierarchical approximate POD (HAPOD, [11]); (ii) a progressive construction of the empirical quadrature rule based on a warm start of the NNLS algorithm; (iii) a two-fidelity sampling strategy that is based on the application of the strong-greedy algorithm (see, e.g., [12, section 7.3]) to a dataset of coarse simulations. Regarding (iii), sampling based on coarse simulations is employed to initialize Algorithm 1 (cf. Line 1) and ultimately reduce the number of expensive greedy iterations. We illustrate the performance of our method for two test cases: a two-dimensional compressible inviscid flow past a LS89 blade at moderate Mach number, and a three-dimensional nonlinear mechanics problem to predict the long-time structural response of the standard section of a nuclear containment building (NCB) under external loading.
Our method shares several features with previously-developed techniques. Incremental POD techniques have been extensively applied to avoid the storage of the full snapshot set for unsteady simulations (see, e.g., [11] and the references therein): here, we adapt the incremental approach described in [11] to the construction of the test space for LSPG formulations. Chapman [13] extensively discussed the parallelization of the NNLS algorithm for MOR applications: we envision that our method can be easily combined with the method in [13] to further reduce the training costs. Several authors have also devised strategies to speed up the greedy search through the vehicle of a surrogate error model [14,15]; our multi-fidelity strategy extends the work by Barral [16] to unsteady problems and to a compressible flow test that is more challenging from the perspective of the HF solver. We remark that our approach is similar in scope to the work by Benaceur [17], who devised a progressive empirical interpolation method (EIM, [18]) for hyper-reduction of nonlinear problems. Finally, we observe that multi-fidelity techniques have been extensively considered for non-intrusive MOR (see, e.g., [19] and the references therein).
The paper is organized as follows. In "Projection-based model reduction of parametric systems", we review relevant elements of the construction of MOR techniques based on LSPG projection for steady conservation laws; we further address the construction of Galerkin ROMs for problems of the form (3). Then, in "Accelerated construction of PROMs", we present the accelerated strategy for both classes of ROMs. "Numerical results" illustrates the performance of the method for two model problems. "Conclusions" draws some conclusions and discusses future developments.

Projection-based model reduction of parametric systems
"Preliminary definitions and tools" summarizes notation that is employed throughout the paper and reviews three algorithms (POD, strong-greedy, and the active set method for NNLS problems) that are used afterwards. "Least-square Petrov-Galerkin formulation of steady problems" reviews the LSPG formulation employed in the present work for steady conservation laws and illustrates the strategies employed for the definition of the quadrature rule and the empirical test space. Finally, "Vanilla POD-greedy for Galerkin time-marching ROMs" addresses model reduction of unsteady systems of the form (3).

Preliminary definitions and tools
We denote by T_hf = ({x_j^hf}_{j=1}^{N_nd}, T) the HF mesh of the domain Ω with nodes {x_j^hf}_j and connectivity matrix T; we introduce the elements {D_k}_{k=1}^{N_e} and the facets {F_j}_{j=1}^{N_f} of the mesh, and we define the open set F̃_j in Ω as the union of the elements of the mesh that share the facet F_j, with j = 1, ..., N_f. We further denote by X_hf the HF space associated with the mesh T_hf and we define N_hf := dim(X_hf).
Exploiting the previous definitions, we can introduce the HF residual R^hf, defined for all w, v ∈ X_hf. To shorten notation, in the remainder, we do not explicitly include the restriction operators in the residual. The global residual can be viewed as the sum of local element-wise residuals and local facet-wise residuals, which can be evaluated at a cost that is independent of the total number of elements and facets, and is based on local information. The nonlinear infinite-dimensional statement (1) translates into the high-dimensional problem:

find u^hf_μ ∈ X_hf : R^hf_μ(u^hf_μ, v) = 0, ∀ v ∈ X_hf.   (5)

Since (5) is nonlinear, its solution requires solving a nonlinear system of N_hf equations in N_hf unknowns. Towards this end, we here resort to the pseudo-transient continuation (PTC) strategy proposed in [20].
We use the method of snapshots (cf. [8]) to compute POD eigenvalues and eigenvectors. Given the snapshot set {u_k}_{k=1}^{n_train} ⊂ X_hf and the inner product (·,·)_pod, we define the Gramian matrix C ∈ R^{n_train×n_train}, C_{k,k'} = (u_k, u_{k'})_pod, and we define the POD eigenpairs {(λ_n, ζ_n)}_n with λ_1 ≥ λ_2 ≥ ... ≥ λ_{n_train} ≥ 0. In our implementation, we orthonormalize the modes, that is, (ζ_n, ζ_n)_pod = 1 for n = 1, ..., n_train. In the remainder, we write POD({u_k}_{k=1}^{n_train}, (·,·)_pod) to refer to the application of POD to the snapshot set {u_k}_{k=1}^{n_train}. The number of modes n can be chosen adaptively by ensuring that the retained energy content is above a certain threshold (see, e.g., [12, Eq. (6.12)]).
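The method of snapshots can be sketched as follows; the dense inner-product matrix, the energy-based truncation, and all names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def pod_method_of_snapshots(U, W=None, tol=1e-4):
    """Method of snapshots: U is (N_hf, n_train); W is the SPD matrix of the
    inner product (u, v)_pod = u^T W v (identity if None). Returns orthonormal
    modes Z (N_hf, n) and eigenvalues lam, with n chosen so that the retained
    energy exceeds 1 - tol."""
    W = np.eye(U.shape[0]) if W is None else W
    C = U.T @ (W @ U)                    # Gramian C_{k,k'} = (u_k, u_{k'})_pod
    lam, V = np.linalg.eigh(C)           # ascending eigenvalues
    lam, V = lam[::-1], V[:, ::-1]       # sort descending
    energy = np.cumsum(lam) / np.sum(lam)
    n = int(np.searchsorted(energy, 1.0 - tol) + 1)
    Z = U @ V[:, :n]
    for i in range(n):                   # normalize: (zeta_i, zeta_i)_pod = 1
        Z[:, i] /= np.sqrt(Z[:, i] @ (W @ Z[:, i]))
    return Z, lam[:n]
```

For a rank-deficient snapshot matrix, the adaptive criterion recovers the exact rank when the tolerance is tight.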
We further recall the strong-greedy algorithm: the algorithm takes as input the snapshot set {u_k}_{k=1}^{n_train}, an integer n, an inner product (·,·)_sg and the induced norm ‖·‖_sg = √((·,·)_sg), and returns a set of n indices I_sg ⊂ {1, ..., n_train}. The dimension n can be chosen adaptively by ensuring that the projection error is below a given threshold tol_sg > 0,

max_k ‖Π_{Z_n^⊥} u_k‖_sg ≤ tol_sg,   (8)

where Z_n denotes the n-dimensional space obtained after n steps of the greedy procedure in Algorithm 2, Z_n^⊥ is the orthogonal complement of the space Z_n, and Π_{Z_n^⊥} is the orthogonal projection operator onto Z_n^⊥. We conclude this section by reviewing the active set method [21] that is employed to find a sparse solution to the non-negative least-square problem

min_{ρ ∈ R_+^N} ‖G ρ − b‖_2,   (9)

for a given matrix G ∈ R^{M×N} and a vector b ∈ R^M. Algorithm 3 reviews the computational procedure.

Algorithm 2 Strong-greedy algorithm
Inputs: {u_k}_{k=1}^{n_train} snapshot set, n size of the desired reduced space, (·,·)_sg inner product.
Outputs: I_sg indices of selected snapshots.
1: Initialize I_sg = ∅ and Z_0 = {0}.
2: for i = 1, ..., n do
3: Compute i* = arg max_{i ∈ I_train} ‖Π_{Z_{i−1}^⊥} u_i‖_sg.

4: Update Z_i = Z_{i−1} ⊕ span{u_{i*}} and I_sg = I_sg ∪ {i*}.
5: end for

Algorithm 3 Active set method for (9)
Inputs: G ∈ R^{M×N}, b ∈ R^M, δ > 0 tolerance, P initial set of indices.
Output: ρ approximate solution to (9), it number of iterations to meet the convergence criterion.

...
8: Set P = P ∪ {i*}.
...
18: end while
...
21: end while

In the remainder, we write [ρ, it] = NNLS(G, b, δ, P) to refer to the application of Algorithm 3. Note that the method takes as input a set of indices P, which is initialized with the empty set in the absence of prior information, to initialize the process. Given the matrix G = [g_1, ..., g_N], the vector x ∈ R^N, and the set of indices P = {p_i}_{i=1}^m ⊂ {1, ..., N}, we use the notation G(:, P) := [g_{p_1}, ..., g_{p_m}] ∈ R^{M×m} and x(P) = vec((x)_{p_1}, ..., (x)_{p_m}) ∈ R^m; we denote by #P the cardinality of the discrete set P, and we introduce the complement of P in {1, ..., N} as P^c = {1, ..., N} \ P. Given the vector x ∈ R^N and the set of indices I ⊂ {1, ..., N}, the notation [α, i*] = min_{i ∈ I} (x)_i signifies that α = min_{i ∈ I} (x)_i and i* ∈ I realizes the minimum, α = (x)_{i*}. The constant ε > 0 is intended to avoid division by zero and is set to 2^{−1022}. The computational cost of Algorithm 3 is dominated by the cost to repeatedly solve the least-square problem at Line 11: in "Numerical results", we hence report the total number of least-square solves needed to achieve convergence (cf. output it in Algorithm 3).
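A minimal Lawson-Hanson-style sketch of the active set method, including the optional initial index set that enables the warm start discussed later in the paper; the residual-based termination mirrors the tolerance δ, while the variable names and pruning details are our assumptions.

```python
import numpy as np

def nnls_active_set(G, b, delta, P0=()):
    """Sparse non-negative least squares: min_{rho >= 0} ||G rho - b||_2,
    terminated once the residual drops below delta. P0 is an optional initial
    active set (warm start). Returns the weights rho, the active set P, and
    the number of least-square solves."""
    M, N = G.shape
    P = list(dict.fromkeys(P0))          # keep order, drop duplicates
    rho = np.zeros(N)
    n_ls = 0

    def restricted_ls():
        """Solve the LS problem restricted to P; prune non-positive weights."""
        nonlocal n_ls, P
        while P:
            z, *_ = np.linalg.lstsq(G[:, P], b, rcond=None)
            n_ls += 1
            if np.all(z > 0):
                rho[:] = 0.0
                rho[P] = z
                return
            # step toward z until the first weight hits zero, then prune
            zp = rho[P]
            mask = z <= 0
            alpha = np.min(zp[mask] / (zp[mask] - z[mask]))
            rho[:] = 0.0
            rho[P] = np.maximum(zp + alpha * (z - zp), 0.0)
            P = [p for p, v in zip(P, rho[P]) if v > 0]
        rho[:] = 0.0

    if P:                                 # warm start: fit on the initial set
        restricted_ls()
    while np.linalg.norm(G @ rho - b) > delta:
        w = G.T @ (b - G @ rho)           # negative gradient of the objective
        w[P] = -np.inf
        i_star = int(np.argmax(w))
        if w[i_star] <= 0:                # KKT point: cannot improve further
            break
        P.append(i_star)
        restricted_ls()
    return rho, P, n_ls
```

Re-running the solver with the previously converged active set requires a single restricted least-square solve, which is the mechanism exploited by the progressive EQ construction.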

Least-square Petrov-Galerkin formulation of steady problems
Given the reduced-order basis (ROB) Z = [ζ_1, ..., ζ_n], we consider the LSPG formulation

find α_μ ∈ arg min_{β ∈ R^n} |||R^eq_μ(Z β, ·)|||_{Ŷ'},   (11a)

where Ŷ = span{ψ_i}_{i=1}^m is a suitable test space that is chosen below, and the empirical residual R^eq is a weighted combination of the element-wise and facet-wise residuals,

R^eq_μ(w, v) = Σ_{k=1}^{N_e} ρ^eq,e_k r^e_k(w, v; μ) + Σ_{j=1}^{N_f} ρ^eq,f_j r^f_j(w, v; μ),   (11b)

where ρ^eq,e ∈ R_+^{N_e} and ρ^eq,f ∈ R_+^{N_f} are sparse vectors of non-negative weights. Provided that {ψ_i}_{i=1}^m is an orthonormal basis of Ŷ, we can rewrite (11a) as

α_μ ∈ arg min_{β ∈ R^n} ( Σ_{i=1}^m R^eq_μ(Z β, ψ_i)^2 )^{1/2},   (12)

which can be efficiently solved using the Gauss-Newton method (GNM). Note that (12) does not explicitly depend on the choice of the test norm |||·|||: the dependence on the norm is implicit in the choice of {ψ_i}_{i=1}^m. Formulation (11)-(12) depends on the choice of the test space Ŷ and the empirical weights ρ^eq,e, ρ^eq,f: in the remainder of this section, we address the construction of these ingredients.
We remark that Carlberg [1] considered a different projection method, which is based on the minimization of the Euclidean norm of the discrete residual: due to the particular choice of the test norm, the approach of [1] does not require the explicit construction of the empirical test space. We further observe that Yano and collaborators [22,23] have considered different formulations of the empirical residual (11b). A thorough comparison between different projection methods and different hyper-reduction techniques is beyond the scope of the present study.
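The Gauss-Newton iteration used to solve nonlinear least-square problems such as (12) can be sketched generically; the callables F and J below are placeholders for the reduced residual and its Jacobian, which are problem-specific assumptions.

```python
import numpy as np

def gauss_newton(F, J, alpha0, tol=1e-12, max_it=100):
    """Gauss-Newton iteration for min_alpha ||F(alpha)||_2: at each step,
    solve the linearized least-square problem J(alpha) d = -F(alpha) for the
    update d. F maps R^n -> R^m (m >= n); J returns the (m, n) Jacobian."""
    alpha = np.asarray(alpha0, dtype=float)
    for _ in range(max_it):
        r = F(alpha)
        d, *_ = np.linalg.lstsq(J(alpha), -r, rcond=None)
        alpha = alpha + d
        if np.linalg.norm(d) < tol:
            break
    return alpha
```

For zero-residual problems, as is the case when the ROM can reproduce the data exactly, the iteration converges locally at a quadratic rate.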

Construction of the empirical test space
As discussed in [2], the test space Ŷ should approximate the Riesz representers of the functionals associated with the action of the Jacobian on the elements of the trial ROB. Given the snapshot set of HF solutions {u^hf_μ : μ ∈ P_train} with P_train = {μ_k}_{k=1}^{n_train} ⊂ P and the ROB {ζ_i}_i, we hence apply POD to the test snapshot set {ψ_{k,i}}_{k,i} defined by

((ψ_{k,i}, v)) = J^hf_{μ_k}[u^hf_{μ_k}](ζ_i, v), ∀ v ∈ X_hf,   (13)

where J^hf_μ[w] : X_hf × X_hf → R denotes the Fréchet derivative of the HF residual at w. It is also useful to provide an approximate computation of the right-hand side of (13) through finite differences,

((ψ_{k,i}, v)) ≈ (1/ε) ( R^hf_{μ_k}(u^hf_{μ_k} + ε ζ_i, v) − R^hf_{μ_k}(u^hf_{μ_k}, v) ), ∀ v ∈ X_hf,   (14)

with |ε| ≪ 1. The evaluation of the right-hand side of (14) involves the computation of the residual at u^hf_{μ_k} + ε ζ_i for i = 1, ..., n; on the other hand, the evaluation of (13) requires the computation of the Jacobian matrix at u^hf_{μ_k} and the post-multiplication by the algebraic counterpart of Z. Both (13) and (14), which are equivalent in the limit ε → 0, are used in the incremental approach of "Progressive construction of empirical test space and quadrature".
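The finite difference surrogate (14) amounts to two residual evaluations per test snapshot and avoids assembling the Jacobian; a sketch with an algebraic residual function, which is an assumption standing in for the FE residual:

```python
import numpy as np

def fd_jacobian_action(res, u, zeta, eps=1e-6):
    """Finite difference surrogate for the Jacobian action J[u](zeta, .):
    (res(u + eps*zeta) - res(u)) / eps, where res returns the algebraic
    residual vector evaluated at a state vector."""
    return (res(u + eps * zeta) - res(u)) / eps
```

The approximation error is first order in eps, which is why a small value (the paper uses 1e-6) is adequate.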

Construction of the empirical quadrature rule
Following [5], we seek ρ^eq,e ∈ R_+^{N_e} and ρ^eq,f ∈ R_+^{N_f} in (11b) such that (i) (efficiency constraint) the number of nonzero entries in ρ^eq,e, ρ^eq,f, nnz(ρ^eq,e) and nnz(ρ^eq,f), is as small as possible; (ii) (constant function constraint) the constant function is integrated correctly,

Σ_{k=1}^{N_e} ρ^eq,e_k |D_k| ≈ |Ω|;   (15)

(iii) (manifold accuracy constraint) for all μ ∈ P_train,eq = {μ_k}_{k=1}^{n_train+n_train,eq}, the empirical residual satisfies the accuracy condition

R^eq_μ(û_μ, ψ_i) ≈ R^hf_μ(û_μ, ψ_i), i = 1, ..., m,   (16)

where R^hf_μ corresponds to the choice ρ^eq,e_1 = ... = ρ^eq,e_{N_e} = 1 (and analogously for ρ^eq,f), and P_train = {μ_k}_{k=1}^{n_train} is the set of parameters for which the HF solution is available. In the remainder, we use P_train,eq = P_train.
By tedious but straightforward calculations, we find that, for suitable row vectors collecting the element-wise and facet-wise contributions, the constraints (15) and (16) can be expressed in the algebraic form

G ρ^eq ≈ b, ρ^eq = [ρ^eq,e; ρ^eq,f].   (17c)

In conclusion, the problem of finding the sparse weights ρ^eq,e, ρ^eq,f can be recast as a sparse representation problem

min_{ρ ∈ R_+^{N_e+N_f}} nnz(ρ) s.t. ‖G ρ − b‖_2 ≤ δ,   (18)

where nnz(ρ) is the number of non-zero entries in the vector ρ, δ > 0 is a user-defined tolerance, and b = G ρ^hf, where ρ^hf denotes the vector of HF quadrature weights. Following [4], we resort to the NNLS algorithm discussed in "Preliminary definitions and tools" (cf. Algorithm 3) to find an approximate solution to (18).

A posteriori error indicator
The final element of the formulation is the a posteriori error indicator that is employed for the parameter exploration. We here consider the residual-based error indicator (cf. [16])

Δ_μ := |||R^hf_μ(û_μ, ·)|||',   (19)

that is, the dual norm of the HF residual evaluated at the ROM estimate. Note that the evaluation of (19) requires the solution to a symmetric positive definite linear system of size N_hf: it is hence ill-suited for real-time online computations; nevertheless, in our experience the offline cost associated with the evaluation of (19) is comparable with the cost that is needed to solve the ROM; clearly, this is related to the size of the mesh and is hence strongly problem-dependent.
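In algebraic form, evaluating (19) reduces to one solve with the SPD inner-product matrix (the Riesz representer of the residual); a minimal sketch, where the matrix and vector names are ours:

```python
import numpy as np

def residual_indicator(M, r):
    """Residual-based error indicator: the dual norm of the algebraic residual
    vector r with respect to the SPD inner-product matrix M, computed as
    sqrt(r^T M^{-1} r) via one Cholesky solve (the Riesz representer)."""
    L = np.linalg.cholesky(M)
    e = np.linalg.solve(L.T, np.linalg.solve(L, r))  # e = M^{-1} r
    return float(np.sqrt(r @ e))
```

With M equal to the identity, the indicator reduces to the Euclidean norm of the residual, which recovers the discrete-residual approach of [1].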

Overview of the computational procedure
Algorithm 4 provides a detailed summary of the construction of the ROM at each step of Algorithm 1 (cf. Line 7). Some comments are in order. The cost of Algorithm 4 is dominated by the assembly of the test snapshot set and by the solution to the NNLS problem. We also notice that the storage of S^test_n scales with O(n^2) and is hence the dominant memory cost of the offline procedure.

Vanilla POD-greedy for Galerkin time-marching ROMs
We denote by u_μ ∈ R^d the displacement field, by σ_μ ∈ R^{d×d} the Cauchy stress tensor, and by ε_μ = ∇^s u_μ the strain tensor, with ∇^s w = ½(∇w + ∇w^T); we further introduce the vector of internal variables γ_μ ∈ R^{d_int}. Then, we introduce the quasi-static equilibrium equations (20) (completed with suitable boundary and initial conditions), whose constitutive functions are suitable parametric mappings that encode the material constitutive law; note that Newton's law (20)_1 does not include the inertial term; the temporal evolution is hence entirely driven by the constitutive law (20)_3. Equation (20) is discretized using the FE method in space and a one-step finite difference (FD) method in time. Given the parameter μ ∈ P, the FE space X_hf and the time grid {t^(k)}_{k=1}^K, we seek the sequence u_μ = {u^(k)_μ}_{k=1}^K that satisfies the discrete equations (21), where the elemental residuals satisfy (22). Given the reduced space Z = span{ζ_i}_{i=1}^n ⊂ X_hf, the time-marching hyper-reduced Galerkin ROM of (21)-(22) reads as (23): given μ ∈ P, for k = 1, ..., K, solve the Galerkin projection of the hyper-reduced residual onto Z, where ρ^eq,e ∈ R_+^{N_e} and ρ^eq,f ∈ R_+^{N_f} are suitably-chosen sparse vectors of weights. We observe that the Galerkin ROM (23) does not reduce the total number of time steps: the solution to (23) hence requires the solution to a sequence of K nonlinear problems of size n and is thus likely much more expensive than the solution to (Petrov-)Galerkin ROMs for steady-state problems. Furthermore, several independent studies have shown that residual-based a posteriori error estimators for (23) are typically much less sharp and reliable than their counterparts for steady-state problems. These observations have motivated the development of space-time formulations [24]: to our knowledge, however, space-time methods have not been extended to problems with internal variables.
The abstract Algorithm 1 can be readily extended to time-marching ROMs for unsteady PDEs: the POD-greedy method [7] combines a greedy search in the parameter domain with a temporal compression based on POD. At each iteration it of the algorithm, we update the reduced space Z using the newly-computed trajectory {u^(k)_{μ*,it}}_{k=1}^K and we update the quadrature rule. The reduced space Z should be updated using hierarchical methods that avoid the storage of the full space-time trajectory for all sampled parameters: the hierarchical approximate POD (HAPOD, see [11] and also "Empirical test space") or the hierarchical approach in [25, section 3.5] that generates nested spaces; we refer to [26, section 3.2.1] for further details and extensive numerical investigations for a model problem with internal variables. Here, we consider the nested approach of [25]: we denote by n_it the dimension of the reduced space Z_it at the it-th iteration, and we denote by n_new = n_it − n_{it−1} the number of modes added at each iteration, which is chosen adaptively according to (24) for some tolerance tol > 0, with Δt^(k) = t^(k) − t^(k−1), k = 1, ..., K. The quadrature rule is obtained using the same procedure described in "Empirical quadrature": exploiting (24), it is easy to verify that the matrix G (cf. (17c)) can be rewritten as in (25) (we omit the details). Note that G^it_acc corresponds to the columns of the matrix G associated with the manifold accuracy constraints (cf. (16)) at the it-th iteration of the greedy procedure.
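The nested basis update of [25, section 3.5] can be sketched as follows: the new trajectory is projected onto the orthogonal complement of the current space and the leading POD modes of the residue are appended. The Euclidean inner product and the fixed number of new modes are our simplifications; the paper weights the temporal compression by the time steps.

```python
import numpy as np

def pod_greedy_update(Z, U_new, n_new):
    """Nested POD-greedy basis update: Z (N, n_prev) has orthonormal columns;
    U_new stacks the newly computed trajectory snapshots columnwise. The new
    modes are the leading POD modes of the projection residue, so the updated
    space contains the previous one (Euclidean inner product assumed)."""
    if Z is None or Z.shape[1] == 0:
        R = U_new
    else:
        R = U_new - Z @ (Z.T @ U_new)    # orthogonal-complement projection
    Q, s, _ = np.linalg.svd(R, full_matrices=False)  # POD of the residue
    Z_add = Q[:, :n_new]
    return Z_add if Z is None else np.hstack([Z, Z_add])
```

Because the residue is orthogonal to the current space, the appended modes are automatically orthonormal to the existing ones and the spaces are nested by construction.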

Accelerated construction of PROMs
"Progressive construction of empirical test space and quadrature" discusses the incremental strategies proposed in this work to reduce the costs associated with the construction of the empirical test space and the empirical quadrature in Algorithm 4; "Multi-fidelity sampling" summarizes the multi-fidelity sampling strategy, which is designed to reduce the total number of greedy iterations required by Algorithm 1. "Progressive construction of empirical test space and quadrature" and "Multi-fidelity sampling" focus on LSPG ROMs for steady problems; in "Extension to unsteady problems" we discuss the extension to Galerkin ROMs for unsteady problems.

Empirical test space
By reviewing Algorithms 1 and 4, we notice that the test snapshot set S^test_n satisfies a nested structure (26); therefore, at each iteration, it suffices to solve 2n − 1 (as opposed to n^2) Riesz problems of the form (13) to define S^test_n. As in [16], we rely on Cholesky factorization with fill-in reducing permutations of rows and columns to reduce the cost of solving the Riesz problems. In the numerical experiments, we rely on (13), which involves the assembly of the Jacobian matrix, to compute the representers associated with the newly computed snapshot, while we consider the finite difference approximation (14), with ε = 10^{−6}, to compute the representers associated with the new trial mode and the previously sampled snapshots. In order to lower the memory costs associated with the storage of the test snapshot set S^test_n, and also the cost of performing POD, we consider a hierarchical approach to construct the test space Ŷ. In this work, we apply the (distributed) hierarchical approximate proper orthogonal decomposition: the HAPOD approach is related to incremental singular value decomposition [27] and guarantees near-optimal performance with respect to the standard POD (cf. [11]). Given the POD space {ψ_i}_{i=1}^m such that ((ψ_i, ψ_j)) = δ_{i,j}, the corresponding eigenvalues {λ_i}_{i=1}^m, and the new set of snapshots S_incr, HAPOD corresponds to applying POD to the snapshot set

{√λ_i ψ_i}_{i=1}^m ∪ S_incr,   (27)

which combines information from current and previous iterations. Note that the modes {ψ_i}_{i=1}^m are scaled to properly take into account the energy content of the modes in the subsequent iterations. Note also that the storage cost of the method scales with m + 2n − 1, which is much lower than n^2, provided that m ≪ n^2. Note also that the POD spaces {Ŷ_n}_n, which are generated at each iteration of Algorithm 1 using (27), are not nested.

Empirical quadrature
Exploiting (17), it is straightforward to verify that, if the test spaces are nested, that is Ŷ_{n−1} ⊂ Ŷ_n, the EQ matrix admits the decomposition (28), in which the new block collects the rows associated with the newly added test modes. We hence observe that we can reduce the cost of assembling the EQ matrix G_n by exploiting (28), provided that we rely on nested test spaces: since, in our experience, the cost of assembling G_n is negligible, the use of non-nested test spaces does not hinder offline performance.
On the other hand, due to the strong link between consecutive EQ problems that are solved at each iteration of the greedy procedure, we propose to initialize the NNLS Algorithm 3 using the solution from the previous greedy iteration, that is, the active set P^{(n−1)} computed at iteration n − 1 (cf. (29)). We provide extensive numerical investigations to assess the effectiveness of this choice.

Summary of the incremental weak-greedy algorithm
Algorithm 5 reviews the full incremental generation of the ROM. In the numerical experiments, we set m = 2n: this implies that the storage of S_incr scales with 4n − 3, as opposed to n^2.

Algorithm 5 Incremental generation of the ROM
Inputs: {(λ_i, ψ_i)}_{i=1}^m POD eigenpairs for the test space, m size of the test space, P^(n−1) initial condition for Algorithm 3.
Outputs: {(λ_i, ψ_i)}_i new POD eigenpairs for the test space, P^(n) initial condition for Algorithm 3 at the subsequent iteration.
...
4: Assemble the EQ matrix (cf. (17)).
5: Solve the NNLS problem (cf. Algorithm 3) with initial condition given by P^(n−1).
6: Update the ROM data structures and compute P^(n) using (29).

Multi-fidelity sampling
The incremental strategies of "Progressive construction of empirical test space and quadrature" do not affect the total number of greedy iterations required by Algorithm 1 to achieve the desired accuracy; furthermore, they do not address the reduction of the cost of the HF solves. Following [16], we here propose to resort to coarser simulations to learn the initial training set P* (cf. Line 1 of Algorithm 1) and to initialize the HF solver. Algorithm 6 summarizes the computational procedure.

Algorithm 6 Multi-fidelity weak-greedy algorithm
1: Define two meshes T^0_hf and T_hf, where T^0_hf is coarser than T_hf.
2: Generate a ROM for the HF model associated with the coarse mesh T^0_hf (e.g., using Algorithm 1): Z^0, α^0 : P → R^n.
3: Solve the coarse ROM for all μ ∈ P_train and collect the generalized coordinates {α^0_μ}_{μ ∈ P_train}.
4: Apply the strong-greedy Algorithm 2 to {α^0_μ}_{μ ∈ P_train} to select the initial sample P*.
5: Apply Algorithm 1 to the HF model associated with the fine mesh T_hf; use P* to initialize the greedy method, and use the estimate μ → Z^0 α^0_μ to initialize the HF solver.
Some comments are in order.
• In the numerical experiments, we choose the cardinality n_0 of P* according to (8) with tol_sg = tol (cf. Algorithm 1). Note that, since the strong-greedy algorithm is applied to the generalized coordinates of the coarse ROM, n_0 cannot exceed the size n of the ROM.
• We observe that increasing the cardinality of P* ultimately leads to a reduction of the number of sequential greedy iterations; it hence enables much more effective parallelization of the offline stage. Note also that Algorithm 6 can be coupled with parametric mesh adaptation tools to build an effective problem-aware mesh T_hf (cf. [16]); in this work, we do not exploit this feature of the method.
• We observe that the multi-fidelity Algorithm 6 critically depends on the choice of the coarse grid T^0_hf: an excessively coarse mesh T^0_hf might undermine the quality of the initial sample P* and of the initial condition for the HF solve, while an excessively fine mesh T^0_hf reduces the computational gain. In the numerical experiments, we provide extensive investigations of the influence of the coarse mesh on performance.
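The strong-greedy selection applied to the coarse generalized coordinates can be sketched as follows; the Euclidean inner product and the relative stopping criterion are our simplifications, and the function name is ours.

```python
import numpy as np

def strong_greedy(U, tol):
    """Strong-greedy selection in the spirit of Algorithm 2: repeatedly pick
    the snapshot (column of U) with the largest projection error onto the
    current space, orthonormalize it, and stop once the worst error drops
    below tol relative to the largest snapshot norm. In the multi-fidelity
    setting, the columns of U would be the generalized coordinates of the
    coarse ROM over the training set."""
    N, n_train = U.shape
    scale = np.max(np.linalg.norm(U, axis=0))
    I_sg = []
    Q = np.zeros((N, 0))
    for _ in range(min(N, n_train)):
        R = U - Q @ (Q.T @ U)                 # projection residues
        errs = np.linalg.norm(R, axis=0)
        i_star = int(np.argmax(errs))
        if errs[i_star] <= tol * scale:
            break
        Q = np.column_stack([Q, R[:, i_star] / errs[i_star]])  # Gram-Schmidt
        I_sg.append(i_star)
    return I_sg
```

Because the selection runs on coarse quantities only, its cost is negligible compared to a single fine-mesh HF solve, which is what makes the initialization worthwhile.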

Extension to unsteady problems
Acceleration of the POD-greedy algorithm for the Galerkin ROM (23) relies on two independent building blocks: first, the incremental construction of the quadrature rule; second, a two-fidelity sampling strategy. Regarding the quadrature procedure, we notice that at each greedy iteration the matrix G in (25) admits the decomposition in (28). If the number of new modes is modest compared to n_{it−1}, the rows of G^(it)_new are significantly less numerous than the columns of G^(it): we can hence reduce the cost of the construction of G^(it) by keeping in memory the EQ matrix from the previous iteration; furthermore, the decomposition (28) motivates the use of the active set of weights from the previous iterations to initialize the NNLS algorithm at the current iteration. On the other hand, the extension of the two-fidelity sampling strategy in Algorithm 6 simply relies on the generalization of the strong-greedy procedure (Algorithm 2) to unsteady problems, which is illustrated in the algorithm below.
Algorithm 7 Strong-greedy algorithm for unsteady problems
Inputs: {{u^(k)_μ}_{k=1}^K : μ ∈ P_train} trajectories, (·,·)_sg inner product.
Outputs: I_sg indices of selected snapshots.
...
4: Update the reduced space (cf. (24)).
5: ...
6: end for

Algorithm 7 takes as input a set of trajectories and returns the indices of the selected parameters. To shorten the notation, we here assume that the time grid is the same for all parameters; however, the algorithm can be readily extended to cope with parameter-dependent temporal discretizations. We further observe that the procedure depends on the data compression strategy employed to update the reduced space at each greedy iteration: in this work, we consider the same hierarchical strategy (cf. [25, section 3.5]) employed in the POD-(weak-)greedy algorithm.

Numerical results
We present numerical results for a steady compressible inviscid flow past a LS89 blade (cf. "Transonic compressible flow past an LS89 blade"), and for an unsteady nonlinear mechanics problem that simulates the long-time mechanical response of the standard section of a containment building under external loading (cf. "Long-time mechanical response of the standard section of a containment building under external loading"). Simulations of "Transonic compressible flow past an LS89 blade" are performed in Matlab 2022a [28] using an in-house code, and executed on a commodity Linux workstation (RAM 32 GB, Intel i7 CPU 3.20 GHz x 12). The HF simulations of "Long-time mechanical response of the standard section of a containment building under external loading" are performed using the FE software code_aster [29] and executed on a commodity Linux workstation (RAM 32 GB, Intel i7-9850H CPU 2.60 GHz x 12); on the other hand, the MOR procedure relies on an in-house Python code and is executed on a Windows workstation (RAM 16 GB, Intel i7-9750H CPU 2.60 GHz x 12).

Model problem
We consider the problem of estimating the solution to the two-dimensional Euler equations past an array of LS89 turbine blades; the same model problem is considered in [30], for a different parameter range. We consider the computational domain depicted in Fig. 1a; we prescribe total temperature, total pressure and flow direction at the inflow, static pressure at the outflow, a non-penetration (wall) condition on the blade, and periodic boundary conditions on the lower and upper boundaries. We study the sensitivity of the solution with respect to the height H of the channel and the free-stream Mach number. We deal with geometry variations through a piecewise-smooth mapping associated with the partition in Fig. 1a. We set H_ref = 1 and we define the curve x_1 → f_btm(x_1) that describes the lower boundary of the reference domain Ω = Ω(H = 1); then, we define ΔH > 0 such that x_1 → f_btm(x_1) + ΔH and x_1 → f_btm(x_1) + H − ΔH do not intersect the blade for any H ∈ [0.9, 1.1]; finally, we define the geometric mapping (we omit the explicit expression). Fig. 1b, c show the distribution of the Mach field for μ^(1) = [0.95, 0.78] and μ^(2) = [1.05, 0.88]. We notice that for large values of the free-stream Mach number the solution develops a normal shock on the upper side of the blade and two shocks at the trailing edge: effective approximation for higher values of the Mach number requires the use of nonlinear approximations and is beyond the scope of the present work.

Results
We consider a hierarchy of six P2 meshes with N_e = 1827, 2591, 3304, 4249, 7467, 16353 elements, respectively; Fig. 2a-c show three computational meshes. Figure 2d shows the maximum and mean errors between the HF solution u_μ^{hf,(6)} on the finest mesh and the corresponding HF solution associated with the i-th mesh, over five randomly-chosen parameters in P. We remark that the error is measured in the reference configuration Ω, which corresponds to H = 1. We consider the L^2(Ω) norm ‖•‖ = ( ∫_Ω (•)^2 dx )^{1/2}.

Figure 3 compares the performance of the standard ("std") greedy method with the performance of the incremental ("incr") procedures of "Progressive construction of empirical test space and quadrature" for the coarse mesh (mesh 1). In all the tests below, we consider a training space P_train based on a ten by ten equispaced discretization of the parameter domain. Figure 3a shows the number of iterations of the NNLS Algorithm 3, while Fig. 3b shows the wall-clock cost (in seconds) on a commodity laptop, and Fig. 3c shows the percentage of sampled weights nnz(ρ^eq)/(N_e + N_f) × 100%. We observe that the proposed initialization of the active set method leads to a significant reduction of the total number of iterations, and to a non-negligible reduction of the total wall-clock cost, without affecting the performance of the method. Figure 3d shows the cost of constructing the test space: we notice that the progressive construction of the test space enables a significant reduction of offline costs. Finally, Fig. 3e shows the averaged out-of-sample performance of the LSPG ROM over n_test = 20 randomly-chosen parameters P_test, and Fig. 3f shows the online costs: we observe that the standard and the incremental approaches lead to nearly-equivalent results for all values of the ROB size n considered.
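The warm start of the active-set method can be sketched as follows. This is a minimal, simplified variant of Lawson-Hanson NNLS (negative entries in the restricted solve are dropped outright rather than handled with the standard line search), not a transcription of the paper's Algorithm 3; the function name nnls_warm and the tolerances are ours. The idea is that the support of the quadrature rule found at the previous greedy iteration seeds the passive set of the new solve.

```python
import numpy as np

def nnls_warm(G, b, passive=None, tol=1e-10, max_outer=500):
    """Active-set NNLS, min ||G x - b||_2 s.t. x >= 0, optionally
    warm-started with a guess of the passive (nonzero) set."""
    m, n = G.shape
    P = list(passive) if passive is not None else []
    x = np.zeros(n)
    for it in range(1, max_outer + 1):
        # restore feasibility: least squares on the passive set, dropping
        # variables that come out non-positive (simplified inner loop)
        while P:
            z = np.linalg.lstsq(G[:, P], b, rcond=None)[0]
            if np.all(z > 0):
                break
            P = [j for j, zj in zip(P, z) if zj > 0]
        x[:] = 0.0
        if P:
            x[P] = z
        w = G.T @ (b - G @ x)      # dual vector (negative gradient)
        if P:
            w[P] = -np.inf         # passive variables need not be revisited
        j = int(np.argmax(w))
        if w[j] <= tol:            # KKT conditions met: done
            return x, it
        P.append(j)
    return x, max_outer
```

When the previous support is close to the new one, the warm-started solve terminates after adding only a handful of columns, which is consistent with the iteration counts reported in Fig. 3a.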
Figure 4 replicates the tests of Fig. 3 for the fine mesh (mesh 6). As for the coarser grid, we find that the progressive construction of the test space and of the quadrature rule does not hinder the online performance of the LSPG ROM and ensures significant offline savings. We notice that the relative error over the test set is significantly larger: reduction of the mesh size leads to more accurate approximations of sharp features (shocks, wakes) that are troublesome for linear approximations. On the other hand, we notice that online costs are nearly the same as for the coarse mesh for the corresponding value of the ROB size n: this demonstrates the effectiveness of the hyper-reduction procedure.
Figure 5 investigates the effectiveness of the sampling strategy based on the strong-greedy algorithm. We rely on Algorithm 6 to identify the training set of parameters P* for different choices of the coarse mesh T_hf^0 = T_hf^(i), for i = 1, ..., 6, and for T_hf = T_hf^(6). Then, we measure performance in terms of the maximum relative projection error E_n^{proj,(i)} (32) over P_train, where P*^{(i),n} is the set of the first n parameters selected through Algorithm 6 based on the coarse mesh T_hf^(i). Figure 5a shows the behavior of the projection error E_n^{proj,(i)} (32) for three different choices of the coarse mesh; to provide a concrete reference, we also report the performance of twenty sequences of reduced spaces obtained by randomly selecting sequences of parameters in P_train. Figure 5b, c show the parameters selected through Algorithm 6 for two different choices of the coarse mesh: we observe that the selected parameters are clustered in the proximity of Ma_∞ = 0.9.
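The parameter selection driven by the relative projection error can be sketched with a simple strong-greedy routine over a snapshot matrix (one column per training parameter, e.g. computed on the coarse mesh). This is a hedged illustration, not a verbatim transcription of the paper's Algorithm 6; the function name and tolerance are ours.

```python
import numpy as np

def strong_greedy(U, n_max, tol=1e-6):
    """Strong-greedy selection on a snapshot matrix U (N_dofs x N_params).
    Returns the indices of the selected parameters and the orthonormal
    basis Z spanned by the corresponding snapshots."""
    U = np.asarray(U, dtype=float)
    norms = np.linalg.norm(U, axis=0)
    Z = np.empty((U.shape[0], 0))
    selected = []
    for _ in range(n_max):
        # relative projection error of every snapshot onto span(Z)
        R = U - Z @ (Z.T @ U)
        err = np.linalg.norm(R, axis=0) / np.maximum(norms, 1e-14)
        j = int(np.argmax(err))
        if err[j] <= tol:
            break
        selected.append(j)
        # Gram-Schmidt: append the normalized residual of the winner
        z = R[:, j]
        Z = np.column_stack([Z, z / np.linalg.norm(z)])
    return selected, Z
```

Running this on coarse-mesh snapshots and reusing the selected parameters to train the fine-mesh ROM is the essence of the two-fidelity strategy assessed in Fig. 5.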
Table 1 compares the costs of the standard weak-greedy algorithm ("vanilla"), the weak-greedy algorithm with progressive construction of the test space and the quadrature rule ("incr"), and the two-fidelity Algorithm 6 with coarse mesh given by T_hf^0 = T_hf^(1) ("incr+MF"). To ensure a fair comparison, we impose that the final ROM has the same number of modes (twenty) for all cases. Training of the coarse ROM in Algorithm 6 is based on the weak-greedy algorithm with progressive construction of the test space and the quadrature rule, with tolerance tol = 10^{-3}: this leads to a coarse ROM with n_0 = 14 modes, which corresponds to an initial training set P* of cardinality 14 in the greedy method (cf. Line 5, Algorithm 6). The ROMs associated with the three different training strategies show comparable performance in terms of online cost and L^2 errors.
For the fine mesh T_hf^(6), the two-fidelity training leads to a reduction of offline costs of roughly 25% with respect to the vanilla implementation and of roughly 10% with respect to the incremental implementation; for the fine mesh T_hf^(5) (which has roughly half as many elements as the finer grid), the two-fidelity training leads to a reduction of offline costs of roughly 39% with respect to the vanilla implementation and of roughly 31% with respect to the incremental implementation. In particular, we notice that the initialization based on the coarse model, which is the same in both cases, is significantly more effective for the HF model associated with mesh 5 than for the HF model associated with mesh 6. This empirical finding strengthens the observation made in "Multi-fidelity sampling" that the choice of the coarse approximation is a compromise between overhead costs, which increase with the size of the coarse mesh, and accuracy of the coarse solution.
We remark that for the first two cases the HF solver is initialized using the solution for the closest parameter in the training set for n = 2, 3, ...; for n = 1, we rely on a coarse solver and on a continuation strategy with respect to the Mach number: this initialization is inherently sequential. On the other hand, in the two-fidelity procedure, the HF solver is initialized using the reduced-order solver trained on coarse HF data: this choice enables trivial parallelization of the HF solves for the initial parameter sample P*. We hence expect to achieve higher computational savings for parallel computations.

Model problem
We study the long-time mechanical response of a three-dimensional standard section of a nuclear containment building (NCB): the highly-nonlinear mechanical response is activated by thermal effects, and its simulation requires the coupling of thermal, hydraulic and mechanical (THM) phenomena. A thorough presentation of the mathematical model and of the MOR procedure is provided in [31]. The ultimate goal of the simulation is to predict the temporal behavior of several quantities of interest (QoIs), such as water saturation in concrete, delayed deformations, and stresses: these QoIs are directly related to the leakage rate, whose estimate is of paramount importance for the design of NCBs. The deformation field is also important to conduct validation and calibration studies against real-world data. Following [32], we consider a weak THM coupling procedure to model deferred deformations within the material; weak coupling is appropriate for large structures under normal operational loads. The MOR process is exclusively applied to estimate the mechanical response: the results of the thermal and hydraulic calculations are indeed used as input data for the mechanical calculations, which constitute the computational bottleneck of the entire simulation workflow. To model the mechanical response of the concrete structure, we consider a three-dimensional nonlinear rheological creep model with internal variables; on the other hand, we consider a one-dimensional linear elastic model for the prestressing cables: the state variables are hence the displacement field of the three-dimensional concrete structure and that of the one-dimensional steel cables. We assume that the whole structure satisfies the small-strain and small-displacement hypotheses. To establish connectivity between concrete and steel nodes, a kinematic linkage is implemented: a point within the steel structure and its corresponding point within the concrete structure are assumed to share identical displacements.
We study the solution behavior with respect to two parameters: the desiccation creep viscosity (η_dc) and the basic creep consolidation parameter (κ). Figure 6 shows the behavior of (a) the normal force on a horizontal cable, and of the (b) tangential and (c) vertical strains on the outer wall of the standard section of the containment building, for three distinct parameter values μ^(i) = (5·10^9, κ^(i)), with κ^(i) ∈ {10^{-5}, 10^{-4}, 10^{-3}}, i = 1, 2, 3. The notation "-E" indicates that the HF data are associated with the outer face of the structure. Note that the value of the consolidation parameter κ affects the rate of decay of the various quantities.

Results
High-fidelity solver. We consider two distinct three-dimensional meshes, depicted in Fig. 7: a coarse mesh of N_e = 784 three-dimensional hexahedral elements and a refined mesh with N_e = 1600 elements. The mesh features the geometry of a portion of the building halfway up the barrel, and is crossed by two vertical and three horizontal cables.
We consider an adaptive time-stepping scheme: approximately 45 to 50 time steps are needed to reach the specified final time, for all parameters in P and for both meshes. Table 2 provides an overview of the costs of the HF solver over the training set for the two meshes: we observe that the wall-clock cost of a full HF simulation is roughly nine minutes for the coarse mesh and seventeen minutes for the refined mesh. We consider a 7 by 7 training set P_train and a 5 by 5 test set; parameters are logarithmically spaced in both directions. Figure 8 showcases the evolution of the normal force in the central horizontal cable (N_H2), and of the vertical (ε_zz) and tangential (ε_tt) deformations on the outer surface of the geometry. Table 3 shows the behavior of the maximum and average relative errors for the three quantities of interest of Fig. 8. We notice that the two meshes lead to nearly-equivalent results for this model problem. At each greedy iteration: 1. we compute the HF trajectory {u^{hf,(k)}_{μ*,it}}_{k=1}^{K} for the new parameter μ*,it; 2. we update the reduced space Z using (24) with tolerance tol = 10^{-5}; 3. we update the quadrature rule using the (incremental) strategy described in "Extension to unsteady problems".
Below, we assess (i) the effectiveness of the incremental strategy for the construction of the quadrature rule, and (ii) the impact of the sampling strategy on performance. Towards this end, we compute the projection error E^proj (34), where ||| • ||| denotes the discrete L^2(0, T; ‖•‖) norm; as in [31], we consider ‖•‖ = ‖•‖_2. Further results on the prediction error and online speedups of the ROM are provided in [31].
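The projection error in the discrete L^2(0, T; ‖•‖_2) norm can be sketched as follows; a minimal illustration (function name and argument conventions are ours), where the time-step sizes are passed explicitly since the scheme above is adaptive.

```python
import numpy as np

def projection_error(snapshots, Z, dt):
    """Relative projection error of a trajectory in the discrete
    L^2(0,T; ||.||_2) norm: |||u - Z Z^T u||| / |||u|||.

    snapshots: (N_dofs, K) array, one column per time step;
    Z: orthonormal reduced basis (N_dofs, n);
    dt: (K,) array of time-step sizes of the adaptive scheme."""
    U = np.asarray(snapshots, dtype=float)
    R = U - Z @ (Z.T @ U)                      # component outside span(Z)
    num = np.sqrt(np.sum(dt * np.sum(R**2, axis=0)))
    den = np.sqrt(np.sum(dt * np.sum(U**2, axis=0)))
    return num / den
```

The error equals one for an empty basis and vanishes when Z spans the whole trajectory, which makes it a convenient normalized diagnostic for comparing sampling strategies.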
Figure 9 illustrates the performance of the EQ procedure; in this test, we consider the finer mesh (mesh 2), and we select the parameters {μ*,it}_{it=1}^{maxit=15} using the POD strong-greedy Algorithm 7 based on the HF results on mesh 1. Figure 9a-c show the number of iterations required by Algorithm 3 to meet the convergence criterion, the computational cost, and the percentage of sampled elements, which is directly related to the online costs, for δ = 10^{-4}. As for the previous test case, we observe that the progressive construction of the quadrature rule drastically reduces the number of NNLS iterations without hindering performance. Figure 9d shows the speedup of the incremental method for three choices of the tolerance δ and for several iterations of the iterative procedure. We notice that the speedup increases as the iteration count increases: this can be explained by observing that the percentage of new columns added to the matrix G during the it-th step decays with it (cf. (25)). We further observe that the speedup increases as we decrease the tolerance δ: this is justified by the fact that the number of iterations required by Algorithm 3 increases as we decrease δ.
Figures 10 and 11 investigate the efficacy of the greedy sampling strategy. Figure 10 shows the parameters {μ*,j}_j selected by Algorithm 7 for (a) the coarse mesh (mesh 1) and (b) the refined mesh (mesh 2): we observe that the majority of the sampled parameters are clustered in the bottom left corner of the parameter domain for both meshes. Figure 11 shows the performance (measured through the projection error (34)) of the two samples depicted in Fig. 10 on training and test sets; to provide a concrete reference, we also compare performance with five randomly-generated samples. Interestingly, we observe that for this model problem the choice of the sampling strategy has little effect on performance; nevertheless, also in this case, the greedy procedure based on the coarse mesh consistently outperforms random sampling.

Conclusions
The computational burden of the offline training stage remains an outstanding challenge of MOR techniques for nonlinear, non-parametrically-affine problems. Adaptive sampling of the parameter domain based on greedy methods might contribute to reducing the number of offline HF solves that is needed to meet the target accuracy; however, greedy methods are inherently sequential and introduce non-negligible overhead that might ultimately hinder the benefit of adaptive sampling. To address these issues, in this work we proposed two new strategies to accelerate greedy methods: first, a progressive construction of the ROM, based on HAPOD to speed up the construction of the empirical test space for LSPG ROMs and on a warm start of the NNLS algorithm to determine the empirical quadrature rule; second, a two-fidelity sampling strategy to reduce the number of expensive greedy iterations.
The numerical results of "Numerical results" illustrate the effectiveness and the generality of our methods for both steady and unsteady problems.First, we found that the warm start of the NNLS algorithm enables a non-negligible reduction of the computational cost without hindering performance.Second, we observed that the sampling strategy based on coarse data leads to near-optimal performance: this result suggests that multi-fidelity algorithms might be particularly effective to explore the parameter domain in the MOR training phase.
The empirical findings of this work motivate further theoretical and numerical investigations that we wish to pursue in the future. First, we wish to analyze the performance of multi-fidelity sampling methods for MOR: the ultimate goal is to devise a priori and a posteriori indicators to drive the choice of the mesh hierarchy and of the cardinality n_0 of the set P* in Algorithm 6. Second, we plan to extend the two elements of our formulation, progressive ROM generation and multi-fidelity sampling, to optimization problems for which the primary objective of model reduction is to estimate a suitable quantity of interest (goal-oriented MOR): in this respect, we envision combining our formulation with adaptive techniques for optimization [33][34][35].
are the indices of the boundary facets, and the facet residuals {r_{j,μ}^{(k),f}}_{j ∈ I_bnd} incorporate the boundary conditions. Note that F_{μ,Δt}^{γ} is the FD approximation of the constitutive law (20)_3.

1: Apply Gram-Schmidt orthogonalization to define the new ROB Z.
2: Assemble the test snapshot set S_n^incr in (27).
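The basis-update step above might be sketched as follows; a minimal illustration (function name and tolerance are ours) that extends an orthonormal ROB with new snapshots via Gram-Schmidt, with one re-orthogonalization pass for numerical stability, discarding directions already represented in span(Z).

```python
import numpy as np

def extend_rob(Z, new_snapshots, tol=1e-10):
    """Extend an orthonormal ROB Z (N_dofs x n) with new snapshot
    columns via Gram-Schmidt, skipping nearly-dependent directions."""
    Z = np.asarray(Z, dtype=float)
    for u in np.asarray(new_snapshots, dtype=float).T:
        r = u - Z @ (Z.T @ u)       # remove the component in span(Z)
        r = r - Z @ (Z.T @ r)       # re-orthogonalization for stability
        nr = np.linalg.norm(r)
        if nr > tol * max(np.linalg.norm(u), 1.0):
            Z = np.column_stack([Z, r / nr])
    return Z
```

Because only the new snapshots are orthogonalized against the existing basis, the cost of each update scales with the number of new columns rather than with the full snapshot history.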

Fig. 1 Inviscid flow past an array of LS89 turbine blades. a Partition associated with the geometric map. b, c Behavior of the Mach number for two parameter values

Fig. 2 Compressible flow past a LS89 blade. a-c Three computational meshes. d Behavior of the average and maximum error (31)

Fig. 3 Compressible flow past a LS89 blade. Progressive construction of quadrature rule and test space; coarse mesh (N_e = 1827)

Fig. 4 Compressible flow past a LS89 blade. Progressive construction of quadrature rule and test space; fine mesh (N_e = 16353)
Fig. 5 Compressible flow past a LS89 blade. a Behavior of the projection error (32) for different choices of the coarse mesh, and for random samples. b, c Parameters {μ*,j}_j selected by Algorithm 6 for two different coarse meshes

Fig. 6 Mechanical response of a NCB under external loading. a Normal force on a horizontal cable. b Tangential and c vertical strains on the outer wall of the standard section of the containment building, for η_dc = 5·10^9 and three values of κ on the coarse mesh

Fig. 8 Mechanical response of a NCB under external loading. Comparison of the quantities of interest computed for the two meshes of the standard section: a normal force on a horizontal cable, b tangential and c vertical strains on the outer wall of the standard section of the containment building

Fig. 9 Mechanical response of a NCB under external loading. Progressive construction of the quadrature rule on mesh 2. a-c Performance of standard ("std") and incremental ("incr") EQ procedures for several iterations of the greedy procedure, for δ = 10^{-4}. d Computational speedup for three choices of the tolerance δ

Fig. 11 Mechanical response of a NCB under external loading. Behavior of the projection error E_it^proj (34) for parameters selected by Algorithm 7 based on coarse (mesh 1) and fine (mesh 2) data

Table 1 Compressible flow past a LS89 blade. Overview of offline costs for three computational strategies and two different fine meshes

Table 2 Mechanical response of a NCB under external loading. CPU cost in seconds [s] for the HF simulations on the coarse (mesh 1) and the refined (mesh 2) meshes

Table 3 Mechanical response of a NCB under external loading