Two-Step Greedy Algorithm for Reduced Order Quadratures

Published in: Journal of Scientific Computing

Abstract

We present an algorithm to generate application-specific, global reduced order quadratures (ROQ) for multiple fast evaluations of weighted inner products between parameterized functions. If a reduced basis or any other projection-based model reduction technique is applied, the dimensionality of integrands is reduced dramatically; however, the cost of approximating the integrands by projection still scales with the size of the original problem. In contrast, using discrete empirical interpolation points as ROQ nodes leads to a computational cost which depends linearly on the dimension of the reduced space. Generation of a reduced basis via a greedy procedure requires a training set, which for products of functions can be very large. Since this direct approach can be impractical in many applications, we propose instead a two-step greedy approach targeted toward the approximation of such products. We present numerical experiments demonstrating the accuracy and efficiency of the two-step approach. The presented ROQ are expected to display very fast convergence whenever there is regularity with respect to parameter variation. We find that for the particular application considered here, one driven by gravitational wave physics, the two-step approach speeds up the offline computations needed to build the ROQ by more than two orders of magnitude. Furthermore, the resulting ROQ rule is found to converge exponentially with the number of nodes, and a factor of \(\sim\)50 savings, without loss of accuracy, is observed in evaluations of inner products when the ROQ are used as a downsampling strategy for equidistant samples under the trapezoidal rule. While the primary focus of this paper is on quadrature rules for inner products of parameterized functions, our method can easily be adapted to integration of single parameterized functions, and some examples of this type are considered.


Notes

  1. For generic target applications we envision these functions not to be polynomials, and the weights not to be standard ones. In particular, our proposed reduced order quadrature construction does not rely on classical orthogonal polynomial theory for which the precise form of the weight \(W\) is essential.

  2. In carrying out the RB-greedy algorithm we resolve all the integrals involved within machine precision, so for all practical purposes within that context the discrete and continuum scalar products agree with each other.

  3. For certain applications it may be desirable to evaluate the functions at a different set of points \(\{y_i\}_{i=1}^{M^{\prime }}\), see Sect. 4.3 for details.

  4. The 2-norm and \(\infty\)-norm of a matrix \(\mathbf{Q} \in \mathbb{C}^{M \times n}\) are defined by

    $$\begin{aligned} \left\| \left| \mathbf{Q} \right| \right\|_{2} := \max_{\mathbf{u} \ne 0} \frac{ \Vert \mathbf{Q}\mathbf{u} \Vert_\mathtt{d} }{ \Vert \mathbf{u} \Vert_\mathtt{d} } = \left( \lambda_{\max} (\mathbf{Q}^{\dagger} \mathbf{Q}) \right)^{1/2}, \quad \left\| \left| \mathbf{Q} \right| \right\|_\infty := \max_{1 \le k \le M} \sum_{\ell=1}^{n} | q_{k\ell} |, \end{aligned}$$

    where \(q_{k\ell}\) denotes the \((k,\ell)\) entry of \(\mathbf{Q}\).
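    As a quick sanity check of these definitions, the following minimal sketch verifies both identities with NumPy, assuming the discrete norm in the 2-norm quotient is the standard Euclidean one; the matrix here is random and purely illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    Q = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))

    # |||Q|||_2 = sqrt(lambda_max(Q^dagger Q)), the largest singular value
    two_norm = np.sqrt(np.max(np.linalg.eigvalsh(Q.conj().T @ Q)))
    assert np.isclose(two_norm, np.linalg.norm(Q, 2))

    # |||Q|||_inf = the maximum absolute row sum
    inf_norm = np.max(np.sum(np.abs(Q), axis=1))
    assert np.isclose(inf_norm, np.linalg.norm(Q, np.inf))
    ```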

  5. Our numerical experiments are carried out in double precision arithmetic. This translates into a double precision computation of the quantity \(\left\| h_{\mu} - \mathcal{P}_{n} h_\mu \right\|_\mathtt{d}^2\) found in step 3b of Algorithm 4 (RB-Greedy Algorithm), and hence an accuracy of about \(10^{-7}\) in the computation of its square root. These observations motivate choosing the greedy error tolerance \(\epsilon = 10^{-6}\). References [53, 54] address this issue in greater detail and propose alternatives for improving error computations.

  6. See the discussion in Sect. 4.3 related to the subtleties involved when sampling the RB functions at a set different from those used to compute the underlying quadratures to build the RB itself.

References

  1. Patera, A., Rozza, G.: Reduced Basis Approximation and a Posteriori Error Estimation for Parametrized Partial Differential Equations, to appear in (tentative rubric) MIT Pappalardo Graduate Monographs in Mechanical Engineering. http://augustine.mit.edu/methodology/methodology_bookPartI.htm

  2. Maday, Y., Nguyen, N.C., Patera, A.T., Pau, S.H.: A general multipurpose interpolation procedure: the magic points. Commun. Pure Appl. Anal. 8, 383–404 (2009)

  3. Barrault, M., Maday, Y., Nguyen, N.C., Patera, A.T.: An empirical interpolation method: application to efficient reduced-basis discretization of partial differential equations. Comptes Rendus Mathematique 339, 667–672 (2004). doi:10.1016/j.crma.2004.08.006

  4. Chaturantabut, S., Sorensen, D.C.: Nonlinear model reduction via discrete empirical interpolation. SIAM J. Sci. Comput. 32(5), 2737–2764 (2010). doi:10.1137/090766498

  5. Chaturantabut, S., Sorensen, D.: Discrete empirical interpolation for nonlinear model reduction. In: Proceedings of the 48th IEEE Conference on Decision and Control, held jointly with the 28th Chinese Control Conference (CDC/CCC 2009), pp. 4316–4321 (2009). doi:10.1109/CDC.2009.5400045

  6. Hinze, M., Volkwein, S.: Proper orthogonal decomposition surrogate models for nonlinear dynamical systems: error estimates and suboptimal control. In: Dimension Reduction of Large-Scale Systems, Vol. 45 of Lecture Notes in Computational Science and Engineering, pp. 261–306. Springer, Berlin (2005)

  7. Manolakis, D., Ingle, V., Kogon, S.: Statistical and Adaptive Signal Processing: Spectral Estimation, Signal Modeling, Adaptive Filtering, and Array Processing. Artech House Signal Processing Library, Artech House (2000). http://books.google.com/books?id=3RQfAQAAIAAJ

  8. Wainstein, L.A., Zubakov, V.D.: Extraction of Signals from Noise. Prentice-Hall, London (1962)

  9. Eftang, J.L., Stamm, B.: Parameter multi-domain ‘hp’ empirical interpolation. Int. J. Num. Meth. Eng. 90, 412–428 (2012). doi:10.1002/nme.3327

  10. Aanonsen, T.O.: Empirical interpolation with application to reduced basis approximations. Ph.D. thesis, Norwegian University of Science and Technology (2009). http://ntnu.diva-portal.org/smash/record.jsf?pid=diva2:348771

  11. Pinkus, A.: N-widths in Approximation Theory. Springer, Amsterdam (1985)

  12. Binev, P., Cohen, A., Dahmen, W., DeVore, R.A., Petrova, G., Wojtaszczyk, P.: Convergence rates for greedy algorithms in reduced basis methods. SIAM J. Math. Anal. 43(3), 1457–1472 (2011). http://dblp.uni-trier.de/db/journals/siamma/siamma43.html#BinevCDDPW11

  13. DeVore, R., Petrova, G., Wojtaszczyk, P.: Greedy algorithms for reduced bases in Banach spaces. arXiv preprint arXiv:1204.2290

  14. Eftang, J.L., Patera, A.T., Ronquist, E.M.: An hp certified reduced basis method for parametrized elliptic partial differential equations. SIAM J. Sci. Comput. 32, 3170–3200 (2010)

  15. Bui-Thanh, T.: Model-constrained optimization methods for reduction of parameterized large-scale systems. Ph.D. thesis, Massachusetts Institute of Technology

  16. Haasdonk, B., Dihlmann, M., Ohlberger, M.: A training set and multiple bases generation approach for parametrized model reduction based on adaptive grids in parameter space. Technical Report, SRC SimTech

  17. Caudill, S., Field, S.E., Galley, C.R., Herrmann, F., Tiglio, M.: Reduced basis representations of multi-mode black hole ringdown gravitational waves. Class. Quantum Gravity 29, 095016 (2012). arXiv:1109.5642

  18. Field, S.E., Galley, C.R., Herrmann, F., Hesthaven, J.S., Ochsner, E., Tiglio, M.: Reduced basis catalogs for gravitational wave templates. Phys. Rev. Lett. 106, 221102 (2011). doi:10.1103/PhysRevLett.106.221102, arXiv:1101.3765

  19. Field, S.E., Galley, C.R., Ochsner, E.: Towards beating the curse of dimensionality for gravitational waves using reduced basis. Phys. Rev. D 86, 084046 (2012). doi:10.1103/PhysRevD.86.084046

  20. Cannon, K., Chapman, A., Hanna, C., Keppel, D., Searle, A.C., et al.: Singular value decomposition applied to compact binary coalescence gravitational-wave signals. Phys. Rev. D 82, 044025 (2010). doi:10.1103/PhysRevD.82.044025, arXiv:1005.0012

  21. Cannon, K., Hanna, C., Keppel, D.: Efficiently enclosing the compact binary parameter space by singular-value decomposition. arXiv:1101.4939

  22. Xu, J., Zikatanov, L.: Some observations on Babuška and Brezzi theories. Numer. Math. 94(1), 195–202 (2003). doi:10.1007/s002110100308

  23. Szyld, D.B.: The many proofs of an identity on the norm of oblique projections. Numer. Algorithms 42(3–4), 309–323 (2006). doi:10.1007/s11075-006-9046-2

  24. Quarteroni, A., Sacco, R., Saleri, F.: Numerical Mathematics. Springer, Berlin (2010)

  25. Barish, B.C., Weiss, R.: LIGO and the detection of gravitational waves. Phys. Today 52(10), 44–50 (1999)

  26. Waldmann, S.J.: Status of LIGO at the start of the fifth science run. Class. Quantum Gravity 23, S653–S660 (2006)

  27. Hild, S.: The status of GEO 600. Class. Quantum Gravity 23, S643–S651 (2006)

  28. Acernese, F., et al.: The Virgo status. Class. Quantum Gravity 23, S635–S642 (2006)

  29. Abbott, B., et al.: LIGO: the Laser Interferometer Gravitational-Wave Observatory. arXiv:0711.3041

  30. Abadie, J., et al.: Predictions for the rates of compact binary coalescences observable by ground-based gravitational-wave detectors. Class. Quantum Gravity 27, 173001 (2010). doi:10.1088/0264-9381/27/17/173001

  31. Centrella, J., Baker, J.G., Kelly, B.J., van Meter, J.R.: Black-hole binaries, gravitational waves, and numerical relativity. Rev. Mod. Phys. 82, 3069 (2010). doi:10.1103/RevModPhys.82.3069, arXiv:1010.5260

  32. Pretorius, F.: Binary black hole coalescence. arXiv:0710.1338

  33. Baumgarte, T.W., Shapiro, S.L.: Numerical relativity and compact binaries. Phys. Rep. 376(2), 41–131 (2003). arXiv:gr-qc/0211028

  34. Sarbach, O., Tiglio, M.: Continuum and discrete initial-boundary-value problems and Einstein's field equations. Living Rev. Relativ. (to appear). arXiv:1203.6443

  35. Blanchet, L.: Gravitational radiation from post-Newtonian sources and inspiralling compact binaries. Living Rev. Relativ. (2006). doi:10.12942/lrr-2006-4

  36. Blanchet, L., Damour, T., Iyer, B.R.: Gravitational waves from inspiralling compact binaries: energy loss and waveform to second post-Newtonian order. Phys. Rev. D 51, 5360 (1995). doi:10.1103/PhysRevD.51.5360, arXiv:gr-qc/9501029

  37. Will, C.M., Wiseman, A.G.: Gravitational radiation from compact binary systems: gravitational waveforms and energy loss to second post-Newtonian order. Phys. Rev. D 54, 4813–4848 (1996). arXiv:gr-qc/9608012

  38. Allen, B., Anderson, W.G., Brady, P.R., Brown, D.A., Creighton, J.D.: FINDCHIRP: an algorithm for detection of gravitational waves from inspiraling compact binaries. arXiv:gr-qc/0509116

  39. Tu, L.-C., Li, Q., Wang, Q.-L., Shao, C.-G., Yang, S.-Q., et al.: New determination of the gravitational constant G with time-of-swing method. Phys. Rev. D 82, 022001 (2010). doi:10.1103/PhysRevD.82.022001

  40. Owen, B.J.: Search templates for gravitational waves from inspiraling binaries: choice of template spacing. Phys. Rev. D 53, 6749 (1996)

  41. Owen, B.J., Sathyaprakash, B.S.: Matched filtering of gravitational waves from inspiraling compact binaries: computational cost and template placement. Phys. Rev. D 60, 022002 (1999). arXiv:gr-qc/9903108

  42. Babak, S., Balasubramanian, R., Churches, D., Cokelaer, T., Sathyaprakash, B.S.: A template bank to search for gravitational waves from inspiralling compact binaries: I. Physical models. Class. Quantum Gravity 23, 5477–5504 (2006). doi:10.1088/0264-9381/23/18/002, arXiv:gr-qc/0604037

  43. Finn, L.S.: Detection, measurement and gravitational radiation. Phys. Rev. D 46, 5236–5249 (1992). doi:10.1103/PhysRevD.46.5236, arXiv:gr-qc/9209010

  44. Maggiore, M.: Gravitational Waves Volume 1: Theory and Experiments. Oxford University Press, Oxford (2008)

  45. Abbott, B., et al.: LIGO: the laser interferometer gravitational-wave observatory. Rep. Prog. Phys. 72, 076901 (2009). doi:10.1088/0034-4885/72/7/076901, arXiv:0711.3041

  46. Ajith, P., Bose, S.: Estimating the parameters of non-spinning binary black holes using ground-based gravitational-wave detectors: statistical errors. Phys. Rev. D 79, 084032 (2009). doi:10.1103/PhysRevD.79.084032, arXiv:0901.4936

  47. Abbott, B., et al.: Einstein@Home search for periodic gravitational waves in LIGO S4 data. Phys. Rev. D 79, 022001 (2009). doi:10.1103/PhysRevD.79.022001

  48. Brady, P.R., Creighton, T.: Searching for periodic sources with LIGO. II. Hierarchical searches. Phys. Rev. D 61, 082001 (2000). doi:10.1103/PhysRevD.61.082001, arXiv:gr-qc/9812014

  49. Vallisneri, M.: Beyond Fisher: exact sampling distributions of the maximum-likelihood estimator in gravitational-wave parameter estimation. Phys. Rev. Lett. 107, 191104 (2011). doi:10.1103/PhysRevLett.107.191104, arXiv:1108.1158

  50. Kanner, J., et al.: LOOC UP: locating and observing optical counterparts to gravitational wave bursts. Class. Quantum Gravity 25, 184034 (2008). doi:10.1088/0264-9381/25/18/184034, arXiv:0803.0312

  51. Shawhan, P.S., LIGO Scientific Collaboration, Virgo Collaboration: LOOC-UP: seeking optical counterparts to gravitational-wave signal candidates. In: Bulletin of the American Astronomical Society, Vol. 42, p. 230 (2010)

  52. Cannon, K., Cariou, R., Chapman, A., Crispin-Ortuzar, M., Fotopoulos, N., et al.: Toward early-warning detection of gravitational waves from compact binary coalescence. Astrophys. J. 748, 136 (2012). arXiv:1107.2665

  53. Casenave, F.: Accurate a posteriori error evaluation in the reduced basis method. Comptes Rendus Mathematique 350(9–10), 539–542 (2012). doi:10.1016/j.crma.2012.05.012. http://www.sciencedirect.com/science/article/pii/S1631073X12001483

  54. Canuto, C., Tonn, T., Urban, K.: A posteriori error analysis of the reduced basis method for nonaffine parametrized nonlinear PDEs. SIAM J. Numer. Anal. 47(3), 2001–2022 (2009). doi:10.1137/080724812

  55. Press, W.H., Flannery, B.P., Teukolsky, S.A., Vetterling, W.T.: Numerical Recipes, 2nd edn. Cambridge University Press, New York (1992)


Acknowledgments

This work has been supported in part by NSF Grants DMS-0807811, PHY0801213, PHY1005632, and DMS-1109325 to the University of Maryland, and PHY1125915 to the University of California at Santa Barbara. We thank Priscilla Canizares for comments on the manuscript and Chad Galley for useful discussions. MT and SEF thank the Kavli Institute for Theoretical Physics, University of California at Santa Barbara, where this work was completed, for its hospitality. H. A. and M. T. thank Tryst DC, where parts of this work were done, for its hospitality.

Author information

Correspondence to Harbir Antil.

Appendices

Appendix 1: Greedy and EIM Algorithms

1.1 Greedy Algorithm

For any \(\epsilon > 0\) and training set \(\mathcal{T }_K\) the greedy algorithm to build a reduced basis is as follows:

Algorithm 4

(RB Greedy Algorithm) \(\left[ \{e_\ell\}_{\ell=1}^n, \{\mu_\ell\}_{\ell=1}^n \right]\) = RB-Greedy (\(\epsilon\), \(\mathcal{T}_K\))

  (1) Set \(n=0\) and define \(\sigma_{0}(\mathcal{F}_{\mathrm{train}};\mathcal{H}) := 1\)

  (2) Choose an arbitrary \(\mu_1 \in \mathcal{T}_K\) and set \(e_1 := h_{\mu_1}\). Comment: \(\Vert h_{\mu_1}\Vert_\mathtt{d} = 1\)

  (3) do, while \(\sigma_{n}(\mathcal{F}_{\mathrm{train}};\mathcal{H}) \ge \epsilon\)

    (a) \(n = n + 1\)

    (b) \(\sigma_{n}(h_{\mu}) := \left\| h_{\mu} - \mathcal{P}_{n} h_\mu \right\|_\mathtt{d}\) for all \(\mu \in \mathcal{T}_K\)

    (c) \(\sigma_{n}(\mathcal{F}_{\mathrm{train}};\mathcal{H}) = \sup_{\mu \in \mathcal{T}_K} \left\{ \sigma_{n}(h_{\mu}) \right\}\)

    (d) \(\mu_{n+1} := \arg\sup_{\mu \in \mathcal{T}_K} \left\{ \sigma_{n}(h_\mu) \right\}\) (greedy sweep)

    (e) \(e_{n+1} := h_{\mu_{n+1}} - \mathcal{P}_{n} h_{\mu_{n+1}}\) (Gram-Schmidt orthogonalization)

    (f) \(e_{n+1} := e_{n+1} / \Vert e_{n+1} \Vert_\mathtt{d}\) (normalization)
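As a concrete illustration, the following is a minimal NumPy sketch of Algorithm 4. It assumes the training set is supplied as rows of a complex array sampled at \(M\) quadrature nodes with positive weights, so that the discrete inner product is \(\langle a, b \rangle_\mathtt{d} = \sum_i w_i \bar{a}_i b_i\); for clarity it uses classical rather than modified Gram-Schmidt, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def rb_greedy(training, w, eps):
    """Minimal sketch of Algorithm 4 (RB-Greedy).

    training : (K, M) complex array; rows are normalized functions h_mu
    w        : (M,) positive quadrature weights defining <a, b>_d
    eps      : greedy error tolerance
    Returns the orthonormal reduced basis (rows) and the greedy picks.
    """
    dot = lambda a, b: np.sum(w * np.conj(a) * b)   # discrete inner product
    norm = lambda a: np.sqrt(dot(a, a).real)

    picks = [0]                                     # step (2): arbitrary seed
    basis = [training[0] / norm(training[0])]
    sigma = 1.0                                     # step (1): sigma_0 := 1
    while sigma >= eps:
        B = np.array(basis)
        # step (b): projection errors for every training function
        coeffs = np.array([[dot(e, h) for e in B] for h in training])
        proj = coeffs @ B
        errs = np.array([norm(h - p) for h, p in zip(training, proj)])
        k = int(np.argmax(errs))                    # steps (c)-(d): greedy sweep
        sigma = errs[k]
        if sigma < eps:
            break
        e_new = training[k] - proj[k]               # step (e): Gram-Schmidt
        basis.append(e_new / norm(e_new))           # step (f): normalization
        picks.append(k)
    return np.array(basis), picks
```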

Remark 8

In step 3b the error is computed exactly as \(\left\| h_{\mu} - \mathcal{P}_{n} h_\mu \right\|_\mathtt{d}\). In the setting of PDEs, RB methods avoid exact error computations by using parametric error estimators, without computing full solutions. In the context of the empirical interpolation method, Ref. [2] suggests using \(\hat{\sigma}(h_{\mu}) = \Vert h_{\mu} - I_n[h_{\mu}]\Vert_\mathtt{d}\), where \(I_n[h_{\mu}]\) is the empirical interpolant. Indeed, for applications limited by computational resources, using \(\hat{\sigma}(h_{\mu})\) could be desirable. Within the Hilbert space setting described in this paper, however, computing the exact error \(\sigma(h_{\mu})\) results in a reduced basis space which more accurately approximates the full space (either \(\mathcal{F}\) or \(\widetilde{\mathcal{F}}\)), and it should be used whenever possible.

1.2 EIM Algorithm

Algorithm 5

(Selection of DEIM Points) \(\left[ \mathbf{P}, \{ p_i \}_{i=1}^n \right]\) = DEIM (\(\mathbf{V}\), \(\{ x_k \}_{k=1}^M\)). Comment: the column vectors of \(\mathbf{V}\) must be linearly independent

  (1) \(j = \mathrm{arg}\max | \mathbf{e}_1 |\). Comment: here \(\mathrm{arg}\max\) takes a vector and returns the index of its largest entry

  (2) Set \(\mathbf{U} = [\mathbf{e}_1]\), \(\mathbf{P} = [\hat{\mathbf{e}}_j]\), \(p_1 = x_j\). Comment: \(\hat{\mathbf{e}}_j\) is a unit column vector with a single unit entry at index \(j\)

  (3) for \(i = 2, \ldots, n\) do

    (4) Solve \((\mathbf{P}^T \mathbf{U}) \mathbf{c} = \mathbf{P}^T \mathbf{e}_i\) for \(\mathbf{c}\)

    (5) \(\mathbf{r} = \mathbf{e}_i - \mathbf{U}\mathbf{c}\)

    (6) \(j = \mathrm{arg}\max | \mathbf{r} |\)

    (7) Set \(\mathbf{U} = [\mathbf{U} \quad \mathbf{r}]\), \(\mathbf{P} = [\mathbf{P} \quad \hat{\mathbf{e}}_j]\), \(p_i = x_j\)

The DEIM algorithm described above is nearly identical to the one given in Ref. [4], with the exception of Step 7, which is \(\mathbf{U} = [\mathbf{U} \quad \mathbf{e}_i]\) in that reference. In Appendix 3 we show these algorithms to be equivalent and, furthermore, we exhibit a relationship between DEIM and Gaussian elimination with partial pivoting. Owing to the lower triangular form of \(\mathbf{P}^T \mathbf{U}\), we show in Appendix 2 that Algorithm 5 has a reduced computational cost. A compact NumPy transcription of Algorithm 5 is sketched below.
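The following is a sketch, assuming the basis vectors \(\mathbf{e}_1, \ldots, \mathbf{e}_n\) are stored as the columns of an array V; the lower triangular structure of \(\mathbf{P}^T \mathbf{U}\) is exploited through a triangular solve, and the function name is illustrative.

```python
import numpy as np
from scipy.linalg import solve_triangular

def deim_points(V):
    """Sketch of Algorithm 5: select DEIM indices from the columns of V.

    V : (M, n) array whose columns e_1, ..., e_n are linearly independent
    Returns the selected row indices p_1, ..., p_n.
    """
    M, n = V.shape
    j = int(np.argmax(np.abs(V[:, 0])))        # step (1)
    U = V[:, [0]]                              # step (2): U = [e_1]
    p = [j]                                    # p_1 = x_j
    for i in range(1, n):                      # step (3)
        # step (4): (P^T U) c = P^T e_i; P^T U is lower triangular,
        # so each solve costs O(i^2) rather than O(i^3)
        c = solve_triangular(U[p, :], V[p, i], lower=True)
        r = V[:, i] - U @ c                    # step (5)
        j = int(np.argmax(np.abs(r)))          # step (6)
        U = np.column_stack([U, r])            # step (7): append r, not e_i
        p.append(j)
    return p
```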

Appendix 2: Asymptotic FLOP Count

2.1 DEIM FLOP Count

In Algorithm 5, \(\mathbf{P}^T\mathbf{U}\) is lower triangular, so summed over \(m\) iterations Step (4) costs \(\mathcal{O}({m^3})\) operations, while Step (5) costs \(\mathcal{O}({Mm})\) subtractions and \(\mathcal{O}({Mm^2})\) operations for the matrix-vector multiplications, the latter being dominant. The improvement of our implementation of Algorithm 5 over that of [4] is that the linear solves scale as \(\mathcal{O}({m^3})\) instead of \(\mathcal{O}({m^4})\). The total DEIM cost is therefore \(\mathcal{O}({Mm^2+m^3})\).

2.2 ROQ FLOP Count

The dominant cost (assuming \(M > m\)) of computing the reduced order quadrature weights \(({\varvec{\omega}}^{\mathrm{ROQ}})^T = {\varvec{\omega}}^T \widetilde{\mathbf{V}} (\widetilde{\mathbf{P}}^T \widetilde{\mathbf{V}})^{-1}\) in (21) is \(\mathcal{O}({Mm^2})\). Here we used the fact that applying \(\widetilde{\mathbf{P}}^T\) to \(\widetilde{\mathbf{V}}\) amounts to selecting the \(m\) rows corresponding to the DEIM points \(\{ \widetilde{p}_\ell \}_{\ell=1}^m\).
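A sketch of this computation in NumPy, assuming the full quadrature weights are stored in w (length \(M\)), the interpolation basis \(\widetilde{\mathbf{V}}\) in the columns of Vt, and the DEIM indices \(\{\widetilde{p}_\ell\}\) in the list p; rather than forming the inverse, one solves the transposed \(m \times m\) system:

```python
import numpy as np

def roq_weights(w, Vt, p):
    """Sketch of Eq. (21): (omega_ROQ)^T = omega^T Vt (Pt^T Vt)^{-1}.

    w  : (M,) full quadrature weights
    Vt : (M, m) interpolation basis
    p  : list of m DEIM indices; applying Pt^T just selects these rows
    """
    b = w @ Vt                  # omega^T Vt, an (m,) vector
    A = Vt[p, :]                # Pt^T Vt: the m DEIM rows of Vt
    # omega_ROQ^T A = b^T  is equivalent to  A^T omega_ROQ = b
    return np.linalg.solve(A.T, b)

# Usage sketch: approximate the full quadrature of an integrand g
# (sampled on all M nodes) using only its m values at the DEIM nodes:
# approx = np.sum(roq_weights(w, Vt, p) * g[p])
```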

The overall cost to compute the ROQ using the one-step approach (Algorithm 4 [Path #1], with modified Gram-Schmidt in the greedy) is then

$$\begin{aligned} \mathcal{O}({K^2 M \widehat{m} + M \widehat{m}^2}), \end{aligned}$$

and using the two-step approach (Algorithm 4 [Path #2]) it is

$$\begin{aligned} \mathcal{O}({n^2 M m + K M n + M m^2}). \end{aligned}$$

Here \(m\) and \(\widehat{m}\) denote the number of reduced basis elements used in the approximation of \(\widetilde{\mathcal{F}}\) via the two-step and direct approaches, respectively, and \(n\) is the number of reduced basis elements used to approximate \(\mathcal{F}\).

Appendix 3: DEIM and LU Decomposition with Partial Pivoting

The original DEIM described in [4] has as Step 7

$$\begin{aligned} \mathbf{U} = [\mathbf{U} \quad \mathbf{e}_i], \quad i = 2, \ldots, n. \end{aligned}$$

Since the columns of \(\mathbf{U}\) are linearly independent, \(\mathbf{P}^T\mathbf{U}\) is always invertible, but in this form the matrix \(\mathbf{P}^T\mathbf{U}\) can be dense. Next we show that the slight modification leading to Algorithm 5, namely setting at every iteration \(i = 2, \ldots, n\), with \(\mathbf{r}_i := \mathbf{r}\),

$$\begin{aligned} \mathbf{U} := [\mathbf{U} \quad \mathbf{r}_i], \end{aligned}$$

gives the same result. One advantage of writing DEIM in this form is that the matrix \(\mathbf{P}^T\mathbf{U}\) is lower triangular, so the system can be solved for \(\mathbf{c}\) with \(\mathcal{O}({n^2})\) operations, whereas using \(\mathbf{e}_i\) yields a dense matrix requiring \(\mathcal{O}({n^3})\) operations.

Next we prove that we get the same result using both formats.

Proposition 1

In [4], Step 7 of Algorithm 5 was presented as \(\mathbf{U} = [\mathbf{U} \quad \mathbf{e}_i]\), \(i = 2, \ldots, n\); we can replace it by

$$\begin{aligned} \mathbf{U }= [ \mathbf{U }\quad \mathbf{r }_i] , \quad i = 2, {\ldots }, n , \end{aligned}$$

where \(\mathbf{r }_i := \mathbf{r }\), at every iteration, and \(\mathbf{r }\) is as given in Step \(5\) of Algorithm 5.

Proof

Step \(4\) of Algorithm 5, at \(i=n\), gives the coefficient matrix

$$\begin{aligned} \mathbf{P}^T \mathbf{U} = \left( \begin{array}{cccc} \mathbf{e}_1(p_1) &{} \mathbf{e}_2(p_1) &{} \cdots &{} \mathbf{e}_{n-1}(p_1) \\ \mathbf{e}_1(p_2) &{} \mathbf{e}_2(p_2) &{} \cdots &{} \mathbf{e}_{n-1}(p_2) \\ \mathbf{e}_1(p_3) &{} \mathbf{e}_2(p_3) &{} \cdots &{} \mathbf{e}_{n-1}(p_3) \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \mathbf{e}_1(p_{n-1}) &{} \mathbf{e}_2(p_{n-1}) &{} \cdots &{} \mathbf{e}_{n-1}(p_{n-1}) \end{array} \right). \end{aligned}$$
(32)

Next we write the row reduced echelon form, obtained by forward Gaussian elimination, for \((\mathbf{P}^T\mathbf{U})^T\), where the first step gives

$$\begin{aligned} \left( \begin{array}{cccc} \mathbf{e}_1(p_1) &{} 0 &{} \cdots &{} 0 \\ \mathbf{e}_1(p_2) &{} \mathbf{r}_2(p_2) &{} \cdots &{} \mathbf{e}_{n-1}(p_2) - \frac{\mathbf{e}_{n-1}(p_1)}{\mathbf{e}_1(p_1)}\mathbf{e}_1(p_2) \\ \mathbf{e}_1(p_3) &{} \mathbf{r}_2(p_3) &{} \cdots &{} \mathbf{e}_{n-1}(p_3) - \frac{\mathbf{e}_{n-1}(p_1)}{\mathbf{e}_1(p_1)}\mathbf{e}_1(p_3) \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \mathbf{e}_1(p_{n-1}) &{} \mathbf{r}_2(p_{n-1}) &{} \cdots &{} \mathbf{e}_{n-1}(p_{n-1}) - \frac{\mathbf{e}_{n-1}(p_1)}{\mathbf{e}_1(p_1)}\mathbf{e}_1(p_{n-1}) \end{array} \right). \end{aligned}$$

The final row reduced echelon form for \(( \mathbf{P }^T\mathbf{U })^T\) is

$$\begin{aligned} \left( \begin{array}{cccc} \mathbf{e}_1(p_1) &{} 0 &{} \cdots &{} 0 \\ \mathbf{e}_1(p_2) &{} \mathbf{r}_2(p_2) &{} \cdots &{} 0 \\ \mathbf{e}_1(p_3) &{} \mathbf{r}_2(p_3) &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \mathbf{e}_1(p_{n-1}) &{} \mathbf{r}_2(p_{n-1}) &{} \cdots &{} \mathbf{r}_{n-1}(p_{n-1}) \end{array} \right). \end{aligned}$$
(33)

Hence the proposition.

Remark 9

Algorithm 5 has the flavor of Gaussian elimination with row pivoting. We showed the elimination part in Proposition 1; now we show that the DEIM points \(\{ p_1, \ldots, p_n \}\) are in fact the pivots. Our goal is to arrive at the row echelon form (33) of \((\mathbf{P}^T\mathbf{U})^T\) with partial pivoting. Consider the matrix (32). Let \(p_1\) be the location of the first pivot, i.e., of the entry of maximum absolute value in the first column (this is the same as Step 1 in Algorithm 5). Apply the first step of forward elimination as in the proof of Proposition 1. Next compute the second pivot \(p_2\), which is the location of the maximum absolute value in the so-obtained second column. Continuing in this manner we arrive at the matrix in (33).
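Proposition 1 is also easy to check numerically. The following sketch (with an illustrative random basis) runs the selection loop once appending \(\mathbf{e}_i\), as in the original format of [4] with a dense solve, and once appending \(\mathbf{r}_i\), as in Algorithm 5, and confirms that both select the same points:

```python
import numpy as np

def deim_indices(V, append_residual):
    """DEIM selection; Step 7 appends r_i if append_residual is True,
    else e_i (the original format of [4])."""
    j = int(np.argmax(np.abs(V[:, 0])))
    U, p = V[:, [0]], [j]
    for i in range(1, V.shape[1]):
        c = np.linalg.solve(U[p, :], V[p, i])   # dense solve covers both cases
        r = V[:, i] - U @ c
        j = int(np.argmax(np.abs(r)))
        U = np.column_stack([U, r if append_residual else V[:, i]])
        p.append(j)
    return p

rng = np.random.default_rng(0)
V = np.linalg.qr(rng.standard_normal((200, 8))
                 + 1j * rng.standard_normal((200, 8)))[0]
assert deim_indices(V, True) == deim_indices(V, False)
```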

Cite this article

Antil, H., Field, S.E., Herrmann, F. et al. Two-Step Greedy Algorithm for Reduced Order Quadratures. J Sci Comput 57, 604–637 (2013). https://doi.org/10.1007/s10915-013-9722-z