A Kogbetliantz-type algorithm for the hyperbolic SVD

Original Paper, Numerical Algorithms

Abstract

In this paper, a two-sided, parallel Kogbetliantz-type algorithm for the hyperbolic singular value decomposition (HSVD) of real and complex square matrices is developed, under the single assumption that the input matrix, of order n, admits such a decomposition into the product of a unitary matrix, a non-negative diagonal matrix, and a J-unitary matrix, where J is a given diagonal matrix of positive and negative signs. When J = ±I, the proposed algorithm computes the ordinary SVD. The paper's most important contribution, a derivation of formulas for the HSVD of 2 × 2 matrices, is presented first, followed by the details of their implementation in floating-point arithmetic. Next, the effects of the hyperbolic transformations on the columns of the iteration matrix are discussed. These effects then guide a redesign of the dynamic pivot ordering, already a well-established pivot strategy for the ordinary Kogbetliantz algorithm, for the general n × n HSVD. A heuristic but sound convergence criterion is then proposed, which contributes to the high accuracy demonstrated in the numerical tests. The J-Kogbetliantz algorithm presented here is intrinsically slow, but is nevertheless usable for matrices of small orders.
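
For readers who want to see the decomposition from the abstract written out, the following minimal NumPy sketch (an illustration added here, not code from the paper; the convention G = UΣV with a unitary U, a non-negative diagonal Σ, and a J-unitary V, i.e., V*JV = J, is taken from the abstract's wording, and all numerical values are arbitrary) constructs a small matrix that admits an HSVD and verifies the defining properties of its factors.

    import numpy as np

    J = np.diag([1.0, -1.0])   # given diagonal matrix of signs

    # A unitary U, a non-negative diagonal Sigma, and a J-unitary V
    # (for this J, a real hyperbolic rotation); all values are arbitrary.
    a, t = 0.3, 0.7
    U = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]], dtype=complex)
    S = np.diag([2.0, 0.5])
    V = np.array([[np.cosh(t), np.sinh(t)],
                  [np.sinh(t), np.cosh(t)]], dtype=complex)

    G = U @ S @ V              # G admits an HSVD by construction

    print(np.linalg.norm(U.conj().T @ U - np.eye(2)))   # ~0: U is unitary
    print(np.linalg.norm(V.conj().T @ J @ V - J))       # ~0: V is J-unitary
    print(np.linalg.norm(G - U @ S @ V))                #  0: G = U * Sigma * V

When J = ±I, the J-unitary factor is simply unitary and the same check reduces to that of an ordinary SVD.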


Notes

  1. For a = 0, η(a) = 0 instead of a huge negative integer, so taking \(\max\{|a|,\omega\}\) filters out such a.

  2. For example, the minimumNumber operation of the IEEE 754-2019 standard [12, section 9.6].
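
Both notes concern floating-point details. As a rough NumPy sketch (added for illustration; here η is taken to be the binary exponent as reported by frexp, and ω is replaced by an arbitrary small positive threshold, both of which are assumptions rather than the paper's exact definitions), the filtering from Note 1 and the NaN handling of a minimumNumber-like operation from Note 2 can be reproduced as follows.

    import numpy as np

    # Note 1: frexp (like Fortran's EXPONENT intrinsic) reports the exponent
    # of 0.0 as 0 rather than a huge negative integer.
    print(np.frexp(0.0)[1])                      # 0

    # Taking max{|a|, omega} with a small positive omega (an arbitrary
    # stand-in threshold here) prevents zeros from entering the exponent test.
    omega = np.finfo(np.float64).tiny
    a = 0.0
    print(np.frexp(max(abs(a), omega))[1])       # exponent of omega, not 0

    # Note 2: a minimumNumber-style operation returns the numeric operand when
    # the other one is a NaN; np.fmin behaves that way, np.minimum does not.
    print(np.fmin(np.nan, 3.0), np.minimum(np.nan, 3.0))   # 3.0 nan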

References

  1. Anderson, E., Bai, Z., Bischof, C., Blackford, S., Demmel, J., Dongarra, J., Du Croz, J., Greenbaum, A., Hammarling, S., McKenney, A., Sorensen, D.: LAPACK Users’ Guide. Software, Environments and Tools, 3rd edn. Society for Industrial and Applied Mathematics, Philadelphia (1999). https://doi.org/10.1137/1.9780898719604

  2. Baudet, G., Stevenson, D.: Optimal sorting algorithms for parallel computers. IEEE Trans. Comput. C-27(1), 84–87 (1978). https://doi.org/10.1109/TC.1978.1674957

  3. Bečka, M., Okša, G., Vajteršic, M.: Dynamic ordering for a parallel block-Jacobi SVD algorithm. Parallel Comput. 28(2), 243–262 (2002). https://doi.org/10.1016/S0167-8191(01)00138-7

  4. Bojanczyk, A.W., Onn, R., Steinhardt, A.O.: Existence of the hyperbolic singular value decomposition. Linear Algebra Appl. 185(C), 21–30 (1993). https://doi.org/10.1016/0024-3795(93)90202-Y

  5. Charlier, J.P., Vanbegin, M., Van Dooren, P.: On efficient implementations of Kogbetliantz’s algorithm for computing the singular value decomposition. Numer. Math. 52(3), 279–300 (1987). https://doi.org/10.1007/BF01398880

  6. Drmač, Z.: Implementation of Jacobi rotations for accurate singular value computation in floating point arithmetic. SIAM J. Sci. Comput. 18(4), 1200–1222 (1997). https://doi.org/10.1137/S1064827594265095

  7. Hari, V., Matejaš, J.: Accuracy of two SVD algorithms for 2 × 2 triangular matrices. Appl. Math. Comput. 210(1), 232–257 (2009). https://doi.org/10.1016/j.amc.2008.12.086

  8. Hari, V., Singer, S., Singer, S.: Block-oriented J-Jacobi methods for Hermitian matrices. Linear Algebra Appl. 433(8–10), 1491–1512 (2010). https://doi.org/10.1016/j.laa.2010.06.032

  9. Hari, V., Singer, S., Singer, S.: Full block J-Jacobi method for Hermitian matrices. Linear Algebra Appl. 444, 1–27 (2014). https://doi.org/10.1016/j.laa.2013.11.028

  10. Hari, V., Veselić, K.: On Jacobi methods for singular value decompositions. SIAM J. Sci. Stat. Comput. 8(5), 741–754 (1987). https://doi.org/10.1137/0908064

  11. Hari, V., Zadelj-Martić, V.: Parallelizing the Kogbetliantz method: A first attempt. J. Numer. Anal. Ind. Appl. Math. 2(1–2), 49–66 (2007)

  12. IEEE Computer Society: 754-2019 - IEEE Standard for Floating-Point Arithmetic. IEEE, New York. https://doi.org/10.1109/IEEESTD.2019.8766229 (2019)

  13. ISO/IEC JTC1/SC22/WG5: ISO/IEC 1539-1:2018(en) Information Technology — Programming languages — Fortran — Part 1: Base language, 4th edn. ISO, Geneva (2018)

  14. Kogbetliantz, E.G.: Solution of linear equations by diagonalization of coefficients matrix. Quart. Appl. Math. 13(2), 123–132 (1955). https://doi.org/10.1090/qam/88795

  15. Kulikov, G. Yu., Kulikova, M.V.: Hyperbolic-singular-value-decomposition-based square-root accurate continuous-discrete extended-unscented Kalman filters for estimating continuous-time stochastic models with discrete measurements. Int. J. Robust Nonlinear Control 30, 2033–2058 (2020). https://doi.org/10.1002/rnc.4862

  16. Kulikova, M.V.: Hyperbolic SVD-based Kalman filtering for Chandrasekhar recursion. IET Control Theory Appl. 13(10), 1525–1531 (2019). https://doi.org/10.1049/iet-cta.2018.5864

  17. Kulikova, M.V.: Square-root approach for Chandrasekhar-based maximum correntropy Kalman filtering. IEEE Signal Process. Lett. 26(12), 1803–1807 (2019). https://doi.org/10.1109/LSP.2019.2948257

  18. Mackey, D.S., Mackey, N., Tisseur, F.: Structured factorizations in scalar product spaces. SIAM J. Matrix Anal. Appl. 27(3), 821–850 (2005). https://doi.org/10.1137/040619363

  19. Matejaš, J., Hari, V.: Accuracy of the Kogbetliantz method for scaled diagonally dominant triangular matrices. Appl. Math. Comput. 217(8), 3726–3746 (2010). https://doi.org/10.1016/j.amc.2010.09.020

  20. Matejaš, J., Hari, V.: On high relative accuracy of the Kogbetliantz method. Linear Algebra Appl. 464, 100–129 (2015). https://doi.org/10.1016/j.laa.2014.02.024

  21. Novaković, V.: A hierarchically blocked Jacobi SVD algorithm for single and multiple graphics processing units. SIAM J. Sci. Comput. 37(1), C1–C30 (2015). https://doi.org/10.1137/140952429

  22. Novaković, V.: Batched computation of the singular value decompositions of order two by the AVX-512 vectorization. Parallel Process. Lett. 30(4), 2050015 (2020). https://doi.org/10.1142/S0129626420500152

  23. Novaković, V., Singer, S.: A GPU-based hyperbolic SVD algorithm. BIT 51(4), 1009–1030 (2011). https://doi.org/10.1007/s10543-011-0333-5

  24. NVIDIA Corp.: CUDA C++ Programming Guide v10.2.89. https://docs.nvidia.com/cuda/cuda-c-programming-guide/ (2019)

  25. Okša, G., Yamamoto, Y., Bečka, M., Vajteršic, M.: Asymptotic quadratic convergence of the two-sided serial and parallel block-Jacobi SVD algorithm. SIAM J. Matrix Anal. Appl. 40(2), 639–671 (2019). https://doi.org/10.1137/18M1222727

  26. Onn, R., Steinhardt, A.O., Bojanczyk, A.W.: The hyperbolic singular value decomposition and applications. IEEE Trans. Signal Process. 39(7), 1575–1588 (1991). https://doi.org/10.1109/78.134396

  27. OpenMP ARB: OpenMP Application Programming Interface Version 5.0. https://www.openmp.org/wp-content/uploads/OpenMP-API-Specification-5.0.pdf (2018)

  28. Singer, S.: Indefinite QR factorization. BIT 46(1), 141–161 (2006). https://doi.org/10.1007/s10543-006-0044-5

  29. Singer, S., Di Napoli, E., Novaković, V., Čaklović, G.: The LAPW method with eigendecomposition based on the Hari–Zimmermann generalized hyperbolic SVD. SIAM J. Sci. Comput. 42(5), C265–C293 (2020). https://doi.org/10.1137/19M1277813

  30. Singer, S., Singer, S., Novaković, V., Davidović, D., Bokulić, K., Ušćumlić, A.: Three-level parallel J-Jacobi algorithms for Hermitian matrices. Appl. Math. Comput. 218(9), 5704–5725 (2012). https://doi.org/10.1016/j.amc.2011.11.067

  31. Singer, S., Singer, S., Novaković, V., Ušćumlić, A., Dunjko, V.: Novel modifications of parallel Jacobi algorithms. Numer. Algorithms 59(1), 1–27 (2012). https://doi.org/10.1007/s11075-011-9473-6

  32. Slapničar, I.: Accurate symmetric eigenreduction by a Jacobi method. Ph.D. thesis, FernUniversität–Gesamthochschule, Hagen (1992)

  33. Slapničar, I.: Componentwise analysis of direct factorization of real symmetric and Hermitian matrices. Linear Algebra Appl. 272, 227–275 (1998). https://doi.org/10.1016/S0024-3795(97)00334-0

  34. Stewart, G.W.: An updating algorithm for subspace tracking. IEEE Trans. Signal Process. 40(6), 1535–1541 (1992). https://doi.org/10.1109/78.139256

  35. Veselić, K.: A Jacobi eigenreduction algorithm for definite matrix pairs. Numer. Math. 64(1), 241–269 (1993). https://doi.org/10.1007/BF01388689

  36. Zha, H.: A note on the existence of the hyperbolic singular value decomposition. Linear Algebra Appl. 240, 199–205 (1996). https://doi.org/10.1016/0024-3795(94)00197-9

Acknowledgements

We are much indebted to Saša Singer for his suggestions on the paper's subject, and to the anonymous referee, whose comments significantly improved the presentation of the paper.

Funding

This work has been supported in part by the Croatian Science Foundation under the project IP–2014–09–3670 “Matrix Factorizations and Block Diagonalization Algorithms” (https://web.math.pmf.unizg.hr/mfbda/).

Author information

Corresponding author

Correspondence to Vedran Novaković.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Author contribution

The second author formulated the research topic, reviewed the literature, plotted the figures and proofread the manuscript. The first author performed the rest of the research tasks.

Data availability

Several animations and the full testing dataset with the matrix outputs are available at http://euridika.math.hr:1846/Jacobi/JKogb/ or upon request.

Code availability

The source code is available in the https://github.com/venovako/JKogb repository.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work is dedicated to the memory of Sanja Singer.

Appendix: Proofs of Lemmas 3.1 and 3.2

Proof

(Lemma 3.1) Let, for \(1\le\ell\le n\), \(\gamma_{\ell}=\arg(x_{\ell})\) and \(\delta_{\ell}=\arg(y_{\ell})\). Then,

$$ \left[\begin{array}{cc}x_{\ell}&y_{\ell} \end{array}\right]= \left[\begin{array}{cc}e^{\mathrm{i}\gamma_{\ell}}|x_{\ell}|&e^{\mathrm{i}\delta_{\ell}}|y_{\ell}| \end{array}\right] = e^{\mathrm{i}\gamma_{\ell}}\left[\begin{array}{cc}|x_{\ell}|&e^{\mathrm{i}(\delta_{\ell}-\gamma_{\ell})}|y_{\ell}| \end{array}\right]= e^{\mathrm{i}\delta_{\ell}}\left[\begin{array}{cc}e^{\mathrm{i}(\gamma_{\ell}-\delta_{\ell})}|x_{\ell}|&|y_{\ell}| \end{array}\right]. $$

Using the second equality, from the matrix multiplication it follows that

$$ x_{\ell}^{\prime}=e^{\mathrm{i}\gamma_{\ell}^{}}(|x_{\ell}^{}|\cosh\psi+e^{\mathrm{i}(\delta_{\ell}^{}-\gamma_{\ell}^{}-\beta)}|y_{\ell}^{}|\sinh\psi), $$

and using the third equality, from the matrix multiplication it follows that

$$ y_{\ell}^{\prime}=e^{\mathrm{i}\delta_{\ell}^{}}(e^{\mathrm{i}(\gamma_{\ell}^{}-\delta_{\ell}^{}+\beta)}|x_{\ell}^{}|\sinh\psi+|y_{\ell}^{}|\cosh\psi). $$

Since \(|e^{-\mathrm{i}\gamma_{\ell}^{}}x_{\ell}^{\prime}|=|x_{\ell}^{\prime}|\), \(|e^{-\mathrm{i}\delta_{\ell}^{}}y_{\ell}^{\prime}|=|y_{\ell}^{\prime}|\), and \(\cos(-\phi)=\cos\phi\), it holds that

$$ \begin{aligned} |x_{\ell}^{\prime}|^2 &=(|x_{\ell}^{}|\cosh\psi+\cos(\delta_{\ell}^{}-\gamma_{\ell}^{}-\beta)|y_{\ell}^{}|\sinh\psi)^2+(\sin(\delta_{\ell}^{}-\gamma_{\ell}^{}-\beta)|y_{\ell}^{}|\sinh\psi)^2\\ &=|x_{\ell}^{}|^2\cosh^2\psi+|y_{\ell}^{}|^2\sinh^2\psi+\cos(\delta_{\ell}^{}-\gamma_{\ell}^{}-\beta)|x_{\ell}^{}||y_{\ell}^{}|2\cosh\psi\sinh\psi,\\ |y_{\ell}^{\prime}|^2 &=(\cos(\gamma_{\ell}^{}-\delta_{\ell}^{}+\beta)|x_{\ell}^{}|\sinh\psi+|y_{\ell}^{}|\cosh\psi)^2+(\sin(\gamma_{\ell}^{}-\delta_{\ell}^{}+\beta)|x_{\ell}^{}|\sinh\psi)^2\\ &=|x_{\ell}^{}|^2\sinh^2\psi+|y_{\ell}^{}|^2\cosh^2\psi+\cos(\delta_{\ell}^{}-\gamma_{\ell}^{}-\beta)|x_{\ell}^{}||y_{\ell}^{}|2\cosh\psi\sinh\psi. \end{aligned} $$

After grouping the terms, the square of the Frobenius norm of the new \(\ell\)th row is

$$ \begin{aligned} \left\|\left[\begin{array}{cc}x_{\ell}^{\prime} & y_{\ell}^{\prime} \end{array}\right]\right\|_{F}^{2}= |x_{\ell}^{\prime}|^{2}+|y_{\ell}^{\prime}|^{2}&= (\cosh^{2}\psi+\sinh^{2}\psi)(|x_{\ell}^{}|^{2}+|y_{\ell}^{}|^{2})\\ &\quad+2\cos(\delta_{\ell}^{}-\gamma_{\ell}^{}-\beta)|x_{\ell}^{}||y_{\ell}^{}|2\cosh\psi\sinh\psi. \end{aligned} $$
(A.1)

Summing the left side of (A.1) over all \(\ell\), one obtains

$$ \left\|\left[\begin{array}{cc}\mathbf{x}^{\prime}&\mathbf{y}^{\prime} \end{array}\right]\right\|_F^2= \sum\limits_{\ell=1}^n\left( |x_{\ell}^{\prime}|^2+|y_{\ell}^{\prime}|^2\right), $$

which is equal to the right side of (A.1), summed over all \(\ell\),

$$ \sum\limits_{\ell=1}^n\left( (\cosh^2\psi+\sinh^2\psi)(|x_{\ell}^{}|^2+|y_{\ell}^{}|^2)+2\zeta_{\ell}^{}|x_{\ell}^{}||y_{\ell}^{}|2\cosh\psi\sinh\psi\right), $$

where \(-1\le\zeta_{\ell}^{}=\cos(\delta_{\ell}^{}-\gamma_{\ell}^{}-\beta)\le 1\), so \(|\zeta_{\ell}^{}|\le 1\). The last sum can be split into a non-negative part and a remaining part of arbitrary sign,

$$ (\cosh^2\psi+\sinh^2\psi)\sum\limits_{\ell=1}^n(|x_{\ell}^{}|^2+|y_{\ell}^{}|^2)+ 2\cosh\psi\sinh\psi\sum\limits_{\ell=1}^n 2\zeta_{\ell}^{}|x_{\ell}^{}||y_{\ell}^{}|. $$

Using the triangle inequality, and observing that \({\sum }_{\ell =1}^{n}(|x_{\ell }^{}|^{2}+|y_{\ell }^{}|^{2})=\left \|\left [\begin {array}{cc}\mathbf {x}&\mathbf {y} \end {array}\right ]\right \|_{F}^{2}\), this value can be bounded above by

$$ (\cosh^2\psi+\sinh^2\psi)\left\|\left[\begin{array}{cc}\mathbf{x}&\mathbf{y} \end{array}\right]\right\|_F^2+ 2\cosh\psi|\sinh\psi|\sum\limits_{\ell=1}^n 2|x_{\ell}^{}||y_{\ell}^{}|, $$

and below by

$$ (\cosh^2\psi+\sinh^2\psi)\left\|\left[\begin{array}{cc}\mathbf{x}&\mathbf{y} \end{array}\right]\right\|_F^2- 2\cosh\psi|\sinh\psi|{\sum}_{\ell=1}^n 2|x_{\ell}^{}||y_{\ell}^{}|, $$

where both bounds can be simplified by the identities

$$ \cosh^2\psi+\sinh^2\psi=\cosh(2\psi),\qquad 2\cosh\psi|\sinh\psi|=|\sinh(2\psi)|. $$

By the inequality of arithmetic and geometric means, \(2|x_{\ell}^{}||y_{\ell}^{}|\le|x_{\ell}^{}|^{2}+|y_{\ell}^{}|^{2}\), so a further upper bound is reached as

$$ \cosh(2\psi)\left\|\left[\begin{array}{cc}\mathbf{x}&\mathbf{y} \end{array}\right]\right\|_F^2+ |\sinh(2\psi)|\left\|\left[\begin{array}{cc}\mathbf{x}&\mathbf{y} \end{array}\right]\right\|_F^2, $$

and a further lower bound as

$$ \cosh(2\psi)\left\|\left[\begin{array}{cc}\mathbf{x}&\mathbf{y} \end{array}\right]\right\|_F^2- |\sinh(2\psi)|\left\|\left[\begin{array}{cc}\mathbf{x}&\mathbf{y} \end{array}\right]\right\|_F^2, $$

which, after grouping the terms and dividing by \(\left\|\left[\begin{array}{cc}\mathbf{x}&\mathbf{y}\end{array}\right]\right\|_{F}^{2}\), concludes the proof. □
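
As a quick numerical sanity check of Lemma 3.1 (added for illustration, not part of the paper), one can apply a transformation of the form used in the proof, namely \(x_{\ell}^{\prime}=x_{\ell}\cosh\psi+e^{-\mathrm{i}\beta}y_{\ell}\sinh\psi\) and \(y_{\ell}^{\prime}=e^{\mathrm{i}\beta}x_{\ell}\sinh\psi+y_{\ell}\cosh\psi\), to random complex columns and confirm that the ratio of the squared Frobenius norms stays within \([\cosh(2\psi)-|\sinh(2\psi)|,\cosh(2\psi)+|\sinh(2\psi)|]\):

    import numpy as np

    rng = np.random.default_rng(0)
    n, psi, beta = 8, 0.9, 1.3       # arbitrary size, hyperbolic angle, and phase
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    # Transformed columns, in the closed form used in the proof.
    xp = np.cosh(psi) * x + np.exp(-1j * beta) * np.sinh(psi) * y
    yp = np.exp(1j * beta) * np.sinh(psi) * x + np.cosh(psi) * y

    ratio = (np.linalg.norm(xp)**2 + np.linalg.norm(yp)**2) / \
            (np.linalg.norm(x)**2  + np.linalg.norm(y)**2)

    lo = np.cosh(2 * psi) - abs(np.sinh(2 * psi))
    hi = np.cosh(2 * psi) + abs(np.sinh(2 * psi))
    print(lo <= ratio <= hi)         # True, as Lemma 3.1 asserts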

Proof

(Lemma 3.2) Note that \(\cosh (2\psi )+|\sinh (2\psi )|\ge \cosh (2\psi )\ge 1\), and

$$ \begin{aligned} 1&=\cosh^2(2\psi)-\sinh^2(2\psi)\\ &=(\cosh(2\psi)-|\sinh(2\psi)|)\cdot(\cosh(2\psi)+|\sinh(2\psi)|)\\ &\ge\cosh(2\psi)-|\sinh(2\psi)|>0. \end{aligned} $$

If ψ = 0, the equalities in the bounds established in Lemma 3.1 hold trivially. Also, if both equalities hold simultaneously, ψ = 0.

The inequality of arithmetic and geometric means in the proof of Lemma 3.1 turns into equality if and only if \(|x_{\ell}|=|y_{\ell}|\) for all \(\ell\). When \(|x_{\ell}||y_{\ell}|\ne 0\), it has to hold that \(\zeta_{\ell}=\zeta\), where \(\zeta=\pm\text{sign}(\sinh\psi)\), to reach the upper or the lower bound, respectively. From \(\zeta_{\ell}=\pm 1\) it follows that \(\delta_{\ell}=\gamma_{\ell}+\beta+l\pi\) for a fixed \(l\in\mathbb{Z}\), i.e., \(x_{\ell}=e^{\mathrm{i}\gamma_{\ell}}|x_{\ell}|\) and \(y_{\ell}=\pm e^{\mathrm{i}\beta}e^{\mathrm{i}\gamma_{\ell}}|x_{\ell}|\), so \(y_{\ell}=\pm e^{\mathrm{i}\beta}x_{\ell}\) for all \(\ell\). Conversely, \(\mathbf{y}=\pm e^{\mathrm{i}\beta}\mathbf{x}\) implies, for all \(\ell\), that \(|x_{\ell}|=|y_{\ell}|\) and that \(\zeta_{\ell}\) is a constant \(\zeta=\pm 1\), so one of the two bounds is reached. □
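
The equality case of Lemma 3.2 can also be observed numerically (again only an added illustration): choosing \(\mathbf{y}=e^{\mathrm{i}\beta}\mathbf{x}\) and \(\psi>0\) in the sketch above makes the ratio attain the upper bound \(\cosh(2\psi)+|\sinh(2\psi)|\) up to rounding:

    import numpy as np

    rng = np.random.default_rng(1)
    n, psi, beta = 8, 0.9, 1.3
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    y = np.exp(1j * beta) * x        # the equality case y = +e^{i*beta} x

    xp = np.cosh(psi) * x + np.exp(-1j * beta) * np.sinh(psi) * y
    yp = np.exp(1j * beta) * np.sinh(psi) * x + np.cosh(psi) * y

    ratio = (np.linalg.norm(xp)**2 + np.linalg.norm(yp)**2) / \
            (np.linalg.norm(x)**2  + np.linalg.norm(y)**2)
    print(np.isclose(ratio, np.cosh(2 * psi) + np.sinh(2 * psi)))   # True: upper bound attained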

About this article

Cite this article

Novaković, V., Singer, S. A Kogbetliantz-type algorithm for the hyperbolic SVD. Numer Algor 90, 523–561 (2022). https://doi.org/10.1007/s11075-021-01197-4
