
Duality and Difference Operators for Matrix Valued Discrete Polynomials on the Nonnegative Integers

Published in: Constructive Approximation

Abstract

In this paper we introduce a notion of duality for matrix valued orthogonal polynomials with respect to a measure supported on the nonnegative integers. We show that the dual families are closely related to certain difference operators acting on the matrix orthogonal polynomials. These operators belong to the so-called Fourier algebras, which play a key role in the construction of the families. In order to illustrate duality, we describe a family of Charlier type matrix orthogonal polynomials with explicit shift operators, which allow us to find explicit formulas for three-term recurrences, difference operators and squared norms. These are the essential ingredients for the construction of different dual families.


Fig. 1


Notes

  1. Most quantities will now depend on this new parameter \(\lambda \).

References

  1. Aldenhoven, N., Koelink, E., de los Ríos, A.M.: Matrix-valued little \(q\)-Jacobi polynomials. J. Approx. Theory 193, 164–183 (2015)

  2. Aldenhoven, N., Koelink, E., Román, P.: Matrix-valued orthogonal polynomials related to the quantum analogue of \((\mathrm{SU}(2)\times \mathrm{SU}(2), \mathrm{diag})\). Ramanujan J. 43(2), 243–311 (2017)


  3. Álvarez-Fernández, C., Ariznabarreta, G., García-Ardila, J.C., Mañas, M., Marcellán, F.: Christoffel transformations for matrix orthogonal polynomials in the real line and the non-Abelian 2D Toda lattice hierarchy. Int. Math. Res. Not. 2017(5), 1285–1341 (2017)


  4. Álvarez-Nodarse, R., Durán, A.J., de los Ríos, A.M.: Orthogonal matrix polynomials satisfying second order difference equations. J. Approx. Theory 169, 40–55 (2013)

  5. Ariznabarreta, G., García-Ardila, J.C., Mañas, M., Marcellán, F.: Non-Abelian integrable hierarchies: matrix biorthogonal polynomials and perturbations. J. Phys. A Math. Theor. 51, 20 (2018)

  6. Ariznabarreta, G., Mañas, M.: Matrix orthogonal Laurent polynomials on the unit circle and Toda type integrable systems. Adv. Math. 264, 396–463 (2014)


  7. Cagliero, L., Koornwinder, T.H.: Explicit matrix inverses for lower triangular matrices with entries involving Jacobi polynomials. J. Approx. Theory 193, 20–38 (2015)


  8. Cantero, M.J., Moral, L., Velázquez, L.: Differential properties of matrix orthogonal polynomials. J. Concr. Appl. Math. 3(3), 313–334 (2005)


  9. Cantero, M.J., Moral, L., Velázquez, L.: Matrix orthogonal polynomials whose derivatives are also orthogonal. J. Approx. Theory 146(2), 174–211 (2007)


  10. Casper, W.R., Yakimov, M.: The matrix Bochner problem. Am. J. Math. 144(4), 1009–1065 (2022)


  11. Damanik, D., Pushnitski, A., Simon, B.: The analytic theory of matrix orthogonal polynomials. Surv. Approx. Theory 4, 1–85 (2008)


  12. de la Iglesia, M.D.: Spectral methods for bivariate Markov processes with diffusion and discrete components and a variant of the Wright–Fisher model. J. Math. Anal. Appl. 393(1), 239–255 (2012)


  13. de la Iglesia, M.D., Román, P.: Some bivariate stochastic models arising from group representation theory. Stoch. Process. Appl. 128(10), 3300–3326 (2018)


  14. Deaño, A., Eijsvoogel, B., Román, P.: Ladder relations for a class of matrix valued orthogonal polynomials. Stud. Appl. Math. 146(2), 463–497 (2021)


  15. Duits, M., Kuijlaars, A.B.J.: The two periodic aztec diamond and matrix valued orthogonal polynomials. J. Eur. Math. Soc. 23(4), 1075–1131 (2021)


  16. Durán, A.J.: Markov’s theorem for orthogonal matrix polynomials. Can. J. Math. 48(6), 1180–1195 (1996)


  17. Durán, A.J.: The algebra of difference operators associated to a family of orthogonal polynomials. J. Approx. Theory 164(5), 586–610 (2012)


  18. Durán, A.J., de la Iglesia, M.D.: Some examples of orthogonal matrix polynomials satisfying odd order differential equations. J. Approx. Theory 150(2), 153–174 (2008)


  19. Durán, A.J., Grünbaum, F.A.: Orthogonal matrix polynomials satisfying second-order differential equations. Int. Math. Res. Not. 2004(10), 461–484 (2004)

  20. Durán, A.J., Ismail, M.E.H.: Differential coefficients of orthogonal matrix polynomials. J. Comput. Appl. Math. 190(1–2), 424–436 (2006)

  21. Durán, A.J., Sánchez-Canales, V.: Rodrigues’ formulas for orthogonal matrix polynomials satisfying second-order difference equations. Integral Transforms Spec. Funct. 25(11), 849–863 (2014)


  22. Eijsvoogel, B.: \(N\times N\) Matrix Time–Band–Limiting Examples. Preprint, arXiv:2201.10855 [math.CA]

  23. Ebisu, A., Iwasaki, K.: Three-term relations for \(_3F_2(1)\). J. Math. Anal. Appl. 463(2), 593–610 (2018)


  24. Faraut, J.: Analysis on Lie Groups. An Introduction. Transl. from the French, vol. 110. Cambridge University Press, Cambridge (2008)

  25. Geronimo, J.S.: Scattering theory and matrix orthogonal polynomials on the real line. Circuits Syst. Signal Process. 1(3–4), 471–495 (1982)


  26. Groenevelt, W., Ismail, M.E.H., Koelink, E.: Spectral decomposition and matrix-valued orthogonal polynomials. Adv. Math. 244, 91–105 (2013)

  27. Grünbaum, F.A., Pacharoni, I., Tirao, J.: Matrix valued spherical functions associated to the complex projective plane. J. Funct. Anal. 188(2), 350–441 (2002)


  28. Grünbaum, F.A., de la Iglesia, M.D.: Matrix valued orthogonal polynomials arising from group representation theory and a family of quasi-birth-and-death processes. SIAM J. Matrix Anal. Appl. 30(2), 741–761 (2008)

  29. Hall, B.C.: Lie Groups, Lie Algebras, and Representations. An Elementary Introduction, vol. 222. Springer, New York (2003)

  30. Heckman, G., van Pruijssen, M.: Matrix valued orthogonal polynomials for Gelfand pairs of rank one. Tohoku Math. J. (2) 68(3), 407–437 (2016)

  31. Ismail, M.E.H., Koelink, E., Román, P.: Matrix valued Hermite polynomials, Burchnall formulas and non-Abelian Toda lattice. Adv. Appl. Math. 110, 235–269 (2019)

  32. Koekoek, R., Lesky, P.A., Swarttouw, R.F.: Hypergeometric Orthogonal Polynomials and Their \(q\)-Analogues. With a foreword by Tom H. Koornwinder. Springer, Berlin (2010)

  33. Koelink, E., Liu, J.: \(BC_2\) type multivariable matrix functions and matrix spherical functions. Preprint, arXiv:2110.02287 (2021)

  34. Koelink, E., de los Ríos, A.M., Román, P.: Matrix-valued Gegenbauer-type polynomials. Constr. Approx. 46(3), 459–487 (2017)

  35. Koelink, E., Román, P.: Matrix valued Laguerre polynomials. In: Positivity and Noncommutative Analysis. Festschrift in Honour of Ben de Pagter on the Occasion of his 65th Birthday. Based on the Workshop “Positivity and Noncommutative Analysis”, Delft, The Netherlands, September 26–28, 2018, pp. 295–320. Birkhäuser, Cham (2019)

  36. Koelink, E., van Pruijssen, M., Román, P.: Matrix-valued orthogonal polynomials related to (\(\text{ SU }(2) \times \text{ SU }(2)\), diag). Int. Math. Res. Not. 2012(24), 5673–5730 (2012)


  37. Koelink, E., van Pruijssen, M., Román, P.: Matrix-valued orthogonal polynomials related to \((\text{ SU }(2) \times \text{ SU }(2)\), diag). II. Publ. Res. Inst. Math. Sci. 49(2), 271–312 (2013)


  38. Koelink, E., van Pruijssen, M., Román, P.: Matrix elements of irreducible representations of \({\rm SU}(n + 1) \times {\rm SU}(n + 1)\) and multivariable matrix-valued orthogonal polynomials. J. Funct. Anal. 278(7), 48 (2020)

  39. Koelink, E., Román, P.: Orthogonal vs. non-orthogonal reducibility of matrix-valued measures. SIGMA Symm. Integr. Geom. Methods Appl. 12(8), 9 (2016)

  40. Kreĭn, M.G.: Hermitian positive kernels on homogeneous spaces. I. Ukrain. Mat. Žurnal 1(4), 64–98 (1949)


  41. Leonard, D.A.: Orthogonal polynomials, duality and association schemes. SIAM J. Math. Anal. 13, 656–663 (1982)


  42. Miller, W., Jr.: Lie Theory and Special Functions. Mathematics in Science and Engineering, vol. 43. Academic Press, New York (1968)

  43. Olver, F.W.J., Olde Daalhuis, A.B., Lozier, D.W., Schneider, B.I., Boisvert, R.F., Clark, C.W., Miller, B.R., Saunders, B.V., Cohl, H.S., McClain, M.A. (Eds.): NIST Digital Library of Mathematical Functions. http://dlmf.nist.gov/

  44. Swarttouw, R.F., Koekoek, R.: The Askey-scheme of hypergeometric orthogonal polynomials and its \(q\)-analogue. http://aw.twi.tudelft.nl/~oekoek/askey.html, Report 98-17, Technical University Delft (1998)

  45. Tirao, J., Zurrián, I.: Reducibility of matrix weights. Ramanujan J. 45(2), 349–374 (2018)


  46. van Pruijssen, M., Román, P.: Matrix valued classical pairs related to compact Gelfand pairs of rank one. SIGMA Symm. Integr. Geom. Methods Appl. 10(113), 28 (2014)



Acknowledgements

The authors are immensely grateful to Erik Koelink for countless useful comments. The authors would also like to thank Riley Casper for fruitful discussions at an earlier stage of the project. The authors also thank an anonymous referee for a careful revision and constructive remarks and suggestions that helped to improve the manuscript. The support of an Erasmus+ travel grant is gratefully acknowledged. The work of Lucía Morey and Pablo Román was supported by FONCyT Grant PICT 2014-3452 and by SeCyT-UNC.


Correspondence to Pablo Román.


Communicated by Erik Koelink.


Appendix A: Miscellaneous Proofs


1.1 Block Vandermonde Determinant

This subsection will mostly involve upper triangular matrices and we will focus our attention on their diagonal parts. So when two upper triangular matrices \(\mathcal {M}_1, \mathcal {M}_2\) have the same diagonal, we will write

$$\begin{aligned} \mathcal {M}_1 = \mathcal {M}_2 + \text {s.u.t.} \end{aligned}$$

where \(\text {s.u.t.}\) stands for strictly upper triangular. All the matrices \(\rho _i^{(\lambda )}(n)\) in Sect. 8 satisfy the conditions in Corollary A.2, but we will collect a preliminary result for a slightly simpler case first.

Lemma A.1

Consider an n-dependent upper triangular matrix of the form

$$\begin{aligned} \mathcal {M}(n) = n \mathcal {T} + \mathcal {T}_0 +\mathrm {s.u.t.} \end{aligned}$$
(A.1)

where \(\mathcal {T}\) and \(\mathcal {T}_0\) are constant diagonal matrices, and the \(\text {s.u.t.}\) part is allowed to depend on n. The determinant of its block Vandermonde matrix does not depend on the \(\text {s.u.t.}\) part or \(\mathcal {T}_0\), and is given by

$$\begin{aligned} \det \begin{pmatrix} I &{} \mathcal {M}(n_0) &{} \dots &{} \mathcal {M}(n_0)^x \\ I &{} \mathcal {M}(n_1) &{} \dots &{} \mathcal {M}(n_1)^x \\ \vdots &{} \vdots &{} \, &{} \vdots \\ I &{} \mathcal {M}(n_x) &{} \dots &{} \mathcal {M}(n_x)^x \end{pmatrix} = \det (\mathcal {T})^{\frac{1}{2} x(x+1)} \left( \prod _{0\le s < t \le x} (n_t-n_s) \right) ^N, \quad x \in \mathbb {N}_{>0}, \end{aligned}$$

where \((n_j)_{j=0}^x\) is a list of complex values for which \(\mathcal {M}(n_j)\) is defined.

Proof

This proof will be by induction on x. On occasion we will denote the block Vandermonde determinant as \(\det \left( \mathcal {M}(n_j)^k \right) _{j,k=0}^x\), even though this is a slight abuse of notation in case \(\mathcal {M}(n_j)\) is singular. We apply a block analogue of an elementary row operation to get

$$\begin{aligned}&\det \begin{pmatrix} I &{} \mathcal {M}(n_0) &{} \dots &{} \mathcal {M}(n_0)^x \\ I &{} \mathcal {M}(n_1) &{} \dots &{} \mathcal {M}(n_1)^x \\ \vdots &{} \vdots &{} \, &{} \vdots \\ I &{} \mathcal {M}(n_x) &{} \dots &{} \mathcal {M}(n_x)^x \end{pmatrix} \\&= \det \begin{pmatrix} I &{} \mathcal {M}(n_0) &{} \dots &{} \mathcal {M}(n_0)^x \\ I &{} \mathcal {M}(n_1) &{} \dots &{} \mathcal {M}(n_1)^x \\ \vdots &{} \vdots &{} \, &{} \vdots \\ I &{} \mathcal {M}(n_x) &{} \dots &{} \mathcal {M}(n_x)^x \end{pmatrix} \det \begin{pmatrix} I &{} -\mathcal {M}(n_0) &{} 0 &{} \dots &{} 0 \\ 0 &{} I &{} -\mathcal {M}(n_0) &{} \dots &{} 0 \\ \vdots &{} \ddots &{} \ddots &{} \ddots &{} \vdots \\ 0 &{} \dots &{} 0 &{} I &{} -\mathcal {M}(n_0) \\ 0 &{} \dots &{} \dots &{} 0 &{} I \end{pmatrix} \\&= \det \begin{pmatrix} I &{} 0 &{}\dots &{} 0 \\ I &{} (n_1 - n_0)\mathcal {T} + \text {s.u.t.} &{}\dots &{} \mathcal {M}(n_1)^{x-1}(n_1 - n_0)\mathcal {T} + \text {s.u.t.} \\ \vdots &{} \vdots &{} \, &{} \vdots \\ I &{} (n_x - n_0)\mathcal {T} + \text {s.u.t.} &{} \dots &{} \mathcal {M}(n_x)^{x-1}(n_x - n_0)\mathcal {T} + \text {s.u.t.} \end{pmatrix}. \end{aligned}$$

In the last step we have used that \(\mathcal {M}(n_j) - \mathcal {M}(n_0) = (n_j-n_0)\mathcal {T} + \text {s.u.t.}\) To proceed we notice that \( \mathcal {M}(n_j)^k - \mathcal {M}(n_j)^{k-1}\mathcal {M}(n_0) = (n_j-n_0)\mathcal {T}\left( n_j\mathcal {T}+\mathcal {T}_0\right) ^{k-1} +\text {s.u.t.}\) and reduce the size of the determinant using the Schur complement

$$\begin{aligned}&= \det \begin{pmatrix} (n_1 - n_0)\mathcal {T} + \text {s.u.t.} &{}\dots &{} (n_1 - n_0)\mathcal {T}(n_1 \mathcal {T}+\mathcal {T}_0)^{x-1} + \text {s.u.t.} \\ \vdots &{} \, &{} \vdots \\ (n_x - n_0)\mathcal {T} + \text {s.u.t.} &{} \dots &{} (n_x - n_0)\mathcal {T}(n_x \mathcal {T}+\mathcal {T}_0)^{x-1} + \text {s.u.t.} \end{pmatrix} \\&=\det \left( \text {diag}((n_j-n_0)\mathcal {T} + \text {s.u.t.})_{j=1}^x \right) \det \begin{pmatrix} I &{} n_1\mathcal {T} + \text {s.u.t.} &{}\dots &{} (n_1\mathcal {T}+\mathcal {T}_0)^{x-1} + \text {s.u.t.} \\ \vdots &{} \vdots &{} \, &{}\vdots \\ I &{} n_x\mathcal {T} + \text {s.u.t.} &{}\dots &{} (n_x\mathcal {T}+\mathcal {T}_0)^{x-1} + \text {s.u.t.} \end{pmatrix} \end{aligned}$$

The block diagonal matrix in the second line is meant to have exactly the entries of the first column of the matrix in the previous line; in particular, it has the same \(\text {s.u.t.}\) parts. We need this so that the first column of the second determinant consists exactly of identity matrices without any \(\text {s.u.t.}\) part.

So we have a reduction of the block Vandermonde to one of a smaller size

$$\begin{aligned} \det \left( \mathcal {M}(n_j)^k \right) _{j,k=0}^x = \det (\mathcal {T})^x \left( \prod _{j=1}^x (n_j-n_0) \right) ^N \det \left( \mathcal {N}(n_{j+1})^k \right) _{j,k=0}^{x-1},\qquad \end{aligned}$$
(A.2)

with \( \mathcal {N}(n) = n\mathcal {T}+\mathcal {T}_0 +\text {s.u.t.}\), so that \(\mathcal {N}(n)\) has the same diagonal entries as \(\mathcal {M}(n)\). Since the factor in front of the block Vandermonde determinant with \(\mathcal {N}(n)\) in (A.2) does not depend on the specifics of the \(\text {s.u.t.}\) part, we can continue reducing the size of the block Vandermonde until we get the desired result. \(\square \)

Corollary A.2

The determinant formula in Lemma A.1 still holds if we conjugate \(\mathcal {M}(n)\) in (A.1) by an invertible matrix \(\mathcal {Q}\) that does not depend on n.

Proof

Let \(\mathcal {R}(n)=\mathcal {Q}\mathcal {M}(n)\mathcal {Q}^{-1}\). Since \(\mathcal {R}(n_j)^k = \mathcal {Q}\mathcal {M}(n_j)^k\mathcal {Q}^{-1}\), the block Vandermonde matrix of \(\mathcal {R}\) is obtained from that of \(\mathcal {M}\) by conjugation with the block diagonal matrix \(\text {diag}(\mathcal {Q})_{j=0}^x\), so that

$$\begin{aligned} \det \begin{pmatrix} I &{} \mathcal {R}(n_0) &{} \dots &{} \mathcal {R}(n_0)^x \\ I &{} \mathcal {R}(n_1) &{} \dots &{} \mathcal {R}(n_1)^x \\ \vdots &{} \vdots &{} \, &{} \vdots \\ I &{} \mathcal {R}(n_x) &{} \dots &{} \mathcal {R}(n_x)^x \end{pmatrix} = \frac{\det \left( \text {diag}(\mathcal {Q})_{j=0}^x \right) }{\det \left( \text {diag}(\mathcal {Q})_{j=0}^x \right) } \det \begin{pmatrix} I &{} \mathcal {M}(n_0) &{} \dots &{} \mathcal {M}(n_0)^x \\ I &{} \mathcal {M}(n_1) &{} \dots &{} \mathcal {M}(n_1)^x \\ \vdots &{} \vdots &{} \, &{} \vdots \\ I &{} \mathcal {M}(n_x) &{} \dots &{} \mathcal {M}(n_x)^x \end{pmatrix} \end{aligned}$$

\(\square \)
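As a sanity check, the determinant identity of Lemma A.1 and its conjugation invariance from Corollary A.2 can be tested numerically. The sketch below is ours and purely illustrative: the matrix size, sample points and random strictly upper triangular parts are arbitrary choices.

```python
# Numerical sanity check of Lemma A.1 / Corollary A.2 (illustrative only;
# sizes, sample points and the random s.u.t. parts are arbitrary choices).
import numpy as np

rng = np.random.default_rng(0)
N, x = 3, 2                                   # matrix size N; x+1 block rows
T = np.diag(rng.uniform(1.0, 2.0, N))         # constant diagonal, invertible
T0 = np.diag(rng.uniform(-1.0, 1.0, N))       # constant diagonal

def M(n):
    """M(n) = n*T + T0 + an n-dependent strictly upper triangular part."""
    return n * T + T0 + (1 + n) * np.triu(rng.standard_normal((N, N)), k=1)

def block_vandermonde(mats):
    """Stack the block rows (I, M_j, ..., M_j^x) into one big matrix."""
    return np.vstack([np.hstack([np.linalg.matrix_power(Mj, k)
                                 for k in range(len(mats))]) for Mj in mats])

ns = [0.0, 1.3, 2.7]                          # x+1 distinct sample points
Ms = [M(n) for n in ns]
rhs = (np.linalg.det(T) ** (x * (x + 1) // 2)
       * np.prod([ns[t] - ns[s]
                  for s in range(x + 1) for t in range(s + 1, x + 1)]) ** N)
assert np.isclose(np.linalg.det(block_vandermonde(Ms)), rhs)

# Corollary A.2: conjugating every M(n_j) by a fixed invertible Q
# leaves the block Vandermonde determinant unchanged.
Q = np.eye(N) + 0.3 * rng.standard_normal((N, N))
Qi = np.linalg.inv(Q)
assert np.isclose(np.linalg.det(block_vandermonde([Q @ Mj @ Qi for Mj in Ms])), rhs)
```

Note that the strictly upper triangular part varies with n, and the determinant still depends only on \(\mathcal {T}\) and the sample points, as the lemma predicts.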

1.2 Proof of Proposition 8.2

In this section we prove the first part of Proposition 8.2 which states that

$$\begin{aligned} R_n^{(\lambda )}(0) = -a(I+\mathcal {A}_{n+\lambda })R_{n-1}^{(\lambda +1)}(0), \end{aligned}$$
(A.3)

using the matrix introduced in (6.23) to express \( \mathcal {A}_{n+\lambda } = (N+\lambda +n-J)^{-1} J A^*. \)

Proof

Recall that the entries \(R_n^{(\lambda )}(0)_{jk}=\xi _{j,k,n}^{(\lambda )}\) are given explicitly in Theorem 7.9 and that \(A_{j+1,j} =\frac{\sqrt{N-j}}{\sqrt{a}}\). So we start off by writing (A.3) in terms of its entries

$$\begin{aligned} \xi _{j,k,n}^{(\lambda )} = -a \xi _{j,k,n-1}^{(\lambda +1)} - \frac{j \sqrt{a}\sqrt{N-j}}{(N+\lambda +n-j)} \xi _{j+1,k,n-1}^{(\lambda +1)}. \end{aligned}$$

Note that since the \(\xi _{j,k,n}^{(\lambda )}\) are given by two different expressions (for \(n+j\ge N\) and for \(n+j<N\)), we should prove the desired entrywise recursion for both cases. But it is not necessary to prove a mixed case because the two cases actually coincide for \(n+j=N\), so for any values of the parameters the three terms can always be considered as belonging to the same case.

Before specifying to either case, we compute the following expressions which will come in handy later on in the proof

$$\begin{aligned} \frac{\mathfrak {X}(j,k,n-1,\lambda +1)}{\mathfrak {X}(j,k,n,\lambda )}&= -\frac{N+\lambda +1-j}{a(\lambda +1)}, \\ \frac{\mathfrak {X}(j+1,k,n-1,\lambda +1)}{\mathfrak {X}(j,k,n,\lambda )}&= \frac{\sqrt{N-j}}{\sqrt{a}}\frac{N+\lambda +n-j}{j(\lambda +1)}. \end{aligned}$$

To derive the \(n+j>N\) case we will need a fairly standard hypergeometric identity that is easy to check in general (for parameters with which the series either converge or truncate)

$$\begin{aligned}{} & {} \mathfrak {d} \left( \,_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{\mathfrak {a},\mathfrak {b},\mathfrak {c}}{\mathfrak {d}, \mathfrak {e}};1 \right) - \,_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{\mathfrak {a},\mathfrak {b},\mathfrak {c}}{\mathfrak {d}+1, \mathfrak {e}};1 \right) \right) \\{} & {} \quad = \mathfrak {b} \left( \,_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{\mathfrak {a},\mathfrak {b}+1,\mathfrak {c}}{\mathfrak {d}+1, \mathfrak {e}};1 \right) - \,_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{\mathfrak {a},\mathfrak {b},\mathfrak {c}}{\mathfrak {d}+1, \mathfrak {e}};1 \right) \right) , \end{aligned}$$

This identity follows from both sides being equal to

$$\begin{aligned} \sum _{s=1}^{\infty } \frac{(\mathfrak {a})_s(\mathfrak {b})_s (\mathfrak {c})_s s}{s! (\mathfrak {d}+1)_s (\mathfrak {e})_s}. \end{aligned}$$

When we fill in the values \(\mathfrak {a}=1-k\), \(\mathfrak {b}=j-N\), \(\mathfrak {c}=n+\lambda +1\), \(\mathfrak {d}=\lambda +1\) and \(\mathfrak {e} = 1-N\) and collect some similar terms we get

$$\begin{aligned}&\,_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{1-k,j-N,n+\lambda +1}{\lambda +1, 1-N};1 \right) \\&\quad = \frac{N+\lambda +1-j}{\lambda +1} \,_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{1-k,j-N,n+\lambda +1}{\lambda +2, 1-N};1 \right) \\&\qquad - \frac{N-j}{\lambda +1} \,_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{1-k,j-N+1,n+\lambda +1}{\lambda +2, 1-N};1 \right) . \end{aligned}$$

After multiplying this equation by \(\mathfrak {X}(j,k,n,\lambda )(1-N)_{k-1}\) we get the desired result for \(n+j>N\) by using the ratios of different \(\mathfrak {X}\) expressed earlier in this proof.

The \(n+j \le N\) case is slightly different. Note that now instead of the factor of \((1-N)_{k-1}\) we have \((1-n-j)_{k-1}\) so it is no longer a common factor in the three terms of our desired result. The necessary hypergeometric identity is somewhat less standard,

$$\begin{aligned}{} & {} \,_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{\mathfrak {a},\mathfrak {b},\mathfrak {c}}{\mathfrak {d}, \mathfrak {e}};1 \right) \\{} & {} \quad = \frac{\mathfrak {c}}{\mathfrak {d}}\frac{\mathfrak {e}-\mathfrak {a}}{\mathfrak {e}} \,_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{\mathfrak {a},\mathfrak {b}+1,\mathfrak {c}+1}{\mathfrak {d}+1, \mathfrak {e}+1};1 \right) - \frac{\mathfrak {c}-\mathfrak {d}}{\mathfrak {d}} \,_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{\mathfrak {a},\mathfrak {b}+1,\mathfrak {c}}{\mathfrak {d}+1, \mathfrak {e}};1 \right) , \end{aligned}$$

but it can be shown to hold using the methods described in [23, Recipe 5.4] for the cases where the series either converge or truncate. We then put in the values \(\mathfrak {a}=1-k\), \(\mathfrak {b}=-n\), \(\mathfrak {c}=N-j+\lambda +1\), \(\mathfrak {d}=\lambda +1\) and \(\mathfrak {e}=1-n-j\), to get

$$\begin{aligned}&\,_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{1-k,-n,N-j+\lambda +1}{\lambda +1, 1-n-j};1 \right) \\&\quad = \frac{N+\lambda +1-j}{\lambda +1} \frac{n+j-k}{n+j-1} \,_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{1-k,-n+1,N-j+\lambda +2}{\lambda +2, 2-n-j};1 \right) \\&\qquad - \frac{N-j}{\lambda +1} \,_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{1-k,-n+1,N-j+\lambda +1}{\lambda +2, 1-n-j};1 \right) . \end{aligned}$$

After multiplying this equation by \(\mathfrak {X}(j,k,n,\lambda )(1-n-j)_{k-1}\) we get the desired result for \(n+j \le N\) by using the ratios of different \(\mathfrak {X}\) expressed earlier in this proof. \(\square \)
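Both contiguous relations used in this proof can be verified numerically when the series terminate. The helper functions and the parameter values below are our own illustrative choices (taking \(\mathfrak {a}\) a negative integer truncates every series):

```python
# Numerical check of the two 3F2 contiguous relations used above, for
# terminating series (a is a negative integer); parameters are sample values.
from math import prod

def poch(z, s):
    """Pochhammer symbol (z)_s = z (z+1) ... (z+s-1)."""
    return prod(z + i for i in range(s))

def f32(a, b, c, d, e):
    """Terminating 3F2(a, b, c; d, e; 1) for a a nonpositive integer."""
    return sum(poch(a, s) * poch(b, s) * poch(c, s)
               / (poch(1, s) * poch(d, s) * poch(e, s))
               for s in range(-int(a) + 1))

a, b, c, d, e = -3, 1.7, 2.3, 0.9, 1.4

# First relation: d (F(d) - F(d+1)) = b (F(b+1; d+1) - F(b; d+1)).
lhs1 = d * (f32(a, b, c, d, e) - f32(a, b, c, d + 1, e))
rhs1 = b * (f32(a, b + 1, c, d + 1, e) - f32(a, b, c, d + 1, e))
assert abs(lhs1 - rhs1) < 1e-12

# Second, less standard relation.
lhs2 = f32(a, b, c, d, e)
rhs2 = (c / d) * ((e - a) / e) * f32(a, b + 1, c + 1, d + 1, e + 1) \
       - ((c - d) / d) * f32(a, b + 1, c, d + 1, e)
assert abs(lhs2 - rhs2) < 1e-12
```

In the proof itself \(\mathfrak {a}=1-k\) (first case) or \(\mathfrak {a}=1-k\) with \(\mathfrak {b}=-n\) (second case), so the relevant series always terminate.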

1.3 Proof of Lemma 8.13

We want to find \(\mathscr {W}_R^{(\lambda )}(0)\), from which the zero moment will follow by (8.12). For this we need \((\mathcal {D}_n^{(\lambda )})_{jj}\) and \( R_n^{(\lambda )}(0)_{j,k} = \xi _{j,k,n}^{(\lambda )}\). The entrywise expression is

$$\begin{aligned} \left( \mathscr {W}_R^{(\lambda )}(0)\right) _{j,k} = \sum _{n=0}^\infty \sum _{r=1}^N \frac{ \xi _{r,j,n}^{(\lambda )} \xi _{r,k,n}^{(\lambda )} }{\left( \mathcal {D}_n^{(\lambda )}\right) _{rr}}. \end{aligned}$$

Since this expression is clearly symmetric, we will take \(j\ge k\) from now on. It will prove useful to have the standard weight of the scalar dual Hahn polynomials at hand, cf. [44],

$$\begin{aligned} w_{dH}(x;\gamma ,\delta ,\mathcal {N}) = \frac{(2x+\gamma +\delta +1)(\gamma +x)!(x+\gamma +\delta )!\delta !(\mathcal {N}!)^2}{x!\gamma !(\delta +x)!(\mathcal {N}-x)!(x+\gamma +\delta +\mathcal {N}+1)!}. \end{aligned}$$
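The vanishing of the off-diagonal entries below rests on the orthogonality of the dual Hahn polynomials with respect to this weight. The following small check is our own sketch, with sample parameters, and with the dual Hahn polynomials taken in their standard terminating \({}_3F_2\) form, cf. [44]:

```python
# Orthogonality check for the dual Hahn weight quoted above: for m != n the
# weighted sum over the finite support vanishes. Parameters g (gamma),
# d (delta) and NN (the degree parameter) are sample values of our choosing.
from math import factorial, prod

def poch(z, s):
    """Pochhammer symbol (z)_s."""
    return prod(z + i for i in range(s))

def dual_hahn(n, x, g, d, NN):
    """R_n(lambda(x)) = 3F2(-n, -x, x+g+d+1; g+1, -NN; 1), terminating at s=n."""
    return sum(poch(-n, s) * poch(-x, s) * poch(x + g + d + 1, s)
               / (poch(1, s) * poch(g + 1, s) * poch(-NN, s))
               for s in range(n + 1))

def w_dH(x, g, d, NN):
    """The dual Hahn weight as printed above (nonnegative integer parameters)."""
    return ((2 * x + g + d + 1) * factorial(g + x) * factorial(x + g + d)
            * factorial(d) * factorial(NN) ** 2
            / (factorial(x) * factorial(g) * factorial(d + x)
               * factorial(NN - x) * factorial(x + g + d + NN + 1)))

g, d, NN = 2, 1, 4
inner = lambda m, n: sum(w_dH(x, g, d, NN) * dual_hahn(m, x, g, d, NN)
                         * dual_hahn(n, x, g, d, NN) for x in range(NN + 1))
assert abs(inner(1, 2)) < 1e-9 * abs(inner(1, 1))   # off-diagonal vanishes
assert abs(inner(0, 3)) < 1e-9 * abs(inner(0, 0))
```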

Let us introduce the following shorthand to distinguish between two different expressions for \(\xi \) that hold for different parameter values,

$$\begin{aligned} \xi _{r,k,n}^{(\lambda )} = \mathfrak {X}(r,k,n,\lambda )\times {\left\{ \begin{array}{ll} \alpha _{r,k,n}^{(\lambda )} &{} n+r \ge N, \\ \beta _{r,k,n}^{(\lambda )} &{} k \le n+r< N, \\ 0 &{} n+r < k. \end{array}\right. } \end{aligned}$$

The explicit expressions for \(\alpha \) and \(\beta \) can be found in Theorem 7.9. We then write out the double sum, which we note will not have any cross terms

$$\begin{aligned} \begin{aligned} \left( \mathscr {W}_R^{(\lambda )}(0)\right) _{j,k}&= \sum _{n=0}^\infty \sum _{r=r_0}^N \frac{\mathfrak {X}(r,j,n,\lambda ) \mathfrak {X}(r,k,n,\lambda )}{\left( \mathcal {D}_n^{(\lambda )}\right) _{rr}} \alpha _{r,j,n}^{(\lambda )} \alpha _{r,k,n}^{(\lambda )} \\&\quad + \sum _{n=0}^{N-1} \sum _{r=r_1}^{N-n-1} \frac{\mathfrak {X}(r,j,n,\lambda ) \mathfrak {X}(r,k,n,\lambda )}{\left( \mathcal {D}_n^{(\lambda )}\right) _{rr}} \beta _{r,j,n}^{(\lambda )} \beta _{r,k,n}^{(\lambda )}. \end{aligned} \end{aligned}$$

We denote \(r_0 = \max (1,N-n)\) and \(r_1 = \max (1,j-n)\) and whenever a sum is over the empty set, we take it to be zero. It will come in handy to introduce another summation variable \(s=n+r\) and rearrange the summations

$$\begin{aligned} \begin{aligned} \left( \mathscr {W}_R^{(\lambda )}(0)\right) _{j,k}&= \sum _{s=N}^\infty \sum _{r=1}^N \frac{\mathfrak {X}(r,j,s-r,\lambda ) \mathfrak {X}(r,k,s-r,\lambda )}{\left( \mathcal {D}_{s-r}^{(\lambda )}\right) _{rr}} \alpha _{r,j,s-r}^{(\lambda )} \alpha _{r,k,s-r}^{(\lambda )} \\&\qquad + \sum _{s=j}^{N-1} \sum _{n=0}^{s-1} \frac{\mathfrak {X}(s-n,j,n,\lambda ) \mathfrak {X}(s-n,k,n,\lambda )}{\left( \mathcal {D}_n^{(\lambda )}\right) _{s-n,s-n}} \beta _{s-n,j,n}^{(\lambda )} \beta _{s-n,k,n}^{(\lambda )}. \end{aligned} \end{aligned}$$
(A.4)

The common part of the summands is (note that the rightmost factor is not a factorial)

$$\begin{aligned}{} & {} \frac{\mathfrak {X}(r,j,n,\lambda ) \mathfrak {X}(r,k,n,\lambda )}{\left( \mathcal {D}_n^{(\lambda )}\right) _{rr}}\\{} & {} \quad = \frac{2^\lambda \sqrt{(N-j)!(N-k)!}(\lambda +n)!}{e^{a} a^{\lambda +\frac{1}{2}(j+k)}n!(\lambda !)^2} \frac{a^{r+n} (N+\lambda -r)!(N+1-r+n+\lambda )}{(r-1)!(N-r)!(N+n+\lambda )!}, \end{aligned}$$

where we have used the expression of the inverse of the diagonal part of the squared norm in terms of factorials

$$\begin{aligned} \left( \mathcal {D}_n^{(\lambda )}\right) _{rr}^{-1} = e^{-a} \left( \frac{2}{a}\right) ^\lambda \frac{(r-1)!}{n! a^n} \frac{(N-r+n+\lambda )!(N+1-r+n+\lambda )!}{(\lambda +n)!(N-r+\lambda )!(N+n+\lambda )!}. \end{aligned}$$

The distinct parts of the summands are

$$\begin{aligned} \alpha _{r,j,s-r}^{(\lambda )} \alpha _{r,k,s-r}^{(\lambda )}&= (-1)^{j+k}\frac{(N-1)!^2}{(N-k)!(N-j)!} \\&\qquad \times \mathcal {R}_{j-1}\bigl (\ell (N-r); \, \lambda , s-N, N-1\bigr ) \mathcal {R}_{k-1}\bigl (\ell (N-r); \, \lambda , s-N, N-1\bigr ) \\ \beta _{s-n,j,n}^{(\lambda )} \beta _{s-n,k,n}^{(\lambda )}&= (-1)^{j+k}\frac{(s-1)!^2}{(s-k)!(s-j)!} \\&\qquad \times \mathcal {R}_{j-1}\bigl (\ell (n); \, \lambda , N-s, s-1\bigr ) \mathcal {R}_{k-1}\bigl (\ell (n); \, \lambda , N-s, s-1\bigr ). \end{aligned}$$

Fortunately we can find a corresponding dual Hahn weight in the common parts as follows

$$\begin{aligned}&\frac{\mathfrak {X}(r,j,s-r,\lambda ) \mathfrak {X}(r,k,s-r,\lambda )}{\left( \mathcal {D}_{s-r}^{(\lambda )}\right) _{rr}}\\&\quad = \frac{2^\lambda \sqrt{(N-j)!(N-k)!}}{e^a a^{\lambda -s+\frac{1}{2}(j+k)}} \frac{w_{dH}(N-r;\lambda ,s-N,N-1)}{\lambda !(s-N)!((N-1)!)^2}, \end{aligned}$$

and similarly in

$$\begin{aligned}&\frac{\mathfrak {X}(s-n,j,n,\lambda ) \mathfrak {X}(s-n,k,n,\lambda )}{\left( \mathcal {D}_n^{(\lambda )}\right) _{s-n,s-n}}\\&\quad = \frac{ 2^\lambda \sqrt{(N-j)!(N-k)!} }{e^a a^{\lambda -s+\frac{1}{2}(j+k)} } \frac{ w_{dH}(n;\lambda ,N-s,s-1) }{\lambda !(N-s)!((s-1)!)^2} . \end{aligned}$$

The first sum in (A.4) is then

$$\begin{aligned}&\sum _{s=N}^\infty \sum _{r=1}^N \frac{\mathfrak {X}(r,j,s-r,\lambda ) \mathfrak {X}(r,k,s-r,\lambda )}{\left( \mathcal {D}_{s-r}^{(\lambda )}\right) _{rr}} \alpha _{r,j,s-r}^{(\lambda )} \alpha _{r,k,s-r}^{(\lambda )} \\&\qquad = \frac{e^{-a}2^\lambda (-1)^{j+k} a^{-\lambda -\frac{1}{2}(j+k)}}{\lambda !\sqrt{(N-j)!(N-k)!}} \sum _{s=N}^\infty \frac{a^s}{(s-N)!} \sum _{r=1}^N w_{dH}(N-r;\lambda ,s-N,N-1) \\&\qquad \qquad \times \mathcal {R}_{j-1}\bigl (\ell (N-r); \, \lambda , s-N, N-1\bigr ) \mathcal {R}_{k-1}\bigl (\ell (N-r); \, \lambda , s-N, N-1\bigr ), \end{aligned}$$

which vanishes when \(j\ne k\) due to the orthogonality of the dual Hahn polynomials. When we do have \(j=k\) we get

$$\begin{aligned}&\frac{e^{-a}2^\lambda a^{-\lambda -j}}{\lambda !(N-j)!} \sum _{s=N}^\infty \frac{a^s}{(s-N)!} \sum _{r=1}^N w_{dH}(N-r;\lambda ,s-N,N-1) \mathcal {R}_{j-1}\bigl (\ell (N-r); \, \lambda , s-N, N-1\bigr )^2 \\&\quad = e^{-a}\left( \frac{2}{a}\right) ^\lambda \frac{(j-1)!}{(\lambda +j-1)!} \sum _{s=N}^\infty \frac{a^{s-j}}{(s-j)!}. \end{aligned}$$

The second sum in (A.4) is then

$$\begin{aligned}&\sum _{s=j}^{N-1} \sum _{n=0}^{s-1} \frac{\mathfrak {X}(s-n,j,n,\lambda ) \mathfrak {X}(s-n,k,n,\lambda )}{\left( \mathcal {D}_n^{(\lambda )}\right) _{s-n,s-n}} \beta _{s-n,j,n}^{(\lambda )} \beta _{s-n,k,n}^{(\lambda )} \\&\qquad = (-1)^{j+k} \sum _{s=j}^{N-1} \frac{ e^{-a} a^{s-\frac{1}{2}(j+k)} }{(s-k)!(s-j)!}\left( \frac{2}{a}\right) ^\lambda \frac{ \sqrt{(N-j)!(N-k)!} }{ (N-s)!\lambda !} \sum _{n=0}^{s-1} w_{dH}(n;\lambda ,N-s,s-1) \\&\qquad \times \mathcal {R}_{j-1}\bigl (\ell (n); \, \lambda , N-s, s-1\bigr ) \mathcal {R}_{k-1}\bigl (\ell (n); \, \lambda , N-s, s-1\bigr ), \end{aligned}$$

which also vanishes for \(j\ne k\) by the orthogonality of the dual Hahn polynomials. When \(j=k\) we get

$$\begin{aligned}&\sum _{s={j}}^{N-1} \frac{ e^{-a} a^{s-j} }{(s-j)!^2}\left( \frac{2}{a}\right) ^\lambda \frac{ (N-j)!}{ (N-s)!\lambda !} \sum _{n=0}^{s-1} w_{dH}(n;\lambda ,N-s,s-1) \mathcal {R}_{j-1}\bigl (\ell (n); \, \lambda , N-s, s-1\bigr )^2 \\&\quad = e^{-a}\left( \frac{2}{a}\right) ^\lambda \frac{(j-1)!}{(\lambda +j-1)!} \sum _{s={j}}^{N-1} \frac{a^{s-j}}{(s-j)!}. \end{aligned}$$

The only thing that is left to do is combine the sums and perform the summation over s

$$\begin{aligned} \left( \mathscr {W}_R^{(\lambda )}(0)\right) _{jj} = \frac{e^{-a}(j-1)!}{(\lambda +j-1)!} \left( \frac{2}{a}\right) ^\lambda \sum _{s=j}^\infty \frac{a^{s-j}}{(s-j)!} = \frac{(j-1)!}{(\lambda +j-1)!} \left( \frac{2}{a}\right) ^\lambda = \left( T^{(\lambda )}\right) _{jj}^{-1}. \end{aligned}$$

The entries of \(T^{(\lambda )}\) were given in (6.16). Now all that is left is to use (8.12) to arrive at the zero moment

$$\begin{aligned} \mathscr {W}_0^{(\lambda )} = (I+A^*)^{-\lambda }(T^{(\lambda )})^{-1}(I+A)^{-\lambda } = \left( W^{(\lambda )}(0)\right) ^{-1}. \end{aligned}$$
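As a final sanity check, the two partial sums over s obtained in the proof of Lemma 8.13 (over \(j\le s\le N-1\) and over \(s\ge N\)) do recombine into the full exponential series, which is what produces the factor \(e^{a}\) cancelling the \(e^{-a}\). A quick numerical check with sample values of our own choosing:

```python
# The two partial sums over s (from s=j to N-1 and from s=N on) combine
# into the full exponential series sum_{m>=0} a^m/m! = e^a.
from math import exp, factorial

a, j, N = 1.5, 2, 5                       # sample values
partial_low  = sum(a**(s - j) / factorial(s - j) for s in range(j, N))
partial_high = sum(a**(s - j) / factorial(s - j) for s in range(N, N + 60))
assert abs(partial_low + partial_high - exp(a)) < 1e-12
```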


Cite this article

Eijsvoogel, B., Morey, L. & Román, P. Duality and Difference Operators for Matrix Valued Discrete Polynomials on the Nonnegative Integers. Constr Approx 59, 143–227 (2024). https://doi.org/10.1007/s00365-023-09637-1
