1 Introduction

Let \(\{P_{n}\}_{n \ge 0}\) be a sequence of orthogonal polynomials with respect to a moment linear functional u. One of the most interesting problems in the theory of orthogonal polynomials, in one as well as in several variables, is the modification of the moment linear functional u and the study of the new family of orthogonal polynomials \(\{Q_{n}\}_{n \ge 0}\) with respect to the modified moment linear functional v. Among the orthogonality properties, the problem of how to compute the new family of orthogonal polynomials \(\{Q_{n}\}_{n \ge 0}\) in terms of the original family \(\{P_{n}\}_{n \ge 0}\) has been the subject of several papers, both in the univariate and in the multivariate case. Modifications of moment functionals are related to quasi orthogonality, connection formulas between different families of orthogonal polynomials or adjacent families of classical orthogonal polynomials, quadrature formulas, and orthogonal polynomials as solutions of higher order differential equations, among other topics.

In the univariate case, perturbations of positive Borel measures supported on the real line by the addition of mass points have been considered in the literature in the framework of spectral problems for differential operators of higher order. More precisely, for fourth order ordinary linear differential operators the analysis of their polynomial eigenfunctions was carried out in Krall (1940), where, besides the classical families (Hermite, Laguerre, Jacobi and Bessel) appearing as trivial solutions, three new families of orthogonal polynomials arise. They are the so called Legendre type (the Lebesgue measure on the interval \([-1,1]\) plus two equal positive masses located at \(\pm 1\)), the Laguerre type (the absolutely continuous measure \(\exp (-x){\text {d}}x\) supported on \((0,+\infty )\) plus a positive mass located at \(x=0\)) and the Jacobi type (the absolutely continuous measure \((1-x)^{\alpha }{\text {d}}x, \alpha >-1\), supported on (0, 1) plus a positive mass located at \(x=0\)). Notice that in Krall (1981) an updated approach is presented. Later on, in Chihara (1985) the author focuses his attention on the study of algebraic properties of orthogonal polynomials with respect to positive measures with masses located at the end points of the convex hull of the support of the measure, and some interesting examples are shown. In Marcellán and Maroni (1992), properties of orthogonal polynomials associated with a perturbation of a quasi definite linear functional by the addition of a mass at any point of the real line have been deeply analyzed. Notice that in this case the authors deal with a general framework (without restrictions on the location of the mass points or on the linear functional in the linear space of polynomials with real coefficients). The main problem is to find necessary and sufficient conditions in order that the quasi definiteness of the linear functional be preserved. Such transformations are known in the literature as Uvarov transformations (see Uvarov 1969). They have been considered in the framework of linear spectral transformations, which are generated by Christoffel and Geronimus transformations (see Zhedanov 1997). The first ones are discrete Darboux transformations of the Jacobi matrix associated with the sequence of orthogonal polynomials with respect to the first linear functional (by using an LU factorization of the above matrix), while the second ones are related to UL factorizations of the Jacobi matrix and are also known as discrete Darboux transformations with parameter (see Bueno and Marcellán 2004 and García-Ardila et al. 2021). The modifications of moment linear functionals are usually classified into three types, namely: Uvarov or Krall-type modifications, defined by adding Dirac delta(s) at one or several fixed points; Christoffel modifications, constructed by multiplying the moment functional by a fixed low degree polynomial; and Geronimus modifications, obtained by dividing the moment functional by a polynomial. The main difference between Uvarov and Krall-type modifications is based on the fact that Krall-type modifications are made on classical moment functionals such as Jacobi and Laguerre, and the perturbed polynomials satisfy higher order linear differential equations (Krall 1981). Several papers have been devoted to the study of modifications of univariate moment functionals. Among others, and referring to the references therein, we can cite Álvarez-Nodarse et al. (2004), Bueno and Marcellán (2004), Maroni (1991).

In several variables, the study of Uvarov and Krall modifications of moment functionals was started in Delgado et al. (2010), where the modification was considered for a bilinear form obtained by adding a Dirac mass to a positive definite moment functional. Explicit formulas for the perturbed orthogonal polynomials were derived in terms of the orthogonal polynomials associated with the original moment functional. In Fernández et al. (2010), the case when the new measure is obtained by adding a set of mass points to another measure is analysed. Also in this case, orthogonal polynomials in several variables associated with the modified measure can be explicitly expressed in terms of orthogonal polynomials associated with the first measure, as can the reproducing kernels associated with these polynomials. A Uvarov modification of the bivariate classical measure on the unit disk by adding a finite set of equally spaced mass points on the border was studied in Delgado et al. (2012). In this situation, both orthogonal polynomials and reproducing kernels associated with the new measure were explicitly expressed in terms of those corresponding to the classical one, and asymptotics of the kernel functions were studied.

Besides Uvarov modifications by adding Dirac masses at a finite and discrete set of points, in the context of several variables it is possible to modify the moment functional with other moment functionals defined on lower-dimensional manifolds such as curves, surfaces, etc. A family of orthogonal polynomials with respect to such a Uvarov modification of the classical ball measure by means of a mass uniformly distributed over the sphere was introduced in Martínez and Piñar (2016), and the authors proved that, at least in the Legendre case, these multivariate orthogonal polynomials satisfy a fourth order partial differential equation, which constitutes a natural extension of Krall orthogonal polynomials to the multivariate case. Moreover, in Delgado et al. (2018), an inner product on the triangle defined by adding Krall terms over the border and the vertices of the triangle is studied. For particular values of the parameters, orthogonal polynomials associated with this inner product satisfy fourth order partial differential equations with polynomial coefficients, as an extension of the classical theory introduced in Krall (1940) and developed later in Krall (1981).

The aim of this paper is to study the three term relations for orthogonal polynomials in several variables associated with a moment linear functional obtained by a Uvarov modification of a given moment functional. In this way, we define a general framework for Uvarov perturbations of a multivariate moment functional, we establish conditions for the existence of the perturbed polynomials, and we analyse the impact of the perturbation on the coefficients of the three term relations. Moreover, we analyse two interesting examples. First, we compute explicitly the matrix coefficients of the three term relations for the monic polynomials in the case of the Jacobi measure on the simplex with mass points added at the vertices, completing the study of this case started in Delgado et al. (2010). In addition, we show the explicit expressions of the perturbed and non perturbed polynomials of low degrees for a special choice of the parameters, and we plot both families of polynomials and their zeros, where the influence of the mass points can be seen. Finally, a non positive definite bivariate moment functional based on Bessel and Laguerre polynomials perturbed by a mass point at the origin is analysed. In this case we also compute the explicit expressions for the matrix coefficients of both families of bivariate orthogonal polynomials. The question about the spectral problem associated with the orthogonal polynomials corresponding to the Uvarov perturbation of the measure supported on the simplex presented in the first example remains open. In view of the results given in Delgado et al. (2018) and Martínez and Piñar (2016), we cannot expect these polynomials to be eigenfunctions of a fourth order partial differential equation, since the Uvarov term is supported on discrete points. Moreover, whether this kind of Uvarov polynomials satisfies higher order linear partial differential equations remains an open problem.

The structure of the paper is as follows. In Sect. 2 we recall the basic background about orthogonal polynomials in several variables, including the necessary definitions and notations. In Sect. 3 Uvarov modifications of given linear functionals are studied in detail, providing results on the existence of orthogonal polynomials with respect to the new linear functional. In Sect. 4 we obtain the expressions of the three term relations for Uvarov orthogonal polynomials from the recurrence relations of the starting family. Finally, in Sect. 5 two examples are analyzed in detail, in order to show that the results are valid both for positive definite and for non positive definite linear functionals. The first one deals with the Uvarov orthogonal polynomials on the simplex, obtained by adding mass points to the positive definite linear functional of the classical bivariate Jacobi polynomials on the simplex. Explicit expressions for the matrices of the three term relations satisfied by the Uvarov modification are presented, and for specific values of the parameters we compare the zeros of the classical bivariate Jacobi polynomials and those of the Uvarov family. We must point out that the zeros of multivariate polynomials constitute a theory that remains largely open; only a few analytic results concerning zeros are known (see Dunkl and Xu 2014; Area et al. 2015). In general, the zero set of a multivariate polynomial is an algebraic curve, and different bases have different zeros. Finally, the second example is devoted to the Bessel-Laguerre case, which involves a non positive definite moment functional. For this case we show that the presented approach is also valid, giving rise to the matrices of the three term relations for the Uvarov modification.

2 Orthogonal polynomials in several variables

Let \({\mathbb {N}}_0\) be the set of nonnegative integers and \(d \in {{\mathbb {N}}}_{0}\). For \({m}=(m_1,\dots ,m_d)\in {\mathbb {N}}_0^d\) and \({x}=(x_1,\dots ,x_d)\in {\mathbb {R}}^d\), we write a monomial as

$$\begin{aligned} {x}^{{m}}=x_1^{m_1}\cdots x_d^{m_d}. \end{aligned}$$

The number \(|{m}|=m_1+\dots +m_d\) is called the total degree of \({x}^{{m}}\).

Throughout this paper, we denote by \(\Pi \) the linear space of polynomials in d variables with real coefficients, by \(\Pi _n\) the linear space of real polynomials of total degree not greater than n, and by \({\mathcal {P}}_n\) the space of homogeneous polynomials of total degree n in d variables. It is well known (Dunkl and Xu 2014) that

$$\begin{aligned} \dim \Pi _n = \left( {\begin{array}{c}n+d\\ n\end{array}}\right) , \quad \hbox { and}\quad \dim {\mathcal {P}}_n = \left( {\begin{array}{c}n+d-1\\ n\end{array}}\right) = r_n. \end{aligned}$$
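For instance, in the bivariate case \(d=2\) one has \(r_n = n+1\) and \(\dim \Pi _n = (n+1)(n+2)/2\), which is the setting of the examples in Sect. 5.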

For \(h, k\ge 1\), let \({{\mathcal {M}}}_{h\times k}({\mathbb {R}})\) and \({{\mathcal {M}}}_{h\times k}({\Pi })\) be the linear spaces of \((h\times k)\) matrices with real and polynomial entries, respectively, and let \(I_h\) represent the identity matrix of size \(h\times h\). In general, if \(h=k\), we will omit one of the subscripts.

We define the degree of a polynomial matrix \(A=\left( a_{i,j} ({x})\right) _{i,j=1}^{h,k} \in {{\mathcal {M}}}_{h\times k}(\Pi )\) as the maximum of the degrees of the entries in such a matrix, i. e.,

$$\begin{aligned} \deg \,A = \max \{\deg \,a_{i,j}({x}), 1\le i \le h, 1\le j \le k\}. \end{aligned}$$

As usual, given a matrix M, we will denote by \(M^{\top }\) its transpose, and, if M is a square matrix, then we will say that it is non-singular if \(\det M\ne 0\).

Following (Dunkl and Xu 2014, p. 71), if \(M_1\), \(M_2\), \(\ldots \), \(M_d\) are matrices of the same size \(p\times q\), then their joint matrix M is defined as

$$\begin{aligned} M= \begin{pmatrix} M_1\\ M_2\\ \vdots \\ M_d \end{pmatrix} = \begin{pmatrix} M_1^\top \, M_2^\top \,\ldots \,M_d^\top \end{pmatrix}^\top \end{aligned}$$

of size \(d\,p\times q\).

Given a sequence of polynomials of total degree n, \(\{P^n_{m}\}_{|m|=n}\), we will use the vector notation (see Dunkl and Xu 2014) to define \({\mathbb {P}}_n\) as the column vector polynomial

$$\begin{aligned} {\mathbb {P}}_n=(P_{{m}}^n)_{|{m}|=n}= (P_{{m}_1}^n,P_{{m}_2}^n,\ldots ,P_{{m}_{r_n}}^n)^{\top } \in {{\mathcal {M}}}_{{r_n}\times 1}({{\mathcal {P}}}_n), \end{aligned}$$

where \({m}_{1},{m}_{2},\ldots ,{m}_{r_n}\) are the elements in \(\{{m} \in {\mathbb {N}}_0^d:\,|{m}|=n \}\) arranged according to the reverse lexicographical order. Observe that \({\mathbb {P}}_0\) is a constant (\(r_0 = 1\)), and \({\mathbb {P}}_1 \) is a column vector of dimension \(r_1=d\) whose entries are independent multivariate polynomials of degree 1.

Definition 1

A polynomial system (PS) is a vector polynomial sequence \(\{{\mathbb {P}}_n\}_{n\ge 0}\) such that the set of the entries of \(\{{\mathbb {P}}_m\}_{m=0}^n\) is a basis of \(\Pi _n\) for each \(n\ge 0\), and by extension we will say that \(\{{\mathbb {P}}_m\}_{m=0}^n\) is a basis of \(\Pi _n\).

The simplest case of polynomial system is the so–called canonical polynomial system, defined as

$$\begin{aligned} \{{\mathbb {X}}_n\}_{n\ge 0} = \{({x}^{{m}_1}, {x}^{{m}_2}, \ldots , {x}^{{m}_{r_n}})^\top : |{m}_k| =n, 1\le k\le r_n \}_{n\ge 0}. \end{aligned}$$

Using the vector notation, for a given polynomial system \(\{{\mathbb {P}}_n\}_{n\ge 0}\), the vector polynomial \({\mathbb {P}}_n\) can be written as

$$\begin{aligned} {\mathbb {P}}_n({x}) = G_{n,n}\, {\mathbb {X}}_n + G_{n,n-1}\, {\mathbb {X}}_{n-1} + \cdots + G_{n,0}\, {\mathbb {X}}_0, \end{aligned}$$

where \(G_n = G_{n,n}\) is called the leading coefficient of \({\mathbb {P}}_n\), which is a square matrix of size \(r_n\). Moreover, since \(\{{\mathbb {P}}_m\}_{m=0}^n\) is a basis of \(\Pi _n\), then \(G_n\) is non-singular.

We will say that two PS \(\{{\mathbb {P}}_n\}_{n\ge 0}\) and \(\{{\mathbb {Q}}_n\}_{n\ge 0}\) have the same leading coefficient if \({\mathbb {P}}_0={\mathbb {Q}}_0\), and \({\mathbb {P}}_n\) and \({\mathbb {Q}}_n\) have the same leading coefficient matrix for \(n\ge 1\), that is, the entries of the vector \({\mathbb {P}}_n - {{\mathbb {Q}}}_n\) are polynomials in \(\Pi _{n-1}\).

In addition, a polynomial system is called monic if every polynomial contains only one monic term of highest degree, that is, for \(n\ge 0\),

$$\begin{aligned} P^n_{{m}_k}({x})= {x}^{{m}_{k}} + R({x}), \quad 1 \le k \le r_n, \end{aligned}$$

where \(|{m}_k| = n\), and \(R({x})\in \Pi _{n-1}\). Equivalently, a monic polynomial system is a polynomial system such that its leading coefficient is the identity matrix, i. e., \(G_n = I_{r_n}\), for \(n\ge 0\).

Usually, a moment linear functional u defined on \(\Pi \) is introduced from its moments. In fact (Dunkl and Xu 2014), let \(\{\mu _{{m}}\}_{{m}\in {\mathbb {N}}_0^d}\) be a multi–sequence of real numbers and let u be a real valued functional defined on \({\Pi }\) by means of

$$\begin{aligned} \langle u,{x}^{{m}}\rangle = u ({x}^{{m}}) = \mu _{{m}}, \end{aligned}$$

and extended by linearity. Then, u is called the moment functional determined by \(\{\mu _{{m}}\}_{{m}\in {\mathbb {N}}_0^d}\), and the number \(\mu _{{m}}\) is called the moment of order m.

The action of a moment linear functional u is extended over polynomial matrices in the following way (see, for instance, Dunkl and Xu 2014). Let \(A= \left( a_{i,j} ({x})\right) _{i,j=1}^{h,k} \in {{\mathcal {M}}}_{h\times k}(\Pi )\) be a polynomial matrix. Then

$$\begin{aligned} \langle u, A \rangle =\left( \langle u, a_{i,j}({x})\rangle \right) _{i,j=1}^{h,k}\in {{\mathcal {M}}}_{h\times k}({\mathbb {R}}). \end{aligned}$$

Given a moment functional u in \(\Pi \), two polynomials p and q are said to be orthogonal with respect to u if \(\langle u, p\,q\rangle =0\). For each \(n\ge 0\), let \({\mathcal {V}}_n \subset \Pi _n\) be the set of polynomials of total degree n that are orthogonal with respect to the linear functional u to all polynomials in \(\Pi _{n-1}\), together with zero. Then, \({\mathcal {V}}_n\) is a linear space of dimension less than or equal to \(r_n\) (Dunkl and Xu 2014).

Definition 2

An orthogonal polynomial system (OPS) with respect to a given linear functional u is a PS \(\{{\mathbb {P}}_n\}_{n\ge 0}\) such that

$$\begin{aligned} \langle u,{{\mathbb {P}}}_{n}{{\mathbb {P}}}_{m}^\top \rangle = H_n\, \Delta _{n,m}, \end{aligned}$$
(1)

where \(\Delta _{n,m}\) is the \(r_n\times r_m\) zero matrix for \(n\ne m\), and the identity matrix of size \(r_n\) for \(n=m\), and \(H_n\in {{\mathcal {M}}}_{r_n}({\mathbb {R}})\) is a symmetric and non-singular matrix.

Moreover, if \(H_n\) is a diagonal matrix, we will say that \(\{{\mathbb {P}}_n\}_{n\ge 0}\) is a mutually orthogonal polynomial system with respect to the linear functional u.

A moment functional u is called quasi definite if there exists an OPS with respect to u. Given a quasi definite moment linear functional u, orthogonal polynomial systems with respect to u are not unique. In fact, \(\{{\mathbb {P}}_n\}_{n\ge 0}\) and \(\{\widehat{{\mathbb {P}}}_n\}_{n\ge 0}\) are OPS associated with u if and only if there exist non-singular matrices \(F_n\) such that

$$\begin{aligned} \widehat{{\mathbb {P}}}_n = F_n\,{\mathbb {P}}_n, \quad n\ge 0. \end{aligned}$$

In this case, for \(n, m\ge 0\) we get

$$\begin{aligned} \langle u,{\widehat{{\mathbb {P}}}}_{n}{\widehat{{\mathbb {P}}}}_{m}^\top \rangle = F_n\, \langle u,{{\mathbb {P}}}_{n}{{\mathbb {P}}}_{m}^\top \rangle F_m^\top = {\widehat{H}}_n\,\Delta _{n,m}, \end{aligned}$$

where \({\widehat{H}}_n = F_n\,H_n\,F_n^\top \) is a non-singular and symmetric matrix. Therefore, there exists a unique monic orthogonal polynomial system that can be obtained by

$$\begin{aligned} \widehat{{\mathbb {P}}}_n = G_n^{-1}\,{\mathbb {P}}_n, \quad n\ge 0, \end{aligned}$$

where \(G_n\) are the respective leading coefficients of \({\mathbb {P}}_n\).

Observe that u is quasi definite if and only if

$$\begin{aligned} \dim {\mathcal {V}}_n = r_n, \quad \forall n \ge 0. \end{aligned}$$

In this case, a PS \(\{{\mathbb {P}}_n\}_{n\ge 0}\) is an OPS if and only if the set of entries of the vector \({\mathbb {P}}_n\) is a basis of \({\mathcal {V}}_n\), \(n\ge 0\).

A moment functional u is called positive definite if \(\langle u, p^2({x})\rangle > 0\) for all \(p({x}) \in \Pi \), \(p({x})\not \equiv 0\). A positive definite moment functional is quasi definite, the matrices \(H_n\) can be taken positive definite, and, in particular, it is possible to construct an orthonormal polynomial system. If \(H_n\) is the identity matrix, then \(\{P_m^n\}_{|m|=n}\) is an orthonormal basis for \({\mathcal {V}}_n\) and the OPS is called an orthonormal polynomial system.

Orthogonal polynomials in several variables are characterized by d vector–matrix three term relations (see Dunkl and Xu 2014, Theorem 3.3.7, p. 74). More precisely,

Theorem 3

(Dunkl and Xu 2014) Let \(\{{\mathbb {P}}_n\}_{n\ge 0} = \{ P^n_{\mathrm {m}}(\mathrm {x}): |\mathrm {m}| = n, n\in {\mathbb {N}}_0\}, {\mathbb {P}}_0=1\), be an arbitrary sequence in \(\Pi \). Then the following statements are equivalent.

  1.

    There exists a linear functional u which defines a quasi definite moment functional on \(\Pi \) and which makes \(\{{\mathbb {P}}_n\}_{n\ge 0}\) an orthogonal basis in \(\Pi \).

  2.

    For \(n\ge 0\), \(1\le i\le d\), there exist matrices \(A_{n,i}\), \(B_{n,i}\) and \(C_{n,i}\) of respective sizes \(r_n\times r_{n+1}, \, r_n\times r_{n}\) and \( r_n \times r_{n-1}\), such that

    (a)

      the polynomials \({\mathbb {P}}_n\) satisfy the three term relations

      $$\begin{aligned} x_i {\mathbb {P}}_n = A_{n,i} {\mathbb {P}}_{n+1} + B_{n,i} {\mathbb {P}}_{n} +C_{n,i} {\mathbb {P}}_{n-1}, \quad 1\le i\le d, \end{aligned}$$
      (2)

      with \({\mathbb {P}}_{-1}=0\), \(C_{-1,i} =0\), and \(\mathrm {x} = (x_1, x_2, \ldots , x_d)\),

    (b)

      for \(n\ge 0\) and \(1\le i\le d\), the matrices \(A_{n,i}\) and \(C_{n+1,i}\) satisfy the rank conditions

      $$\begin{aligned} {\text {rank}}\, A_{n,i} = {\text {rank}}\, C_{n+1,i}=r_n, \end{aligned}$$
      (3)

      and, for the joint matrices \(A_n\) of \(A_{n,1}, A_{n,2},\ldots , A_{n,d}\), of size \(d\,r_{n}\times r_{n+1}\) and \(C_{n+1}^\top \) of \(C_{n+1,1}^\top , C_{n+1,2}^\top ,\ldots , C_{n+1,d}^\top \), of size \(d\,r_{n}\times r_{n+1}\), we get

      $$\begin{aligned} {\text {rank}}\, A_{n} = {\text {rank}}\, C_{n+1}^\top = r_{n+1}. \end{aligned}$$
      (4)

    In that case, we get

    $$\begin{aligned} {\left\{ \begin{array}{ll} A_{n,i} \, H_{n+1} = \langle u, x_i\, {\mathbb {P}}_{n}{\mathbb {P}}_{n+1}^\top \rangle , \\ B_{n,i} \, H_{n} = \langle u, x_i\, {\mathbb {P}}_{n}{\mathbb {P}}_{n}^\top \rangle , \\ C_{n,i} \, H_{n-1} = \langle u, x_i\, {\mathbb {P}}_{n}{\mathbb {P}}_{n-1}^\top \rangle , \end{array}\right. } \end{aligned}$$
    (5)

    where \(H_n = \langle u, {\mathbb {P}}_{n}{\mathbb {P}}_{n}^\top \rangle \), and \(A_{n,i} \,H_{n+1} = H_n\,C_{n+1,i}^\top \).

Relations (2) can be written in a block matrix way (Kowalski 1982a, b; Dunkl and Xu 2014). In fact, for \(1\le i \le d\), we define the block Jacobi matrices

$$\begin{aligned} {\mathcal {J}}_i = \begin{pmatrix} B_{0,i} &{}\quad A_{0,i} &{}\quad &{}\quad \bigcirc \\ C_{1,i} &{}\quad B_{1,i} &{}\quad A_{1,i} &{}\quad \\ &{} C_{2,i} &{}\quad B_{2,i} &{}\quad \ddots \\ \bigcirc &{}\quad &{}\quad \ddots &{}\quad \ddots \end{pmatrix}. \end{aligned}$$
(6)

Observe that the entries of the block Jacobi matrices are the coefficients of the ith three term relation, whose sizes increase to infinity. If we denote

$$\begin{aligned} {\mathcal {P}} = ({\mathbb {P}}_0^\top , {\mathbb {P}}_1^\top , {\mathbb {P}}_2^\top ,\ldots )^\top , \end{aligned}$$

the column vector of all polynomials, then the three term relations (2) become

$$\begin{aligned} x_i\,{\mathcal {P}}={\mathcal {J}}_i\,{\mathcal {P}}, \quad 1\le i\le d. \end{aligned}$$
(7)
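As an illustration of how the block structure in (6) is organized, the following minimal sketch (Python, assuming the blocks \(A_{n,i}\), \(B_{n,i}\), \(C_{n,i}\) are available as numerical arrays of the sizes stated in Theorem 3; the function name and interface are illustrative only) assembles a finite truncation of \({\mathcal {J}}_i\).

```python
import numpy as np

def truncated_block_jacobi(B, A, C):
    """Assemble a finite section of the block Jacobi matrix (6).

    B = [B_0, ..., B_m] with B_n of size r_n x r_n,
    A = [A_0, ..., A_{m-1}] with A_n of size r_n x r_{n+1},
    C = [C_1, ..., C_m] with C_{n+1} of size r_{n+1} x r_n.
    """
    sizes = [b.shape[0] for b in B]                  # r_0, ..., r_m
    offs = np.concatenate(([0], np.cumsum(sizes)))   # block offsets
    J = np.zeros((offs[-1], offs[-1]))
    for n, Bn in enumerate(B):
        J[offs[n]:offs[n+1], offs[n]:offs[n+1]] = Bn
        if n + 1 < len(B):
            J[offs[n]:offs[n+1], offs[n+1]:offs[n+2]] = A[n]    # A_{n,i}
            J[offs[n+1]:offs[n+2], offs[n]:offs[n+1]] = C[n]    # C_{n+1,i}
    return J
```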

The version of the above theorem for orthonormal polynomial systems \(\{{\mathbb {P}}_n\}_{n\ge 0}\) is obtained by replacing \(C_{n,i}\) with \(A_{n-1,i}^\top \),  \( 1 \le i \le d\), since \(H_n = I_{r_n}\), \( \, n \ge 0\).

If \(\{\widehat{{\mathbb {P}}}_n\}_{n\ge 0}\) is another OPS associated with u, then there exist non-singular matrices \(F_n\) such that \(\widehat{{\mathbb {P}}}_n = F_n\,{\mathbb {P}}_n, n\ge 0\). Multiplying (2) by \(F_n\), we deduce that

$$\begin{aligned} x_i \widehat{{\mathbb {P}}}_n = F_n \,A_{n,i}\,F_{n+1}^{-1} \widehat{{\mathbb {P}}}_{n+1} + F_n\,B_{n,i}\,F_n^{-1} \widehat{{\mathbb {P}}}_{n} + F_n\,C_{n,i}\,F_{n-1}^{-1} \widehat{{\mathbb {P}}}_{n-1}, \quad 1\le i\le d. \end{aligned}$$

This means that \(\{\widehat{{\mathbb {P}}}_n\}_{n\ge 0}\) satisfy the three term relations

$$\begin{aligned} x_i \widehat{{\mathbb {P}}}_n = {\widehat{A}}_{n,i} \widehat{{\mathbb {P}}}_{n+1} + {\widehat{B}}_{n,i}\widehat{{\mathbb {P}}}_{n} + {\widehat{C}}_{n,i} \widehat{{\mathbb {P}}}_{n-1}, \quad 1\le i\le d, \end{aligned}$$

where \({\widehat{A}}_{n,i} = F_n A_{n,i}F_{n+1}^{-1}\), \({\widehat{B}}_{n,i} = F_n B_{n,i} F_n^{-1}\), and \({\widehat{C}}_{n,i} = F_n C_{n,i}F_{n-1}^{-1}\). Obviously, the rank conditions (3) and (4) are preserved since the rank is unchanged upon left or right multiplication by a non-singular matrix (Horn and Johnson 2013, p. 13).

When the orthogonal polynomial system \(\{{\mathbb {P}}_n\}_{n\ge 0}\) is monic, comparing the leading coefficient matrices on both sides of (2), it follows that \(A_{n,i} = L_{n,i}\), for \(n\ge 0\), and \(1\le i\le d\) (Dunkl and Xu 2014), where \(L_{n,i}\) are matrices of size \(r_n\times r_{n+1}\) defined by

$$\begin{aligned} x_i \, {\mathbb {X}}_n = L_{n,i} \, {\mathbb {X}}_{n+1}, \quad 1 \le i \le d\,. \end{aligned}$$

These matrices verify \(L_{n,i}L_{n,i}^\top =I_{r_n}\), and \({\text {rank}}L_{n,i} = r_n\). Moreover, the rank of the joint matrix \(L_n\) of \(L_{n,i}\) is \(r_{n+1}\) (Dunkl and Xu 2014, p. 71).

For the particular case \(d=2\), we have that \(L_{n,i}\), \(i=1,2\), are the \((n+1)\times (n+2)\) matrices defined as

$$\begin{aligned} L_{n,1} = \left( \,I_{n+1} \; | \; {\mathbf {0}}\, \right) , \qquad L_{n,2} = \left( \,{\mathbf {0}} \; | \; I_{n+1}\, \right) , \end{aligned}$$
(8)

where \({\mathbf {0}}\) denotes the zero column vector of length \(n+1\).
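As a quick sanity check of (8), the following sketch (Python with sympy; all names are illustrative) verifies \(x\,{\mathbb {X}}_n = L_{n,1}\,{\mathbb {X}}_{n+1}\), \(y\,{\mathbb {X}}_n = L_{n,2}\,{\mathbb {X}}_{n+1}\) and \(L_{n,i}L_{n,i}^\top = I_{n+1}\) for a small value of n.

```python
import sympy as sp

x, y = sp.symbols('x y')
n = 3
Xn   = sp.Matrix([x**(n - k) * y**k for k in range(n + 1)])       # X_n, reverse lexicographical order
Xnp1 = sp.Matrix([x**(n + 1 - k) * y**k for k in range(n + 2)])   # X_{n+1}
L1 = sp.Matrix.hstack(sp.eye(n + 1), sp.zeros(n + 1, 1))          # L_{n,1} = (I_{n+1} | 0)
L2 = sp.Matrix.hstack(sp.zeros(n + 1, 1), sp.eye(n + 1))          # L_{n,2} = (0 | I_{n+1})
assert x * Xn == L1 * Xnp1 and y * Xn == L2 * Xnp1
assert L1 * L1.T == sp.eye(n + 1) and L2 * L2.T == sp.eye(n + 1)
```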

In the general case, comparing the leading coefficient matrices at both sides of (2), we get \( G_n\,L_{n,i} = A_{n,i}\, G_{n+1} \), that is,

$$\begin{aligned} A_{n,i} = G_n\,L_{n,i}\,G_{n+1}^{-1}, \end{aligned}$$
(9)

where \(G_{n}\) is the leading coefficient matrix of \({\mathbb {P}}_n\).

Let u be a quasi definite moment linear functional, and let \(\{{\mathbb {P}}_n\}_{n\ge 0}\) be an OPS with respect to u. In terms of \(\{{\mathbb {P}}_n\}_{n\ge 0}\), the kernel of \({\mathcal {V}}_m\), denoted by \({\mathbf {P}}_m(u;{x},{y})\) (Dunkl and Xu 2014, p. 97), is defined by

$$\begin{aligned} {\mathbf {P}}_m(u;{x},{y}) = {\mathbb {P}}_m^\top ({x})\, H^{-1}_m\, {\mathbb {P}}_m({y}) , \quad m \ge 0. \end{aligned}$$

Similarly, the kernel of \(\Pi _n\) takes the form

$$\begin{aligned} {{\textbf {K}}}_n(u; {x},{y}) = \sum _{m=0}^n {\mathbb {P}}_m^\top ({x})\, H_m^{-1}\, {\mathbb {P}}_m({y}) = \sum _{m=0}^n {\mathbf {P}}_m(u;{x},{y}), \quad n\ge 0. \end{aligned}$$

The definition of both kernels does not depend on a particular basis.

For orthogonal polynomials in one variable, the kernel function is called the reproducing kernel function. In several variables, we have an analogous property (Dunkl and Xu 2014), namely,

$$\begin{aligned}&\langle u, {{\textbf {P}}}_m({{{x}}}, \cdot )\, p(\cdot ) \rangle = p({x}), \qquad p\in {\mathcal {V}}_m, \qquad m\ge 0,\\&\langle u, {{\textbf {K}}}_n({{{x}}}, \cdot )\, p(\cdot ) \rangle = p({x}), \qquad p\in \Pi _n, \qquad n\ge 0. \end{aligned}$$
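As a minimal numerical illustration of the reproducing property (a sketch only, restricted to the univariate case \(d=1\) with the Legendre functional on \([-1,1]\) and orthonormal polynomials, so that \(H_m = 1\); all names are illustrative), the identity \(\langle u, {{\textbf {K}}}_n({x},\cdot )\,p(\cdot )\rangle = p({x})\) can be checked by quadrature.

```python
import numpy as np
from numpy.polynomial import legendre as leg

n = 4
xq, wq = leg.leggauss(12)                       # Gauss-Legendre quadrature, exact for these degrees

def P(m, t):                                    # orthonormal Legendre polynomial of degree m
    c = np.zeros(m + 1); c[m] = 1.0
    return leg.legval(t, c) * np.sqrt((2 * m + 1) / 2)

def K(t, s):                                    # K_n(u; t, s) = sum_{m=0}^n P_m(t) P_m(s)
    return sum(P(m, t) * P(m, s) for m in range(n + 1))

p = np.polynomial.Polynomial([0.5, -1.0, 0.0, 3.0])     # a test polynomial of degree 3 <= n
x0 = 0.3
print(np.sum(wq * K(x0, xq) * p(xq)), p(x0))            # <u, K_n(x0, .) p(.)> reproduces p(x0)
```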

3 Uvarov orthogonal polynomials in several variables

From now on, let u be a quasi definite moment functional defined on \(\Pi \). Then orthogonal polynomials in several variables with respect to u exist, and we denote by \(\{{\mathbb {P}}_n\}_{n\ge 0}\) an OPS associated with it.

Let \(N \ge 1\) be a positive integer and let \(\xi ^{(1)}, \xi ^{(2)}, \ldots , \xi ^{(N)}\) be distinct points in \({\mathbb {R}}^d\). Every point has d entries, so we will write

$$\begin{aligned} \xi ^{(j)} = (\xi ^{(j)}_1, \xi ^{(j)}_2, \ldots , \xi ^{(j)}_d), \quad 1\le j \le N, \end{aligned}$$

and \(\xi ^{(j)}_i \in {\mathbb {R}}\), for \(1\le i\le d\).

Let \(\Lambda \) be a symmetric matrix of size \(N \times N\). We define the new moment functional v as a Uvarov modification of the original moment functional given by

$$\begin{aligned} \langle v, p\,q \rangle = \langle u, p\, q \rangle +{\mathbf {p}}(\xi ) \, \Lambda \,{\mathbf {q}}(\xi )^\top , \end{aligned}$$
(10)

for \(p, q \in \Pi \), where

$$\begin{aligned} {\mathbf {p}}(\xi ) = \big (p(\xi ^{(1)}), p(\xi ^{(2)}), \ldots , p(\xi ^{(N)})\big ), \end{aligned}$$

denotes the vector of evaluations of the polynomial p(x) at the points \(\xi ^{(1)}, \xi ^{(2)}, \ldots , \xi ^{(N)}\).
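For instance, for a single mass point (\(N=1\), \(\xi = \xi ^{(1)}\)) and \(\Lambda = (\lambda )\), the modification (10) reduces to the familiar perturbation

$$\begin{aligned} \langle v, p\,q \rangle = \langle u, p\, q \rangle + \lambda \, p(\xi )\,q(\xi ), \end{aligned}$$

which in one variable is the classical Uvarov transformation mentioned in the Introduction.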

We want to know how the modified moment functional (10) acts on polynomial systems. Given a PS \(\{{\mathbb {P}}_n\}_{n\ge 0}\), if we denote by \({\mathsf {P}}_n(\xi )\) the matrix that has \({\mathbb {P}}_n(\xi ^{(j)})\) as columns,

$$\begin{aligned} {\mathsf {P}}_n(\xi ):= \left( {\mathbb {P}}_n(\xi ^{(1)}) | {\mathbb {P}}_n(\xi ^{(2)}) | \ldots | {\mathbb {P}}_n(\xi ^{(N)}) \right) \in {\mathcal {M}}_{r_n\times N}, \end{aligned}$$
(11)

then, the action of (10) is as follows:

$$\begin{aligned} \langle v, {\mathbb {P}}_n\, {\mathbb {P}}_m^\top \rangle = \langle u, {\mathbb {P}}_n\, {\mathbb {P}}_m^\top \rangle + {\mathsf {P}}_n(\xi )\,\Lambda \,{\mathsf {P}}_m^\top (\xi ) \in {\mathcal {M}}_{r_n\times r_m}({\mathbb {R}}). \end{aligned}$$

If (10) is quasi definite, we denote by \(\{{\mathbb {Q}}_n\}_{n\ge 0}\) an orthogonal polynomial system associated with it, such that \({\mathbb {P}}_n\) and \({\mathbb {Q}}_n\) have the same leading coefficient, for all \(n\ge 0\).

Following Delgado et al. (2010), if u is given by means of a measure \(d\mu ({x})\) on \({\mathbb {R}}^d\) with all finite moments and we assume that \(d \mu \) is positive definite in the sense that

$$\begin{aligned} {\langle u, p^2\rangle } = \int _{{\mathbb {R}}^d} p^2({x}) d\mu ({x}) >0, \quad p \in \Pi , \quad p \not \equiv 0, \end{aligned}$$

and the matrix \(\Lambda \) is positive definite, then v is positive definite and an OPS with respect to v exists.

Our first goal is to study the existence of orthogonal polynomials with respect to the moment functional v defined in (10).

We need to introduce some additional notation. If \(\{{\mathbb {P}}_n\}_{n\ge 0}\) is an OPS with respect to u, then we denote by \({\mathsf {K}}_{n-1}\) the square matrix of constants whose entries are \({\mathbf {K}}_{n-1}(u;\xi ^{(j)},\xi ^{(k)})\),

$$\begin{aligned} {\mathsf {K}}_{n-1} := \big ({\mathbf {K}}_{n-1}(u;\xi ^{(j)},\xi ^{(k)}) \big )_{j,k=1}^N \in {\mathcal {M}}_{N\times N}, \end{aligned}$$
(12)

and, denote by \({\mathbb {K}}_{n-1}(\xi ,{x})\) the \(N\times 1\) vector of polynomials

$$\begin{aligned} {{\mathbb {K}}_{n-1}(\xi ,{x})} = \big ({\mathbf {K}}_{n-1}(u; \xi ^{(1)},{x}), {\mathbf {K}}_{n-1}(u;\xi ^{(2)},{x}), \ldots , {\mathbf {K}}_{n-1}(u;\xi ^{(N)},{x})\big )^\top . \end{aligned}$$
(13)

In (12) and (13) for \(n=0\) we assume \({\mathbf {K}}_{-1}(u;{x},{y}) = 0\).

From the fact that \({\mathbf {K}}_{n}(u; {x}, {y}) - {\mathbf {K}}_{n-1}(u; {x}, {y}) = {\mathbf {P}}_n(u;{x},{y})\), we have immediately the following relations,

$$\begin{aligned} {\mathsf {P}}_n^\top (\xi ) \, H_n^{-1}\,{\mathbb {P}}_n ({x})&= {\mathbb {K}}_n(\xi ,{x}) - {\mathbb {K}}_{n-1}(\xi ,{x}), \nonumber \\ {\mathsf {P}}_n^\top (\xi ) \, H_n^{-1}\, {\mathsf {P}}_n (\xi )&= {\mathsf {K}}_n - {\mathsf {K}}_{n-1}, \end{aligned}$$
(14)

which will be used below.

In Fernández et al. (2010), a necessary and sufficient condition for v to be quasi definite when \(N=1\) is given. In addition, orthogonal polynomials with respect to v can be expressed in terms of those with respect to the linear functional u. That result can be extended to \(N \ge 1\) by using a similar technique as in Delgado et al. (2010), but in this case we focus our attention on the analysis of the quasi definite character of the perturbed linear functional.

Theorem 4

The moment linear functional v is quasi definite if and only if the \(N\times N\) matrix

$$\begin{aligned} \Lambda _n = I_N + \Lambda \,{{\mathsf {K}}_{n}}, \end{aligned}$$

is non-singular for \(n\ge 0\). In this case, if \(\{{\mathbb {P}}_n\}_{n\ge 0}\) denotes an OPS with respect to the linear functional u, the system of polynomials \(\{{\mathbb {Q}}_n\}_{n\ge 0}\) defined by

$$\begin{aligned} {\mathbb {Q}}_n(\mathrm {x}) = {\mathbb {P}}_n(\mathrm {x}) - {{\mathsf {P}}_n(\xi )}\,\Xi _{n-1} \,{{\mathbb {K}}_{n-1}(\xi ,\mathrm {x})}, \qquad n\ge 0, \end{aligned}$$
(15)

is an OPS with respect to the linear functional v taking \({\mathsf {K}}_{-1}(\cdot ,\cdot )=0\), where

$$\begin{aligned} \Xi _{n}=(I_N + \Lambda \,{{\mathsf {K}}_{n}})^{-1}\,\Lambda . \end{aligned}$$
(16)

Moreover,

$$\begin{aligned} {\mathsf {Q}}_n(\xi ) = {\mathsf {P}}_n(\xi ) (I_N + \,\Lambda \,{\mathsf {K}}_{n-1})^{-1}. \end{aligned}$$
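The following numerical sketch illustrates (15)-(16) in the simplest setting (Python; a univariate illustration with \(d=1\), so that \(r_n=1\), u the Legendre functional on \([-1,1]\), orthonormal \(P_n\) so that \(H_n=1\), two mass points and a positive definite \(\Lambda \); all names and the chosen data are illustrative assumptions, not part of the paper). It builds the polynomials \(Q_n\) and checks their v-orthogonality by quadrature.

```python
import numpy as np
from numpy.polynomial import legendre as leg

xi  = np.array([-1.0, 1.0])                               # the points xi^(1), xi^(2)
Lam = np.diag([0.5, 0.25])                                # Lambda
xq, wq = leg.leggauss(20)                                 # quadrature representing <u, .>

def P(n, t):                                              # orthonormal Legendre, so H_n = 1
    c = np.zeros(n + 1); c[n] = 1.0
    return leg.legval(t, c) * np.sqrt((2 * n + 1) / 2)

def Kmat(n):                                              # the matrix K_n of (12)
    return sum(np.outer(P(m, xi), P(m, xi)) for m in range(n + 1))

def Q(n, t):                                              # formula (15), with Xi_{n-1} from (16)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    if n == 0:
        return P(0, t)
    Xi   = np.linalg.solve(np.eye(2) + Lam @ Kmat(n - 1), Lam)
    Kvec = sum(np.outer(P(m, xi), P(m, t)) for m in range(n))   # K_{n-1}(xi, t)
    return P(n, t) - P(n, xi) @ Xi @ Kvec

def v(p, q):                                              # the Uvarov bilinear form (10)
    return np.sum(wq * p(xq) * q(xq)) + p(xi) @ Lam @ q(xi)

# Q_3 is v-orthogonal to Q_0, Q_1, Q_2 (the printed values vanish up to rounding):
print([v(lambda t: Q(3, t), lambda t, m=m: Q(m, t)) for m in range(3)])
```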

In the rest of the section, we will suppose that v is quasi definite, and we denote by \(\{{\mathbb {Q}}_n\}_{n\ge 0}\) an OPS with respect to v defined by (15). Also we denote

$$\begin{aligned} {\widetilde{H}}_n := \langle v, {\mathbb {Q}}_n\, {\mathbb {Q}}_n^\top \rangle . \end{aligned}$$

Then \({\widetilde{H}}_n\) is an \(r_n\times r_n\) symmetric non-singular matrix. It turns out that both \({\widetilde{H}}_n\) and \({\widetilde{H}}_n^{-1}\) can be expressed in terms of matrices that only involve \(\{{\mathbb {P}}_m\}_{m \ge 0}\). In Delgado et al. (2010) the simplest case was considered, namely when u is positive definite and \(\{{\mathbb {P}}_n\}_{n\ge 0}\) is orthonormal, that is, \(H_n = I_{r_n}\).

Proposition 5

For \(n \ge 0\),

$$\begin{aligned} {\widetilde{H}}_n&= H_n + {\mathsf {P}}_n(\xi )\, \Xi _{n-1} {\mathsf {P}}_n^\top (\xi ), \\ {\widetilde{H}}^{-1}_n&= H_{n}^{-1} - H_{n}^{-1}\,{\mathsf {P}}_n(\xi )\, \Xi _{n} {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1}, \end{aligned}$$

where \(\Xi _{n}\) is defined in (16).

Proof

We compute directly,

$$\begin{aligned} {\widetilde{H}}_n = \langle v, {\mathbb {Q}}_n\, {\mathbb {Q}}_n^\top \rangle = \langle v, {\mathbb {Q}}_n\, {\mathbb {P}}_n^\top \rangle = \langle u, {\mathbb {Q}}_n\, {\mathbb {P}}_n^\top \rangle + {\mathsf {Q}}_n(\xi ) \,\Lambda \, {\mathsf {P}}_n^\top (\xi ) = H_n + {\mathsf {P}}_n(\xi ) \, \Xi _{n-1} \, {\mathsf {P}}_n^\top (\xi ). \end{aligned}$$

In order to study the inverse, we calculate

$$\begin{aligned}&{\widetilde{H}}_n [H_{n}^{-1} - H_{n}^{-1}\,{\mathsf {P}}_n(\xi ) \, \Xi _{n}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1}]\\&\quad = [H_n + {\mathsf {P}}_n(\xi ) \, \Xi _{n-1}\, {\mathsf {P}}_n^\top (\xi )] [H_{n}^{-1} - H_{n}^{-1}\,{\mathsf {P}}_n(\xi ) \, \Xi _{n}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1}] \\&\quad = I_{r_n} + {\mathsf {P}}_n(\xi ) \, \Xi _{n-1}\, {\mathsf {P}}_n^\top (\xi )H_{n}^{-1} - H_n H_{n}^{-1}\,{\mathsf {P}}_n(\xi ) \, \Xi _{n}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1}\\&\qquad - {\mathsf {P}}_n(\xi ) \, \Xi _{n-1}\, {\mathsf {P}}_n^\top (\xi )H_{n}^{-1}\,{\mathsf {P}}_n(\xi ) \, \Xi _{n}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1}. \end{aligned}$$

We study this last term. Observe that, from (14), we get

$$\begin{aligned} \Lambda {\mathsf {P}}_n^\top (\xi )H_{n}^{-1}\,{\mathsf {P}}_n(\xi ) = \Lambda \, ({\mathsf {K}}_{n} - {\mathsf {K}}_{n-1}) = (I_N + \Lambda \,{\mathsf {K}}_{n}) - (I_N + \Lambda \, {\mathsf {K}}_{n-1}), \end{aligned}$$

and then,

$$\begin{aligned}&{\mathsf {P}}_n(\xi ) \, \Xi _{n-1}\, {\mathsf {P}}_n^\top (\xi )H_{n}^{-1}\,{\mathsf {P}}_n(\xi ) \, \Xi _{n}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1} \\&\quad = {\mathsf {P}}_n(\xi ) (I_N + \,\Lambda \,{\mathsf {K}}_{n-1})^{-1} [(I_N + \Lambda \,{\mathsf {K}}_{n}) - (I_N + \Lambda \, {\mathsf {K}}_{n-1})] \, \Xi _{n}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1}\\&\quad = {\mathsf {P}}_n(\xi ) \, \Xi _{n-1}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1} - {\mathsf {P}}_n(\xi ) \, \Xi _{n}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1}. \end{aligned}$$

Substituting above, we get

$$\begin{aligned} {\widetilde{H}}_n [H_{n}^{-1} - H_{n}^{-1}\,{\mathsf {P}}_n(\xi ) \, \Xi _{n}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1}] = I_{r_n}, \end{aligned}$$

and the result holds. \(\square \)
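For instance, for a single mass point (\(N=1\)) the matrix \(\Xi _{n}\) reduces to the scalar \(\lambda /(1+\lambda \,{\mathbf {K}}_{n}(u;\xi ,\xi ))\), and Proposition 5 gives

$$\begin{aligned} {\widetilde{H}}_n = H_n + \frac{\lambda }{1+\lambda \,{\mathbf {K}}_{n-1}(u;\xi ,\xi )}\, {\mathbb {P}}_n(\xi )\,{\mathbb {P}}_n^\top (\xi ), \end{aligned}$$

a rank one perturbation of \(H_n\).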

The next result gives explicit formulas for the reproducing kernels associated with v, which we denote by

$$\begin{aligned} {\mathbf {P}}_m(v; {x},{y}):= {\mathbb {Q}}_m^\top ({x})\, {\widetilde{H}}^{-1}_m\, {\mathbb {Q}}_m({y}) \quad \hbox {and} \quad {\mathbf {K}}_n(v; {x},{y}) := \sum _{m=0}^n {\mathbf {P}}_m(v;{x},{y}). \end{aligned}$$

Theorem 6

For \(m\ge 0\), \(\Xi _{m}\) defined in (16) is a symmetric matrix, and

$$\begin{aligned} {\mathbf {P}}_m(v; \mathrm {x} ,\mathrm {y}) = {\mathbf {P}}_m(u; \mathrm {x},\mathrm {y}) - {\mathbb {K}}_m^\top (\xi ,\mathrm {x}) \, \Xi _{m}\, {\mathbb {K}}_m(\xi ,\mathrm {y}) + {\mathbb {K}}_{m-1}^\top (\xi ,\mathrm {x}) \, \Xi _{m-1}\, {\mathbb {K}}_{m-1}(\xi ,\mathrm {y}), \end{aligned}$$

where we assume \({\mathbb {K}}_{-1}(\xi ,\cdot ) \equiv 0\). Furthermore, for \(n \ge 0\),

$$\begin{aligned} {\mathbf {K}}_n(v; \mathrm {x},\mathrm {y}) = {\mathbf {K}}_n(u; \mathrm {x},\mathrm {y}) - {\mathbb {K}}_n^\top (\xi ,\mathrm {x}) \, \Xi _{n}\, {\mathbb {K}}_n(\xi ,\mathrm {y}). \end{aligned}$$

4 Three term relations for Uvarov orthogonal polynomials

In this section, let \(\{{\mathbb {P}}_n\}_{n\ge 0}\) be an OPS with respect to u, and let \(\{{\mathbb {Q}}_n\}_{n\ge 0}\) be an OPS with respect to v such that they have the same leading coefficient.

Both OPS satisfy three term relations, denoted by

$$\begin{aligned} x_i \, {\mathbb {P}}_n = A_{n,i} {\mathbb {P}}_{n+1} + B_{n,i} {\mathbb {P}}_{n} + C_{n,i} {\mathbb {P}}_{n-1}, \quad 1\le i\le d, \end{aligned}$$
(17)

where \({\mathbb {P}}_{-1}=0\) and \({\mathbb {P}}_{0}=G_0 \ne 0\), and by

$$\begin{aligned} x_i \, {\mathbb {Q}}_n = {\widetilde{A}}_{n,i} {\mathbb {Q}}_{n+1} + {\widetilde{B}}_{n,i} {\mathbb {Q}}_{n} + {\widetilde{C}}_{n,i} {\mathbb {Q}}_{n-1}, \quad 1\le i\le d, \end{aligned}$$
(18)

with \({\mathbb {Q}}_{-1}=0\) and \({\mathbb {Q}}_{0}=G_0 \ne 0\). The coefficients \(B_{n,i}\) and \({\widetilde{B}}_{n,i}\), for \(n\ge 0\), are \(r_n\times r_n\) matrices, the coefficients \(A_{n,i}\) and \({\widetilde{A}}_{n,i}\) are \(r_n \times r_{n+1}\) matrices, and \(C_{n,i}\) and \({\widetilde{C}}_{n,i}\) are \(r_n\times r_{n-1}\) matrices, given by (5) and satisfying the respective rank conditions (3) and (4). Here, \( {x} = (x_1, x_2, \ldots , x_d). \)

Theorem 7

The matrices \({\widetilde{A}}_{n,i}\), \({\widetilde{B}}_{n,i}\) and \({\widetilde{C}}_{n,i}\) of the three term relations (18) for the vector polynomials \(\{{\mathbb {Q}}_n\}_{n\ge 0}\) orthogonal with respect to the linear functional v defined in (10) are given by

$$\begin{aligned} \begin{aligned} {\widetilde{A}}_{n,i}&= A_{n,i}, \quad n\ge 0,\\ {\widetilde{B}}_{n,i}&= B_{n,i} + A_{n,i}{\mathsf {P}}_{n+1}(\xi )\Xi _{n}{\mathsf {P}}_n^\top (\xi ) H_n^{-1} - {\mathsf {P}}_{n}(\xi )\Xi _{n-1}{\mathsf {P}}_{n-1}^\top (\xi ) H_{n-1}^{-1}A_{n-1,i}, \quad n\ge 0 ,\\ {\widetilde{C}}_{n,i}&= [I_{r_{n}}+{\mathsf {P}}_{n}(\xi ) \Xi _{n-1} {\mathsf {P}}_{n}^\top (\xi )H_{n}^{-1}]\,C_{n,i}\, [I_{r_{n-1}}-{\mathsf {P}}_{n-1}(\xi ) \Xi _{n-1} {\mathsf {P}}_{n-1}^\top (\xi )H_{n-1}^{-1}], \quad n\ge 1, \end{aligned} \end{aligned}$$
(19)

where the matrices \(A_{n,i}\), \(B_{n,i}\) and \(C_{n,i}\) are those of the three term relations for the polynomials \(\{{\mathbb {P}}_n\}_{n\ge 0}\) orthogonal with respect to the linear functional u, \({\mathsf {P}}_n(\xi )\) is defined in (11), the matrices \(H_{n}\) are defined in (1), and \(\Xi _{n}\) is defined in (16).

Proof

Since both OPS have the same leading coefficient, by (9) we get \(A_{n,i} = {\widetilde{A}}_{n,i}\), for \(1\le i \le d\), and \(n\ge 0\).

We will compute \({\widetilde{B}}_{n,i}\) from the explicit expressions of the vector polynomials in terms of the canonical basis. In this way, we know that \({\mathbb {P}}_0({x}) = {\mathbb {Q}}_0({x})\), and, for \(n\ge 1\), we can express

$$\begin{aligned} {\mathbb {P}}_n({x}) = G_n\,{\mathbb {X}}_n + G^{n}_{n-1}\, {\mathbb {X}}_{n-1} + \sum _{m=0}^{n-2} G^{n}_{m}\, {\mathbb {X}}_m, \quad {\mathbb {Q}}_n({x}) = {\widetilde{G}}_n\,{\mathbb {X}}_n + {\widetilde{G}}^{n}_{n-1}\, {\mathbb {X}}_{n-1} + \sum _{m=0}^{n-2} {\widetilde{G}}^{n}_{m}\, {\mathbb {X}}_m, \end{aligned}$$

where \(G_n = {\widetilde{G}}_n\) are non-singular matrices of size \(r_n\), and \(G_m^n\) and \({\widetilde{G}}_m^n\), for \(m=0, 1, \ldots , n-1\), are constant matrices of dimension \(r_n\times r_m\). Comparing the coefficients of the term \({\mathbb {X}}_n\) on both sides of (18), we get

$$\begin{aligned} {\widetilde{G}}^{n}_{n-1}\,L_{n-1,i} = {\widetilde{A}}_{n,i}\,{\widetilde{G}}^{n+1}_{n} + {\widetilde{B}}_{n,i}\,{\widetilde{G}}_{n}, \quad n\ge 0, \end{aligned}$$
(20)

and, analogously, for the first family, using (17)

$$\begin{aligned} G^{n}_{n-1}\,L_{n-1,i} = A_{n,i}\,G^{n+1}_{n} + B_{n,i}\, G_n, \quad n\ge 0, \end{aligned}$$
(21)

by defining \(L_{-1,i} =0\).

Next, we want to deduce the matrix coefficient of \({\mathbb {X}}_n\) in expression (15) written for \(n+1\). First, we observe that, for \(1\le j\le N\) and \(n\ge 0\), we have

$$\begin{aligned} {{\textbf {K}}}_n(u; \xi ^{(j)},{x}) = {\mathbb {P}}_n^\top (\xi ^{(j)})\, H_n^{-1}\, {\mathbb {P}}_n({x}) + \sum _{m=0}^{n-1} {\mathbb {P}}_{m}^\top (\xi ^{(j)})\, H_m^{-1}\, {\mathbb {P}}_m({x}), \end{aligned}$$

and then, the coefficient of \({\mathbb {X}}_n\) in \( {{\textbf {K}}}_n(u; \xi ^{(j)},{x})\) is given by \({\mathbb {P}}_{n}^\top (\xi ^{(j)})\, H_n^{-1}\).

Therefore, the vector of kernels can be written as

$$\begin{aligned} {{\mathbb {K}}_{n}(\xi ,{x})}&= \begin{pmatrix} {\mathbf {K}}_{n}(u; \xi ^{(1)},\,{x})\\ {\mathbf {K}}_{n}(u;\xi ^{(2)},\,{x})\\ \vdots \\ {\mathbf {K}}_{n}(u;\xi ^{(N)},\,{x}) \end{pmatrix} = \begin{pmatrix} {\mathbb {P}}_{n}^\top (\xi ^{(1)}) \\ {\mathbb {P}}_{n}^\top (\xi ^{(2)})\\ \vdots \\ {\mathbb {P}}_{n}^\top (\xi ^{(N)}) \end{pmatrix}H_n^{-1} \,{\mathbb {P}}_n({x}) + \sum _{m=0}^{n-1}\begin{pmatrix} {\mathbb {P}}_{m}^\top (\xi ^{(1)}) \\ {\mathbb {P}}_{m}^\top (\xi ^{(2)})\\ \vdots \\ {\mathbb {P}}_{m}^\top (\xi ^{(N)}) \end{pmatrix}H_m^{-1} \,{\mathbb {P}}_m({x}) \\&= {\mathsf {P}}_{n}^\top (\xi ) \, H_n^{-1} \,{\mathbb {P}}_n({x}) + \sum _{m=0}^{n-1}{\mathsf {P}}_{m}^\top (\xi )\, H_m^{-1} \,{\mathbb {P}}_m({x}). \end{aligned}$$

Thus, the coefficient of \({\mathbb {X}}_n\) in (15) for \(n+1\) is

$$\begin{aligned} {\widetilde{G}}^{n+1}_n = G^{n+1}_n - {{\mathsf {P}}_{n+1}(\xi )} \, \Xi _{n}\, \,{\mathsf {P}}_{n}^\top (\xi ) \, H_n^{-1}\,G_n. \end{aligned}$$

Substituting in (20), and using (21), we get

$$\begin{aligned} {\widetilde{B}}_{n,i}{\widetilde{G}}_n = B_{n,i}G_n + A_{n,i}{\mathsf {P}}_{n+1}(\xi ) \Xi _{n}{\mathsf {P}}_{n}^\top (\xi ) H_n^{-1}\,G_n - {\mathsf {P}}_{n}(\xi ) \Xi _{n-1}\,{\mathsf {P}}_{n-1}^\top (\xi ) H_{n-1}^{-1} A_{n-1,i}G_n, \quad n\ge 0. \end{aligned}$$

Since \(G_n= {\widetilde{G}}_n\) is a non-singular matrix, we get the announced expression.

Now, we compute \({\widetilde{C}}_{n+1,i}\), \(n\ge 0\). From (5), we know that

$$\begin{aligned} {\widetilde{C}}_{n+1,i} = {\widetilde{H}}_{n+1}\, {\widetilde{A}}_{n,i}^\top \,{\widetilde{H}}_n^{-1} = {\widetilde{H}}_{n+1}\, A_{n,i}^\top \,{\widetilde{H}}_n^{-1}. \end{aligned}$$

Using Proposition 5, we get

$$\begin{aligned} {\widetilde{C}}_{n+1,i}&= [H_{n+1} + {\mathsf {P}}_{n+1}(\xi ) \, \Xi _{n}\, {\mathsf {P}}_{n+1}^\top (\xi )] \,A_{n,i}^\top [H_{n}^{-1} - H_{n}^{-1}\,{\mathsf {P}}_n(\xi ) \, \Xi _{n}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1}]\\&= H_{n+1}\,A_{n,i}^\top \,H_n^{-1} + {\mathsf {P}}_{n+1}(\xi ) \, \Xi _{n}\, {\mathsf {P}}_{n+1}^\top (\xi )\,A_{n,i}^\top \,H_{n}^{-1} - H_{n+1}\,A_{n,i}^\top \,H_{n}^{-1}\,{\mathsf {P}}_n(\xi ) \, \Xi _{n}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1}\\&\quad - {\mathsf {P}}_{n+1}(\xi ) \, \Xi _{n}\, {\mathsf {P}}_{n+1}^\top (\xi ) \,A_{n,i}^\top \,H_{n}^{-1}\,{\mathsf {P}}_n(\xi ) \, \Xi _{n}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1} \\&= C_{n+1,i} + {\mathsf {P}}_{n+1}(\xi ) \, \Xi _{n}\, {\mathsf {P}}_{n+1}^\top (\xi )H_{n+1}^{-1}\,C_{n+1,i}- C_{n+1,i} {\mathsf {P}}_n(\xi ) \, \Xi _{n}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1}\\&\quad - {\mathsf {P}}_{n+1}(\xi ) \, \Xi _{n}\, {\mathsf {P}}_{n+1}^\top (\xi ) H_{n+1}^{-1} C_{n+1,i} {\mathsf {P}}_n(\xi ) \, \Xi _{n}\, {\mathsf {P}}_n^\top (\xi )\,H_{n}^{-1}, \end{aligned}$$

which completes the proof. \(\square \)

We would like to point out that, in the monic case, the result simplifies by substituting \(A_{n,i} = {\widetilde{A}}_{n,i} = L_{n,i}\).
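In computations, the passage from \((A_{n,i},B_{n,i},C_{n,i})\) to \(({\widetilde{A}}_{n,i},{\widetilde{B}}_{n,i},{\widetilde{C}}_{n,i})\) can be organized as in the following sketch of (19) (Python, for \(n\ge 1\) and a fixed i; the function names and calling convention are illustrative, and the evaluation matrices \({\mathsf {P}}_k(\xi )\) and kernel matrices \({\mathsf {K}}_k\) are assumed to be precomputed as numerical arrays).

```python
import numpy as np

def Xi(Lam, K):
    """Xi_n = (I_N + Lam K_n)^{-1} Lam, cf. (16); K is the N x N matrix K_n of (12)."""
    return np.linalg.solve(np.eye(Lam.shape[0]) + Lam @ K, Lam)

def uvarov_three_term(A_n, B_n, C_n, A_nm1, H_nm1, H_n,
                      P_nm1, P_n, P_np1, Lam, K_nm1, K_n):
    """Coefficients (19) of the Uvarov three term relation (n >= 1, fixed i).

    A_k, B_k, C_k are the original coefficients, H_k the matrices (1),
    P_k the evaluation matrices P_k(xi) of size r_k x N, K_k the matrices (12).
    """
    Xi_n, Xi_nm1 = Xi(Lam, K_n), Xi(Lam, K_nm1)
    Hn_inv, Hnm1_inv = np.linalg.inv(H_n), np.linalg.inv(H_nm1)
    A_t = A_n
    B_t = (B_n + A_n @ P_np1 @ Xi_n @ P_n.T @ Hn_inv
               - P_n @ Xi_nm1 @ P_nm1.T @ Hnm1_inv @ A_nm1)
    C_t = ((np.eye(P_n.shape[0]) + P_n @ Xi_nm1 @ P_n.T @ Hn_inv) @ C_n
           @ (np.eye(P_nm1.shape[0]) - P_nm1 @ Xi_nm1 @ P_nm1.T @ Hnm1_inv))
    return A_t, B_t, C_t
```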

Remark 1

We can give a block matrix perspective of expressions (19). Let

$$\begin{aligned} \widetilde{{\mathcal {J}}}_i = \begin{pmatrix} {\widetilde{B}}_{0,i} &{}\quad {\widetilde{A}}_{0,i} \quad &{}\quad \bigcirc \\ {\widetilde{C}}_{1,i} &{}\quad {\widetilde{B}}_{1,i} &{}\quad {\widetilde{A}}_{1,i} &{}\quad \\ &{}\quad {\widetilde{C}}_{2,i} &{}\quad {\widetilde{B}}_{2,i} &{}\quad \ddots \\ \bigcirc &{} \quad &{}\quad \ddots &{}\quad \ddots \end{pmatrix}. \end{aligned}$$
(22)

be the block Jacobi matrices associated with the three term relations for the Uvarov polynomials. We also define the block matrices

where \(\bigcirc \) denotes a zero matrix of adequate size, and the omitted entries are zero. Then, the block Jacobi matrix associated with the Uvarov orthogonal polynomials is a perturbation of the original block Jacobi matrix of the form

5 Examples

In this section we analyse two examples in the bivariate case in order to apply our results. In the first example we study Uvarov polynomials on the bivariate simplex with mass points at the vertices. We transform the standard basis of the polynomials on the simplex to the monomial basis, and compute explicitly all the matrices that we need. We compare some polynomials of low total degree, showing their explicit expressions both for the classical case and for the Uvarov modification, for some specific values of the parameters. The second example is devoted to a non positive definite case based on Bessel and Laguerre polynomials.

In both cases, we compute explicitly the involved coefficients. For simplicity in this bivariate case, we will denote \(x= x_1\), \(y =x_2\).

5.1 Bivariate Uvarov orthogonal polynomials on the simplex

For \(0 \le m \le n\), and \(\alpha , \beta , \gamma > -1\), let us consider the family of classical bivariate polynomials on the simplex

$$\begin{aligned} P_{n,m}^{(\alpha ,\beta ,\gamma )}(x,y)={\bar{P}}_{n-m}^{(\beta +\gamma +2m+1,\alpha )}(x)\,(1-x)^{m}\, {\bar{P}}_{m}^{(\gamma ,\beta )}\left( \frac{y}{1-x} \right) , \end{aligned}$$
(23)

where

$$\begin{aligned} {\bar{P}}_{n}^{(\alpha ,\beta )}(x) = \sum _{i=0}^{n} \frac{(-1)^{i+n}(\beta +i+1)_{n-i} (\alpha +\beta +n+1)_i}{i! (n-i)!}\, x^i \end{aligned}$$

stands for the classical univariate Jacobi orthogonal polynomials on [0, 1], i.e.

$$\begin{aligned} \int _{0}^{1} {\bar{P}}_{n}^{(\alpha ,\beta )}(x){\bar{P}}_{m}^{(\alpha ,\beta )}(x) (1-x)^{\alpha } x^{\beta }{\text {d}}x=\frac{\Gamma (\alpha +n+1) \Gamma (\beta +n+1)}{n! (\alpha +\beta +2 n+1) \Gamma (\alpha +\beta +n+1)}\delta _{n,m}, \end{aligned}$$

where \(\delta _{n,m}\) denotes the Kronecker delta.

The orthogonality relation for the bivariate polynomials (23) can be written as

$$\begin{aligned} \int _{0}^{1} \int _{0}^{1-x} P_{n,m}^{(\alpha ,\beta ,\gamma )}(x,y)P_{r,s}^{(\alpha ,\beta ,\gamma )}(x,y) x^{\alpha } y^{\beta } (1-x-y)^{\gamma } {\text {d}}y {\text {d}}x = \frac{\Gamma (\beta +m+1) \Gamma (\gamma +m+1) }{m! (n-m)! (\beta +\gamma +2 m+1) } \\ \times \frac{\Gamma (\alpha + n-m+1) \Gamma (\beta +\gamma +n+m+2)}{(\alpha +\beta +\gamma + 2 n +2)\Gamma (\beta +\gamma +m+1) \Gamma (\alpha +\beta +\gamma + n + m+2)} \delta _{n,r} \delta _{m,s}. \end{aligned}$$

Hence, if

$$\begin{aligned} {\mathbb {P}}_n^{(\alpha ,\beta ,\gamma )} = \left( P_{n,0}^{(\alpha ,\beta ,\gamma )}(x,y), P_{n,1}^{(\alpha ,\beta ,\gamma )}(x,y),\dots , P_{n,n}^{(\alpha ,\beta ,\gamma )}(x,y)\right) ^{\top }, \end{aligned}$$
(24)

then the OPS \(\{{\mathbb {P}}_n^{(\alpha ,\beta ,\gamma )} \}_{n\ge 0}\) is mutually orthogonal, and

$$\begin{aligned} \int _{0}^{1} \int _{0}^{1-x} {\mathbb {P}}_n^{(\alpha ,\beta ,\gamma )}\, \left( {\mathbb {P}}_m^{(\alpha ,\beta ,\gamma )}\right) ^{\top }x^{\alpha } y^{\beta } (1-x-y)^{\gamma } {\text {d}}y {\text {d}}x = {\mathbb {H}}_{n}^{(\alpha ,\beta ,\gamma )} \Delta _{n,m}, \end{aligned}$$

where the diagonal and non-singular matrix \({\mathbb {H}}_{n}^{(\alpha ,\beta ,\gamma )}\) has as i-th entry

$$\begin{aligned}&h_{i,n}^{(\alpha ,\beta ,\gamma )} \nonumber \\&\quad =\frac{\Gamma (\beta +i+1) \Gamma (\gamma +i+1) \Gamma (\alpha +n-i+1) \Gamma (\beta +\gamma +n+i+2)}{i! (n-i)! (\beta +\gamma +2 i+1) (\alpha +\beta +\gamma +2 n+2) \Gamma (\beta +\gamma +i+1) \Gamma (\alpha +\beta +\gamma +n+i+2)},\nonumber \\ \end{aligned}$$
(25)

for \(0\le i \le n\).
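For concrete parameter values the above orthogonality can be verified symbolically. The following sketch (Python with sympy, for the illustrative integer choice \(\alpha =1\), \(\beta =2\), \(\gamma =1\), so that the integrand is a polynomial; all names are illustrative) builds the polynomials (23) and checks two of the orthogonality relations.

```python
import sympy as sp

x, y = sp.symbols('x y')
a, b, c = 1, 2, 1                                    # alpha, beta, gamma (integers, exact integration)

def Pbar(n, al, be, t):                              # shifted Jacobi polynomial on [0, 1] as above
    return sum((-1)**(i + n) * sp.rf(be + i + 1, n - i) * sp.rf(al + be + n + 1, i)
               / (sp.factorial(i) * sp.factorial(n - i)) * t**i for i in range(n + 1))

def P(n, m):                                         # the simplex polynomials (23)
    return sp.cancel((1 - x)**m * Pbar(m, c, b, y / (1 - x))) * Pbar(n - m, b + c + 2*m + 1, a, x)

w = x**a * y**b * (1 - x - y)**c                     # the simplex weight
def inner(p, q):
    return sp.integrate(sp.integrate(sp.expand(p * q * w), (y, 0, 1 - x)), (x, 0, 1))

print(inner(P(2, 1), P(2, 0)), inner(P(2, 1), P(1, 0)))    # both vanish
```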

Moreover, the bivariate polynomials (23) are solutions of the following potentially self-adjoint second order partial differential equation of hypergeometric type (Area et al. 2012a, b) which has been deeply analyzed in the literature (see e.g. Appell and Kampé de Fériet (1926), p. 104, formula (28), Suetin (1999), Chapter III, Krall and Sheffer (1967), or Dunkl and Xu (2014))

$$\begin{aligned}&x(x-1) \frac{\partial ^{2}}{\partial x^{2}} P_{n,m}^{(\alpha ,\beta ,\gamma )}(x,y) + y(y-1) \frac{\partial ^{2}}{\partial y^{2}} P_{n,m}^{(\alpha ,\beta ,\gamma )}(x,y) + 2 x y \frac{\partial ^{2}}{\partial x \partial y} P_{n,m}^{(\alpha ,\beta ,\gamma )}(x,y) \nonumber \\&\quad + ((3+\alpha +\beta +\gamma )x-1-\alpha ) \frac{\partial }{\partial x} P_{n,m}^{(\alpha ,\beta ,\gamma )}(x,y) + ((3+\alpha +\beta +\gamma )y-1-\beta ) \frac{\partial }{\partial y} P_{n,m}^{(\alpha ,\beta ,\gamma )}(x,y) \nonumber \\&\quad -n(n+\alpha +\beta +\gamma +2) P_{n,m}^{(\alpha ,\beta ,\gamma )}(x,y)=0. \end{aligned}$$
(26)

The family of bivariate monic polynomials

(27)

is also a solution of the partial differential equation (26). Let

$$\begin{aligned} \widehat{{\mathbb {P}}}_n^{(\alpha ,\beta ,\gamma )}= \left( {\widehat{P}}_{n,0}^{(\alpha ,\beta ,\gamma )}(x,y), {\widehat{P}}_{n,1}^{(\alpha ,\beta ,\gamma )}(x,y),\dots , {\widehat{P}}_{n,n}^{(\alpha ,\beta ,\gamma )}(x,y)\right) ^{\top }. \end{aligned}$$

In the monic case, the recurrence relations can be written as

$$\begin{aligned} \begin{aligned} x \widehat{{\mathbb {P}}}_n^{(\alpha ,\beta ,\gamma )}&= L_{n,1} \widehat{{\mathbb {P}}}_{n+1}^{(\alpha ,\beta ,\gamma )} + {\widehat{B}}_{n,1}^{(\alpha ,\beta ,\gamma )} \widehat{{\mathbb {P}}}_{n}^{(\alpha ,\beta ,\gamma )} + {\widehat{C}}_{n,1}^{(\alpha ,\beta ,\gamma )} \widehat{{\mathbb {P}}}_{n-1}^{(\alpha ,\beta ,\gamma )}, \\ y \widehat{{\mathbb {P}}}_n^{(\alpha ,\beta ,\gamma )}&= L_{n,2} \widehat{{\mathbb {P}}}_{n+1}^{(\alpha ,\beta ,\gamma )} + {\widehat{B}}_{n,2}^{(\alpha ,\beta ,\gamma )} \widehat{{\mathbb {P}}}_{n}^{(\alpha ,\beta ,\gamma )} + {\widehat{C}}_{n,2}^{(\alpha ,\beta ,\gamma )} \widehat{{\mathbb {P}}}_{n-1}^{(\alpha ,\beta ,\gamma )}, \end{aligned} \end{aligned}$$
(28)

with the initial conditions \(\widehat{{\mathbb {P}}}_{-1}^{(\alpha ,\beta ,\gamma )}=0\) and \(\widehat{{\mathbb {P}}}_{0}^{(\alpha ,\beta ,\gamma )}=1\), where the matrices \(L_{n,j}\) of size \((n+1) \times (n+2)\) are defined by (8), \({{\widehat{B}}}_{n,j}^{(\alpha ,\beta ,\gamma )}\) are of size \((n+1) \times (n+1)\), and \( {{\widehat{C}}}_{n,j}^{(\alpha ,\beta ,\gamma )}\) are matrices of size \((n+1) \times n\).

The square matrices \({\widehat{B}}_{n,1}^{(\alpha ,\beta ,\gamma )}\) and \({\widehat{B}}_{n,2}^{(\alpha ,\beta ,\gamma )}\) are, respectively, lower and upper bidiagonal

$$\begin{aligned} {\widehat{B}}_{n,1}^{(\alpha ,\beta ,\gamma )}= \begin{pmatrix} {{\widehat{b}}}_{0,0}^{1} &{} &{} &{} \bigcirc \\ {{\widehat{b}}}_{1,0}^{1} &{} {{\widehat{b}}}_{1,1}^{1} &{} &{} \\ &{} \ddots &{} \ddots &{} \\ \bigcirc &{} &{} {{\widehat{b}}}_{n,n-1}^{1} &{} {{\widehat{b}}}_{n,n}^{1} \end{pmatrix}\,, \quad {\widehat{B}}_{n,2}^{(\alpha ,\beta ,\gamma )}= \begin{pmatrix} {\widehat{b}}_{0,0}^{2} &{} {\widehat{b}}_{0,1}^{2} &{} &{} \bigcirc \\ &{} {\widehat{b}}_{1,1}^{2} &{} \ddots &{} \\ &{} &{} \ddots &{} {\widehat{b}}_{n-1,n}^{2} \\ \bigcirc &{} &{} &{} {\widehat{b}}_{n,n}^{2} \end{pmatrix}\,, \end{aligned}$$
(29)

where

$$\begin{aligned} {{\widehat{b}}}_{i,i}^{1}&= \frac{(i-n) (\alpha -i+n)}{\alpha +\beta +\gamma +2 n+1}+\frac{(-i+n+1) (\alpha -i+n+1)}{\alpha +\beta +\gamma +2 n+3}, \quad 0 \le i \le n,\\ {{\widehat{b}}}_{i+1,i}^{1}&= -\frac{2 (i+1) (\beta +i+1)}{(\alpha +\beta +\gamma +2 n+1) (\alpha +\beta +\gamma +2 n+3)},\quad 0 \le i \le n-1,\\ {\widehat{b}}_{i,i}^{2}&= \frac{(i+1) (\beta +i+1)}{\alpha +\beta +\gamma +2 n+3}-\frac{i (\beta +i)}{\alpha +\beta +\gamma +2 n+1}, \quad 0 \le i \le n,\\ {\widehat{b}}_{i,i+1}^{2}&= \frac{2 (i-n) (\alpha -i+n)}{(\alpha +\beta +\gamma +2 n+1) (\alpha +\beta +\gamma +2 n+3)},\quad 0 \le i \le n-1. \end{aligned}$$

Moreover,

$$\begin{aligned} {\widehat{C}}_{n,1}^{(\alpha ,\beta ,\gamma )}= \begin{pmatrix} {\widehat{c}}_{0,0}^{1} &{} &{} &{} &{} \bigcirc \\ {\widehat{c}}_{1,0}^{1} &{} {\widehat{c}}_{1,1}^{1} &{} &{} &{} \\ {\widehat{c}}_{2,0}^{1} &{} {\widehat{c}}_{2,1}^{1} &{} {\widehat{c}}_{2,2}^{1} &{} &{} \\ &{} {\widehat{c}}_{3,2}^{1} &{} \ddots &{} \ddots &{}\\ &{} &{} \ddots &{} \ddots &{} {\widehat{c}}_{n-1,n-1}^{1}\\ \bigcirc &{} &{} &{} {\widehat{c}}_{n,n-2}^{1} &{} {\widehat{c}}_{n,n-1}^{1} \end{pmatrix}\,, \end{aligned}$$
(30)

where

$$\begin{aligned} {\widehat{c}}_{i,i}^{1}&=\frac{(n-i) (\alpha -i+n) (\beta +\gamma +i+n+1) (\alpha +\beta +\gamma +i+n+1)}{(\alpha +\beta +\gamma +2 n) (\alpha +\beta +\gamma +2 n+1)^2 (\alpha +\beta +\gamma +2 n+2)}, \quad 0 \le i \le n-1,\\ {\widehat{c}}_{i+1,i}^{1}&=-\frac{(i+1) (\beta +i+1) \left( \alpha ^2+\alpha (\beta +\gamma +2 n+1)-2 (i-n) (\beta +\gamma +i+n)-\beta -\gamma -4 i+2 n-2\right) }{(\alpha +\beta +\gamma +2 n) (\alpha +\beta +\gamma +2 n+1)^2 (\alpha +\beta +\gamma +2 n+2)}, \\&\quad 0 \le i \le n-1,\\ {\widehat{c}}_{i+2,i}^{1}&=\frac{(i+1) (i+2) (\beta +i+1) (\beta +i+2)}{(\alpha +\beta +\gamma +2 n) (\alpha +\beta +\gamma +2 n+1)^2 (\alpha +\beta +\gamma +2 n+2)} ,\quad 0 \le i \le n-2, \end{aligned}$$

and

$$\begin{aligned} {\widehat{C}}_{n,2}^{(\alpha ,\beta ,\gamma )}= \begin{pmatrix} {\widehat{c}}_{0,0}^{2} &{} {\widehat{c}}_{0,1}^{2} &{} &{} &{} \bigcirc \\ {\widehat{c}}_{1,0}^{2} &{} {\widehat{c}}_{1,1}^{2} &{} {\widehat{c}}_{1,2}^{2} &{} &{} \\ &{} {\widehat{c}}_{2,1}^{2} &{} {\widehat{c}}_{2,2}^{2} &{} \ddots &{} \\ &{} &{} \ddots &{} \ddots &{} {\widehat{c}}_{n-2,n-1}^{2} \\ &{} &{} &{} {\widehat{c}}_{n-1,n-2}^{2} &{} {\widehat{c}}_{n-1,n-1}^{2} \\ \bigcirc &{} &{} &{} &{} {\widehat{c}}_{n,n-1}^{2} \end{pmatrix}\,, \end{aligned}$$
(31)

with entries

$$\begin{aligned} \quad {\widehat{c}}_{i,i}^{2}&=\frac{(i-n) (\alpha -i+n) \left( \alpha (\beta +2 i+1)+\beta ^2+\beta \gamma +2 n (\beta +2 i+1)+\beta +2 \gamma i+\gamma -2 i^2\right) }{(\alpha +\beta +\gamma +2 n) (\alpha +\beta +\gamma +2 n+1)^2 (\alpha +\beta +\gamma +2 n+2)},\\ \quad {\widehat{c}}_{i+1,i}^{2}&=\frac{(i+1) (\beta +i+1) (\alpha +\gamma -i+2 n) (\alpha +\beta +\gamma -i+2 n)}{(\alpha +\beta +\gamma +2 n) (\alpha +\beta +\gamma +2 n+1)^2 (\alpha +\beta +\gamma +2 n+2)}, \end{aligned}$$

for \(0 \le i \le n-1\), and

$$\begin{aligned} {\widehat{c}}_{i,i+1}^{2} =\frac{(-i+n-1) (n-i) (\alpha -i+n-1) (\alpha -i+n)}{(\alpha +\beta +\gamma +2 n) (\alpha +\beta +\gamma +2 n+1)^2 (\alpha +\beta +\gamma +2 n+2)}, \end{aligned}$$

for \(0 \le i \le n-2\).

Both families of orthogonal polynomials (23) and (27) (orthogonal with respect to the same weight function on the same domain, and solutions of the same partial differential equation) are related by

$$\begin{aligned} {\mathbb {P}}_{n}^{(\alpha ,\beta ,\gamma )} = {{\mathbb {U}}}_{n}^{(\alpha ,\beta ,\gamma )} \widehat{{\mathbb {P}}}_{n}^{(\alpha ,\beta ,\gamma )}, \qquad \text {where} \qquad {{\mathbb {U}}}_{n}^{(\alpha ,\beta ,\gamma )}=\begin{pmatrix} u_{i,j,n}^{(\alpha ,\beta ,\gamma )} \end{pmatrix}_{i,j=1}^{n+1}, \end{aligned}$$
(32)

with \(u_{i,j,n}^{(\alpha ,\beta ,\gamma )}=0\) if \(j>i\) and

$$\begin{aligned} u_{i,j,n}^{(\alpha ,\beta ,\gamma )}=\displaystyle {\frac{(\beta +j)_{i-j} (\beta +\gamma +i)_{j-1} (\alpha +\beta +\gamma +i+n+1)_{n-i+1}}{\Gamma (j) \Gamma (i-j+1) \Gamma (n-i+2)}} \end{aligned}$$
(33)

if \(j \le i\). Therefore, if we multiply (28) by \({{\mathbb {U}}}_{n}^{(\alpha ,\beta ,\gamma )}\) we obtain the following recurrence relations for the initial family (24)

$$\begin{aligned} \begin{aligned} x {\mathbb {P}}_n^{(\alpha ,\beta ,\gamma )}&= A_{n,1}^{(\alpha ,\beta ,\gamma )} {\mathbb {P}}_{n+1}^{(\alpha ,\beta ,\gamma )} + B_{n,1}^{(\alpha ,\beta ,\gamma )} {\mathbb {P}}_{n}^{(\alpha ,\beta ,\gamma )} + C_{n,1}^{(\alpha ,\beta ,\gamma )} {\mathbb {P}}_{n-1}^{(\alpha ,\beta ,\gamma )},\\ y {\mathbb {P}}_n^{(\alpha ,\beta ,\gamma )}&= A_{n,2}^{(\alpha ,\beta ,\gamma )} {\mathbb {P}}_{n+1}^{(\alpha ,\beta ,\gamma )} + B_{n,2}^{(\alpha ,\beta ,\gamma )} {\mathbb {P}}_{n}^{(\alpha ,\beta ,\gamma )} + C_{n,2}^{(\alpha ,\beta ,\gamma )} {\mathbb {P}}_{n-1}^{(\alpha ,\beta ,\gamma )}, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} A_{n,j}^{(\alpha ,\beta ,\gamma )}= & {} {{\mathbb {U}}}_{n}^{(\alpha ,\beta ,\gamma )} L_{n,j} \left( {{\mathbb {U}}}_{n+1}^{(\alpha ,\beta ,\gamma )} \right) ^{-1}, \quad B_{n,j}^{(\alpha ,\beta ,\gamma )} = {{\mathbb {U}}}_{n}^{(\alpha ,\beta ,\gamma )} {\widehat{B}}_{n,j}^{(\alpha ,\beta ,\gamma )} \left( {{\mathbb {U}}}_{n}^{(\alpha ,\beta ,\gamma )} \right) ^{-1}, \\ C_{n,j}^{(\alpha ,\beta ,\gamma )}= & {} {{\mathbb {U}}}_{n}^{(\alpha ,\beta ,\gamma )} {\widehat{C}}_{n,j}^{(\alpha ,\beta ,\gamma )} \left( {{\mathbb {U}}}_{n-1}^{(\alpha ,\beta ,\gamma )} \right) ^{-1}. \end{aligned}$$
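In practice, once the matrices \({{\mathbb {U}}}_{n}^{(\alpha ,\beta ,\gamma )}\) of (32)-(33) and the monic recurrence matrices of (28) are generated, the coefficients above amount to three matrix products; a minimal sketch (Python, illustrative names, numerical matrices assumed available) follows.

```python
import numpy as np

def simplex_recurrence(U_n, U_np1, U_nm1, L_nj, Bhat_nj, Chat_nj):
    """A_{n,j}, B_{n,j}, C_{n,j} for the family (24) from the monic data, cf. the display above."""
    A = U_n @ L_nj @ np.linalg.inv(U_np1)
    B = U_n @ Bhat_nj @ np.linalg.inv(U_n)
    C = U_n @ Chat_nj @ np.linalg.inv(U_nm1)
    return A, B, C
```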

By using (see e.g. Area et al. (2017), p. 776 or Suetin (1999), Eq. (15), p. 81)

$$\begin{aligned} \frac{ \Gamma (\alpha +\beta +\gamma +3)}{\Gamma (\alpha +1)\Gamma (\beta +1)\Gamma (\gamma +1)}\int _{0}^{1}\int _{0}^{1-x} x^{n} y^{m}x^{\alpha } y^{\beta } (1-x-y)^{\gamma } {\text {d}}y{\text {d}}x=\frac{(\alpha +1)_{n}\,(\beta +1)_{m}}{(\alpha +\beta +\gamma +3)_{n+m}} \end{aligned}$$

we can compute the entries of the matrix \(\widehat{{{\mathbb {H}}}}_{n}^{(\alpha ,\beta ,\gamma )}\) introduced below. The orthogonality relation for the monic polynomials reads

$$\begin{aligned} \int _{0}^{1} \int _{0}^{1-x} \widehat{{\mathbb {P}}}_{n}^{(\alpha ,\beta ,\gamma )} \left( \widehat{{\mathbb {P}}}_m^{(\alpha ,\beta ,\gamma )}\right) ^{\top } x^{\alpha }y^{\beta }\left( 1-x-y\right) ^{\gamma } {\text {d}}y{\text {d}}x= \widehat{{{\mathbb {H}}}}_{n}^{(\alpha ,\beta ,\gamma )} \Delta _{n,m}, \end{aligned}$$

where \(\widehat{{{\mathbb {H}}}}_{n}^{(\alpha ,\beta ,\gamma )}\) is a non-singular matrix of size \((n+1) \times (n+1)\) whose \((i,j)\)-entries, \(0\le i,j\le n\), can be computed by means of the above integral formula.

Let us compute the inverse of \(\widehat{{{\mathbb {H}}}}_{n}^{(\alpha ,\beta ,\gamma )}\). For standard polynomials, we just need to compute the inverse of the diagonal matrix \({{\mathbb {H}}}_{n}^{(\alpha ,\beta ,\gamma )}\) of size \((n+1)\times (n+1)\) with entries given in (25). By using (32) we have

$$\begin{aligned} \widehat{{\mathbb {H}}}_{n}^{(\alpha ,\beta ,\gamma )} =({\mathbb {U}}_{n}^{(\alpha ,\beta ,\gamma )})^{-1} {\mathbb {H}}_{n}^{(\alpha ,\beta ,\gamma )} \left( ({{\mathbb {U}}}_{n}^{(\alpha ,\beta ,\gamma )})^{-1}\right) ^{\top }. \end{aligned}$$

Thus,

$$\begin{aligned} \left( \widehat{{\mathbb {H}}}_{n}^{(\alpha ,\beta ,\gamma )} \right) ^{-1} = \left( w_{i,j,n}^{(\alpha ,\beta ,\gamma )} \right) _{i,j=1}^{n+1} \qquad \text {where} \qquad w_{i,j,n}^{(\alpha ,\beta ,\gamma )}= {\left\{ \begin{array}{ll} \displaystyle \sum \limits _{k=j}^{n+1} \frac{u_{k,i,n}^{(\alpha ,\beta ,\gamma )}\,u_{k,j,n}^{(\alpha ,\beta ,\gamma )}}{ h_{n,k-1}^{(\alpha ,\beta ,\gamma )}} , &{} j\ge i,\\ \displaystyle \sum \limits _{k=i}^{n+1} \frac{u_{k,i,n}^{(\alpha ,\beta ,\gamma )}\,u_{k,j,n}^{(\alpha ,\beta ,\gamma )}}{ h_{n,k-1}^{(\alpha ,\beta ,\gamma )}}, &{} j<i, \end{array}\right. } \end{aligned}$$

where \( h_{n,k-1}^{(\alpha ,\beta ,\gamma )}\) and \(u_{k,i,n}^{(\alpha ,\beta ,\gamma )}\) are defined in (25) and (33), respectively.

Now, we introduce the Uvarov moment functional on the simplex. Let us denote by u the linear functional associated with the bivariate orthogonal polynomials on the simplex

$$\begin{aligned} \langle u,f \rangle = \int _{0}^{1} \int _{0}^{1-x} f(x,y) x^{\alpha } y^{\beta } (1-x-y)^{\gamma } {\text {d}}y {\text {d}}x \end{aligned}$$

and let the matrix \(\Lambda \) in (10) be the diagonal matrix

$$\begin{aligned} \Lambda =\begin{pmatrix} M_{1} &{} 0 &{} 0 \\ 0 &{} M_{2} &{} 0 \\ 0 &{} 0 &{} M_{3} \end{pmatrix}, \end{aligned}$$

where \(M_1, M_2, M_3\) are positive real numbers. Hence, the Uvarov linear functional v is defined by

$$\begin{aligned} \langle v,f \rangle = \langle u, f \rangle + M_{1} f(0,0) + M_{2} f(1,0) + M_{3} f(0,1). \end{aligned}$$

In this case, the matrices \({\mathsf {K}}_{n}\) defined in (12) are explicitly given by

$$\begin{aligned} {\mathsf {K}}_{n}&= \frac{\Gamma (\alpha +\beta +\gamma +n+3)}{n! \, \Gamma (\alpha +1) \Gamma (\beta +1) \Gamma (\gamma +1)} \begin{pmatrix} \displaystyle {\frac{(\alpha +\beta +3)_n}{(\gamma +1)_n}} &{} (-1)^{n} &{} (-1)^{n} \\ (-1)^{n} &{} \displaystyle {\frac{(\beta +\gamma +3)_n}{(\alpha +1)_n}} &{} (-1)^{n}\\ (-1)^{n} &{} (-1)^{n} &{} \displaystyle { \frac{(\alpha +\gamma +3)_n}{(\beta +1)_n}} \end{pmatrix}\\&= A_{n}-b_{n}I_{3}+b_{n}u~u^{\top }, \end{aligned}$$

where we denote \(u=(1,1,1)^\top \), \(A_{n} = {{\,\mathrm{diag}\,}}\left\{ a_{n,1},a_{n,2},a_{n,3}\right\} \), and

$$\begin{aligned} a_{n,1}&=\frac{\Gamma (\alpha +\beta +\gamma +n+3)}{n! \, \Gamma (\alpha +1) \Gamma (\beta +1) \Gamma (\gamma +1)}\frac{(\alpha +\beta +3)_n}{(\gamma +1)_n},\\ a_{n,2}&=\frac{\Gamma (\alpha +\beta +\gamma +n+3)}{n! \, \Gamma (\alpha +1) \Gamma (\beta +1) \Gamma (\gamma +1)}\frac{(\beta +\gamma +3)_n}{(\alpha +1)_n},\\ a_{n,3}&=\frac{\Gamma (\alpha +\beta +\gamma +n+3)}{n! \, \Gamma (\alpha +1) \Gamma (\beta +1) \Gamma (\gamma +1)}\frac{(\alpha +\gamma +3)_n}{(\beta +1)_n},\\ b_{n}&= \frac{\Gamma (\alpha +\beta +\gamma +n+3)}{n! \, \Gamma (\alpha +1) \Gamma (\beta +1) \Gamma (\gamma +1)}(-1)^n. \end{aligned}$$

Since \(\Lambda \) is positive definite, the inverse of the matrix \(I_{3}+\Lambda {{\mathsf {K}}}_{n}\) exists, and we compute

$$\begin{aligned} \Xi _{n}=\left( I_{3}+\Lambda {{\mathsf {K}}}_{n}\right) ^{-1}\Lambda =\left( \Lambda ^{-1}+{{\mathsf {K}}}_{n}\right) ^{-1}=\left( \Lambda ^{-1}+A_{n} -b_{n}I_{3}+b_{n}u~u^\top \right) ^{-1}. \end{aligned}$$

Let us denote

$$\begin{aligned} Z_{n}:=\Lambda ^{-1}+A_{n}-b_{n}I_{3}=\text {diag}\left\{ z_{n,i} ~,~i=1,2,3\right\} {,} \end{aligned}$$

where \(z_{n,i}=M_{i}^{-1}+a_{n,i}-b_{n}\). Using the Sherman–Morrison–Woodbury identity (see Golub and Van Loan 1996), it follows that

$$\begin{aligned} \Xi _{n}=\left( I_{3}+\Lambda {{\mathsf {K}}}_{n}\right) ^{-1}\Lambda&=\left( Z_{n}+b_{n}u~u^\top \right) ^{-1}\\&=Z_{n}^{-1}-\frac{b_{n}}{1+b_{n} { \sum \nolimits _{i=1}^{3}} z_{n,i}^{-1}}\left( \begin{array}{c} z_{n,1}^{-1}\\ z_{n,2}^{-1}\\ z_{n,3}^{-1} \end{array} \right) \left( \begin{array}{ccc} z_{n,1}^{-1}&z_{n,2}^{-1}&z_{n,3}^{-1} \end{array} \right) . \end{aligned}$$
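The rank-one structure above is easy to exploit numerically. The following short sketch (ours, assuming numpy and illustrative parameter values) checks that \(\Xi _{n}\) computed through the Sherman–Morrison formula coincides with the direct inverse \((\Lambda ^{-1}+{{\mathsf {K}}}_{n})^{-1}\), with \(a_{n,i}\) and \(b_{n}\) taken from the display above.

```python
# Numerical sketch (illustrative): Sherman-Morrison update for Xi_n on the simplex.
import numpy as np
from math import gamma, factorial

def simplex_Kn_parts(n, a, b, c):
    """a_{n,1}, a_{n,2}, a_{n,3} and b_n as displayed above."""
    pref = gamma(a + b + c + n + 3) / (factorial(n) * gamma(a + 1) * gamma(b + 1) * gamma(c + 1))
    poch = lambda x, k: gamma(x + k) / gamma(x)   # Pochhammer symbol (x)_k
    a1 = pref * poch(a + b + 3, n) / poch(c + 1, n)
    a2 = pref * poch(b + c + 3, n) / poch(a + 1, n)
    a3 = pref * poch(a + c + 3, n) / poch(b + 1, n)
    return np.array([a1, a2, a3]), pref * (-1) ** n

n, (alpha, beta, gam) = 4, (1.0, 1.0, 0.5)
M = np.array([0.5, 0.5, 0.5])                     # masses M_1, M_2, M_3
an, bn = simplex_Kn_parts(n, alpha, beta, gam)
u = np.ones(3)
Kn = np.diag(an) - bn * np.eye(3) + bn * np.outer(u, u)

z = 1.0 / M + an - bn                             # diagonal of Z_n
Xi_smw = np.diag(1.0 / z) - (bn / (1.0 + bn * np.sum(1.0 / z))) * np.outer(1.0 / z, 1.0 / z)
Xi_direct = np.linalg.inv(np.diag(1.0 / M) + Kn)
print(np.allclose(Xi_smw, Xi_direct))             # True
```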

The matrix \({\mathsf {P}}_n(\xi )\) defined in (11) is explicitly given by

$$\begin{aligned} {\mathsf {P}}_n(\xi )= \left( {{{\mathbb {P}}}}_n^{(\alpha ,\beta ,\gamma )}(0,0) | {{{\mathbb {P}}}}_n ^{ (\alpha ,\beta ,\gamma )}(1,0) | {{{\mathbb {P}}}}_n ^{(\alpha ,\beta ,\gamma )}(0,1) \right) , \end{aligned}$$

where for \(m=0,1,\ldots ,n\)

$$\begin{aligned} {P}_{n,m}^{(\alpha ,\beta ,\gamma )}\left( 0,0\right)&=\left( -1\right) ^{n}\left( \begin{array}{c} n-m+\alpha \\ n-m \end{array} \right) \left( \begin{array}{c} m+\beta \\ m \end{array} \right) ,\\ {P}_{n,m}^{(\alpha ,\beta ,\gamma )}\left( 1,0\right)&=\left( \begin{array}{c} n+\beta +\gamma +1\\ n \end{array} \right) ~\delta _{n,n-m},\\ {P}_{n,m}^{(\alpha ,\beta ,\gamma )}\left( 0,1\right)&=\left( -1\right) ^{n-m}\left( \begin{array}{c} n-m+\alpha \\ n-m \end{array} \right) \left( \begin{array}{c} m+\gamma \\ m \end{array} \right) , \end{aligned}$$

which follow from the values

$$\begin{aligned} {{\bar{P}}}_{n}^{\left( \alpha ,\beta \right) }\left( 0\right) =\left( -1\right) ^{n}~\left( \begin{array}{c} n+\beta \\ n \end{array} \right) , \qquad {{\bar{P}}}_{n}^{\left( \alpha ,\beta \right) }\left( 1\right) =\left( \begin{array}{c} n+\alpha \\ n \end{array} \right) . \end{aligned}$$

Then, we can express the monic Uvarov polynomials on the simplex by using the explicit expression (15). Applying Theorem 7, the coefficients of the three term relations for Uvarov polynomials are given by (19), where the involved matrices are already explicitly computed.

Finally, we analyse a particular example. Consider \(\alpha = \beta = 1\), \(\gamma =1/2\), and \(M_1 = M_2 = M_3 =1/2\). The Uvarov inner product is given by

$$\begin{aligned} (f, g)_K&= \int _{0}^{1} \int _{0}^{1-x} f(x,y)\,g(x,y)\, x\, y\, (1-x-y)^{1/2} {\text {d}}y {\text {d}}x \\&\quad + \frac{1}{2}f(0,0)\,g(0,0) + \frac{1}{2}f(1,0)\,g(1,0) + \frac{1}{2}f(0,1)\,g(0,1). \end{aligned}$$
Table 1: The first monic classical polynomials on the simplex (total degree \(0 \le n \le 2\)) and the monic Uvarov perturbation described in Example 5.1, for \(\alpha =\beta =1\), \(\gamma =1/2\) and \(M_{1}=M_{2}=M_{3}=1/2\)

Table 1 shows the explicit expressions of the first classical monic polynomials on the simplex and of the Uvarov monic polynomials perturbed as above. In Figs. 1, 2, 3, 4, 5 we have plotted the zeros (as algebraic curves, see Area et al. 2015) of the polynomials of total degree 2 up to 5 of both families, as well as the corresponding surfaces. We must point out that the theory of zeros of multivariate polynomials remains largely open, and only a few analytic results are known (see Dunkl and Xu 2014; Area et al. 2015). In general, the zero set of a multivariate polynomial is an algebraic curve, and different bases have different zeros. Here we want to show the impact of the Uvarov modification on the orthogonal polynomials as well as on their zeros.
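As a complement to Table 1, the Uvarov inner product above is straightforward to evaluate numerically; the following sketch (our own illustration, assuming scipy, with function names that are ours) approximates \((f,g)_{K}\) for the parameters of this example.

```python
# Numerical sketch (illustrative) of the Uvarov inner product (f,g)_K with
# alpha = beta = 1, gamma = 1/2 and M_1 = M_2 = M_3 = 1/2.
from scipy.integrate import dblquad

def uvarov_inner(f, g):
    weight = lambda x, y: x * y * (1.0 - x - y) ** 0.5
    integrand = lambda y, x: f(x, y) * g(x, y) * weight(x, y)   # dblquad expects func(y, x)
    val, _ = dblquad(integrand, 0.0, 1.0, lambda x: 0.0, lambda x: 1.0 - x)
    for (px, py) in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]:
        val += 0.5 * f(px, py) * g(px, py)
    return val

# e.g. the squared norms of f(x,y) = 1 and of f(x,y) = x under ( . , . )_K
print(uvarov_inner(lambda x, y: 1.0, lambda x, y: 1.0))
print(uvarov_inner(lambda x, y: x, lambda x, y: x))
```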

Fig. 1: Zeros of the classical polynomials \(P_{2,0}\), \(P_{2,1}\) and \(P_{2,2}\) on the simplex (left graphics) and of the Uvarov polynomials (right graphics) of total degree \(n=2\), for \(x \in [0,1]\) and \(y \in [0,x]\)

Fig. 2: Zeros of the monic polynomials of degree 3, \(P_{3,0}\), \(P_{3,1}\), \(P_{3,2}\), and \(P_{3,3}\). In the first row, the classical polynomials on the simplex, and in the second row, the Uvarov modification, for \(x \in [0,1]\) and \(y \in [0,x]\)

Fig. 3: Zeros of the polynomials of total degree 4, \(P_{4,0}\), \(P_{4,1}\), \(P_{4,2}\), \(P_{4,3}\) and \(P_{4,4}\). In the first row, the classical polynomials on the simplex, and in the second row, the Uvarov modification, for \(x \in [0,1]\) and \(y \in [0,x]\)

Fig. 4: Zeros of the polynomials of total degree 5, \(P_{5,0}\), \(P_{5,1}\), \(P_{5,2}\), \(P_{5,3}\), \(P_{5,4}\) and \(P_{5,5}\). In the first row, the classical polynomials on the simplex, and in the second row, the Uvarov modification, for \(x \in [0,1]\) and \(y \in [0,x]\)

Fig. 5: Surfaces of the classical polynomials on the simplex (first row) and of the Uvarov polynomials (second row) of total degree \(n=5\), for \(x \in [0,1]\) and \(y \in [0,x]\), with the modification described in Sect. 5.1

5.2 Bivariate Uvarov Bessel–Laguerre orthogonal polynomials

Let us consider the classical Laguerre polynomials \(\{L_{n}^{(\alpha )}\}_{n \ge 0}\), orthogonal with respect to the positive definite univariate moment functional

$$\begin{aligned} \langle v_L^{(\alpha )}, f\rangle = \int _0^{+\infty } f(t) \, w^{(\alpha )}(t)\,{\text {d}}t, \end{aligned}$$

with \(w^{(\alpha )}(t) = t^\alpha \, e^{-t}, \quad \alpha > -1\). Formula (5.1.1) in Szegő (1975) provides

$$\begin{aligned} \int _0^{+\infty } \left( L_{n}^{(\alpha )}(t) \right) ^2 w^{(\alpha )}(t)\,{\text {d}}t = \frac{\Gamma (n+\alpha +1)}{n!}, \quad n\ge 0. \end{aligned}$$

Let

$$\begin{aligned} B_{n}^{(a,b)}(z)=(-1)^{n} \,n!\,\frac{z^{n}}{b^{n}} \,L_{n}^{(1-2n-a)}\left( \frac{b}{z} \right) , \end{aligned}$$

be the classical univariate Bessel polynomials, orthogonal with respect to the quasi definite but not positive definite moment functional (Krall and Frink 1949)

$$\begin{aligned} \langle u_B^{(a, b)}, f\rangle = \int _T f(z)\,w^{(a,b)}(z)\,{\text {d}}z, \end{aligned}$$

where

$$\begin{aligned} w^{(a,b)}(z) =\frac{1}{2\,\pi \,i}\,z^{a-2} \,e^{-b/z}, \end{aligned}$$

for \(a\ne 0, -1, -2, \ldots \), \(b\ne 0\), where \(i^{2}=-1\), and T is the unit circle oriented in the counter-clockwise direction (Krall and Frink (1949), eq. (58)). Moreover, for integer \(a \ge 2\) and real \(b\) we have

$$\begin{aligned} \int _T B_{n}^{(a,b)}(z) \, B_{m}^{(a,b)}(z) \,w^{(a,b)}(z)\,{\text {d}}z = \frac{(-1)^{n+a+1} \,b^{a-1}\,n!}{(2n+a-1)\Gamma (n+a-1)} \delta _{n,m}, \end{aligned}$$

which implies

$$\begin{aligned} \int _T \left( B_{n}^{(a,b)}(z)\right) ^2 \,w^{(a,b)}(z)\,{\text {d}}z = \frac{(-1)^{n+a+1} \,b^{a-1}\,n!}{(2n+a-1)\Gamma (n+a-1)}. \end{aligned}$$
(34)

This means that Bessel polynomials are associated with a quasi definite but not positive definite moment functional.
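Since the weight carries the factor \(1/(2\pi i)\), the contour integral over T equals the residue at \(z=0\) of \(B_{n}^{(a,b)}(z)^{2}\,z^{a-2}e^{-b/z}\). The following sympy sketch (ours, for one small test case) checks (34) by reading off the coefficient of \(z^{-1}\) from a sufficiently long truncation of the exponential.

```python
# Symbolic sanity check (illustrative) of the normalization (34) for n = 2, a = 3, b = 1/2.
from sympy import symbols, binomial, factorial, gamma, Rational, expand, simplify

z = symbols('z')
n, a, b = 2, 3, Rational(1, 2)

def L(nn, alpha, x):
    """Generalized Laguerre polynomial L_nn^{(alpha)}(x) via its standard sum."""
    return sum((-1)**k * binomial(nn + alpha, nn - k) * x**k / factorial(k)
               for k in range(nn + 1))

# Bessel polynomial through the Laguerre representation displayed above
B = expand((-1)**n * factorial(n) * z**n / b**n * L(n, 1 - 2*n - a, b / z))

# truncate e^{-b/z} far enough that discarded terms cannot reach the z^{-1} coefficient
K = 2*n + a
exp_trunc = sum((-b / z)**k / factorial(k) for k in range(K + 1))
lhs = expand(B**2 * z**(a - 2) * exp_trunc).coeff(z, -1)   # residue at z = 0

rhs = (-1)**(n + a + 1) * b**(a - 1) * factorial(n) / ((2*n + a - 1) * gamma(n + a - 1))
print(simplify(lhs - rhs))   # expected: 0
```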

For \(n\ge 0\) and \(g, \gamma \in {\mathbb {R}}\) such that \(g + n \ne 0\) and \(g \gamma + n \ne 0\), the bivariate Bessel-Laguerre orthogonal polynomials are defined by

$$\begin{aligned} B_{n,m}^{(g,\gamma )}(x,y) = B_{n-m}^{(g+2m,-g)}(x)\, x^{m} \, L_{m}^{(g \gamma -1)} \,\left( \frac{gy}{x} \right) , \quad 0\le m \le n. \end{aligned}$$
(35)

Following (Dunkl and Xu 2014, p. 39), Bessel-Laguerre polynomials (35) are mutually orthogonal with respect to a non positive definite moment functional w acting as follows. We define

$$\begin{aligned} W(x,y) = w^{(g-1,-g)}(x)\,w^{(g\gamma -1)}\left( \frac{gy}{x}\right) = \frac{1}{2\,\pi \, i}\,x^{g-3}\,e^{g/x}\,\left( \frac{gy}{x}\right) ^{g\gamma -1}\,e^{-gy/x}, \end{aligned}$$

on the region \(R=\{(x,y): x\in T,\,\, 0< gy/x < + \infty \}.\) The bivariate moment functional is defined as

$$\begin{aligned} \langle w, f\rangle = \int _R f(x,y) \, W(x,y)\,{\text {d}}x {\text {d}}y. \end{aligned}$$
(36)

Therefore, if \(g\ge 2\) is an integer,

$$\begin{aligned} \langle w, B_{n,m}^{(g,\gamma )} B_{r,s}^{(g,\gamma )}\rangle&= \int _R B_{n,m}^{(g,\gamma )}(z,y)\,B_{r,s}^{(g,\gamma )}(z,y) \,W(z,y)\,{\text {d}}z\,{\text {d}}y\\&= \frac{1}{2\,\pi \, i} \int _R B_{n-m}^{(g+2m,-g)}(z)\, z^{m} \, L_{m}^{(g \gamma -1)} \,\left( \frac{gy}{z} \right) \\&\quad \times B_{r-s}^{(g+2s,-g)}(z)\, z^{s} \, L_{s}^{(g \gamma -1)} \,\left( \frac{gy}{z} \right) \,z^{g-3}\,e^{g/z}\,\left( \frac{gy}{z}\right) ^{g\gamma -1}\,e^{-gy/z}\,{\text {d}}z\,{\text {d}}y\\&= \frac{1}{2\,\pi \,i\,g}\int _{T}B_{n-m}^{(g+2m,-g)}(z)\, B_{r-s}^{(g+2s,-g)}(z) z^{m+s+g-2}e^{g/z}{\text {d}}z \\&\quad \times \int _{0}^{+\infty }L_{m}^{(g\gamma -1) }(t)\, L_{s}^{(g\gamma -1)}(t) t^{g\gamma -1}e^{-t}{\text {d}}t \\&=\frac{\Gamma (m+g\gamma ) }{m!\,2\,\pi \, i\,g} \left[ \int _{T}B_{n-m}^{\left( g+2m,-g\right) }\left( z\right) B_{r-m}^{\left( g+2m,-g\right) }\left( z\right) z^{2m+g-2}e^{g/z}{\text {d}}z \right] \delta _{m,s}\\&=\frac{(-1)^{n+m} g^{g+2m-2}\Gamma (m + g\gamma )\,( n-m) !}{m!2\pi i~( 2n+g-1) \Gamma (g+n+m-1)}\,\delta _{n,r}\,\delta _{m,s}. \end{aligned}$$

Bessel–Laguerre polynomials appear in Kwon et al. (2001) as solutions of the partial differential equation

$$\begin{aligned}&x^{2} \frac{\partial ^{2}}{\partial x^{2}}B_{n,m}^{(g,\gamma )}(x,y) + 2 x y \frac{\partial ^{2}}{\partial x \partial y} B_{n,m}^{(g,\gamma )}(x,y) + (y^{2}-y) \frac{\partial ^{2}}{\partial y^{2} } B_{n,m}^{(g,\gamma )}(x,y) \nonumber \\&\quad + g(x-1) \frac{\partial }{\partial x} B_{n,m}^{(g,\gamma )}(x,y) + g(y-\gamma ) \frac{\partial }{\partial y} B_{n,m}^{(g,\gamma )}(x,y) - n (n+g-1) B_{n,m}^{(g,\gamma )}(x,y)=0,\nonumber \\ \end{aligned}$$
(37)

and were considered later in Area et al. (2012a) and Marriaga et al. (2017), among others. The partial differential equation (37) has the monic solution

$$\begin{aligned} {\widehat{B}}_{n,k}^{(g,\gamma )}(x,y)= & {} \sum _{s=0}^{k} \left( {\begin{array}{c}k\\ s\end{array}}\right) \sum _{r=s}^{n-k+s} \frac{(-1)^{s} g^{r-s} (1-g \gamma -k)_{s}}{(2-g-2n)_{r}}\left( {\begin{array}{c}n-k\\ r-s\end{array}}\right) x^{n-k+s-r} y^{k-s} \nonumber \\= & {} \sum _{s=0}^{k} \left( {\begin{array}{c}k\\ s\end{array}}\right) \frac{(-1)^s \Gamma (-g-2 n+2) \Gamma (-g \gamma -k+s+1) }{\Gamma (-g \gamma -k+1) \Gamma (-g-2 n+s+2)} \nonumber \\&\quad \times x^{n-k} y^{k-s} \, _1F_1\left( k-n;-g-2 n+s+2;-\frac{g}{x}\right) . \end{aligned}$$
(38)
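The fact that (38) solves (37) can be verified symbolically for small degrees; here is a minimal sympy sketch (our own check, with illustrative admissible parameters).

```python
# Symbolic check (illustrative) that the monic expression (38) satisfies the PDE (37).
from sympy import symbols, binomial, rf, Integer, Rational, diff, expand

x, y = symbols('x y')
g, gam = Integer(3), Rational(1, 2)   # illustrative parameters
n, k = 3, 1

Bhat = sum(binomial(k, s)
           * sum((-1)**s * g**(r - s) * rf(1 - g*gam - k, s) / rf(2 - g - 2*n, r)
                 * binomial(n - k, r - s) * x**(n - k + s - r) * y**(k - s)
                 for r in range(s, n - k + s + 1))
           for s in range(k + 1))

pde = (x**2 * diff(Bhat, x, 2) + 2*x*y * diff(Bhat, x, y) + (y**2 - y) * diff(Bhat, y, 2)
       + g*(x - 1) * diff(Bhat, x) + g*(y - gam) * diff(Bhat, y)
       - n*(n + g - 1) * Bhat)
print(expand(pde))   # expected: 0
```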

Let us define the Bessel–Laguerre mutually orthogonal polynomial system \(\{{\mathbb {P}}_n^{(g,\gamma )}\}_{n\ge 0}\) given by

$$\begin{aligned} {\mathbb {P}}_n^{(g,\gamma )} = ( {B}_{n,0}^{(g,\gamma )}(x,y), {B}_{n,1}^{(g,\gamma )}(x,y),\dots , {B}_{n,n}^{(g,\gamma )}(x,y))^\top . \end{aligned}$$
(39)

We get

$$\begin{aligned} \langle w, {\mathbb {P}}_n^{(g,\gamma )} \,({\mathbb {P}}_{m}^{(g,\gamma )})^\top \rangle = {\mathbb {H}}_{n}^{(g,\gamma )} \Delta _{n,m}, \end{aligned}$$

where \( {\mathbb {H}}_{n}^{(g,\gamma )} = {{\,\mathrm{diag}\,}}\{h_{n,0}^{(g,\gamma )} , h_{n,1}^{(g,\gamma )} , \ldots , h_{n,n}^{(g,\gamma )} \}\) is a diagonal matrix of size \((n+1)\times (n+1)\) with entries

$$\begin{aligned} h_{n,m}^{(g,\gamma )} = \left\langle w, \left( B_{n,m}^{(g,\gamma )}\right) ^2\right\rangle = \frac{(-1)^{n+m} g^{g+2m-2}\Gamma (m + g\gamma )\,( n-m) !}{m!2\pi i~( 2n+g-1) \Gamma (g+n+m-1)}. \end{aligned}$$

Let us also consider the monic Bessel-Laguerre system

$$\begin{aligned} \widehat{{\mathbb {P}}}_n^{(g,\gamma )} = \left( {\widehat{B}}_{n,0}^{(g,\gamma )}(x,y), {\widehat{B}}_{n,1}^{(g,\gamma )}(x,y),\dots , {\widehat{B}}_{n,n}^{(g,\gamma )}(x,y)\right) ^{\top }. \end{aligned}$$
(40)

Both families of orthogonal polynomials (35) and (38) (orthogonal with respect to the same weight function on the same domain, and solutions of the same partial differential equation) are related as

$$\begin{aligned} {\mathbb {P}}_{n}^{(g,\gamma )} = {\mathbb {U}}_{n}^{(g,\gamma )} \widehat{{\mathbb {P}}}_n^{(g,\gamma )}, \qquad \text {where} \qquad {\mathbb {U}}_{n}^{(g,\gamma )} =\begin{pmatrix} u_{i,j,n}^{(g,\gamma )} \end{pmatrix}_{i,j=1}^{n+1} \end{aligned}$$
(41)

with \(u_{i,j,n}^{(g,\gamma )}=0\) if \(j>i\) and

$$\begin{aligned} u_{i,j,n}^{(g,\gamma )}=\displaystyle {\frac{(-1)^{n-i-j} g^{i+j-n-2} (g+i+n-2)_{n-i+1} (g \gamma +j-1)_{i-j}}{(j-1)! (i-j)!}}, \end{aligned}$$

if \( j \le i\). The orthogonality relation for the monic polynomials reads

$$\begin{aligned} \langle w, \widehat{{\mathbb {P}}}_{n}^{(g,\gamma )}\, \left( \widehat{{\mathbb {P}}}_{m}^{(g,\gamma )} \right) ^\top \rangle = \widehat{{{\mathbb {H}}}}_{n}^{(g,\gamma )} \Delta _{n,m}, \end{aligned}$$

where \(\widehat{{{\mathbb {H}}}}_{n}^{(g,\gamma )}\) is a non-singular matrix of size \((n+1) \times (n+1)\). By using (41) we have

$$\begin{aligned} \widehat{{{\mathbb {H}}}}_{n}^{(g,\gamma )} = \left( {{\mathbb {U}}}_{n}^{(g,\gamma )} \right) ^{-1} {\mathbb {H}}_{n}^{(g,\gamma )} \left( \left( {{\mathbb {U}}}_{n}^{(g,\gamma )} \right) ^{-1} \right) ^{\top }. \end{aligned}$$

Thus

$$\begin{aligned} \left( \widehat{{{\mathbb {H}}}}_{n}^{(g,\gamma )} \right) ^{-1} = \left( w_{i,j,n}^{(g,\gamma )}\right) _{i,j=1}^{n+1}, \qquad \text {where} \qquad w_{i,j,n}^{(g,\gamma )}= {\left\{ \begin{array}{ll} \displaystyle \sum \limits _{k=j}^{n+1} \frac{ u_{k,i,n}^{(g,\gamma )}\, u_{k,j,n}^{(g,\gamma )}}{h_{n,k-1}^{(g,\gamma )}}, &{} j\ge i,\\ \displaystyle \sum \limits _{k=i}^{n+1} \frac{ u_{k,i,n}^{(g,\gamma )}\, u_{k,j,n}^{(g,\gamma )}}{h_{n,k-1}^{(g,\gamma )}}, &{} j<i. \end{array}\right. } \end{aligned}$$
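The entry formula for \(w_{i,j,n}^{(g,\gamma )}\) simply encodes \(\big ({{\mathbb {U}}}_{n}^{(g,\gamma )}\big )^{\top }\big ({\mathbb {H}}_{n}^{(g,\gamma )}\big )^{-1}{{\mathbb {U}}}_{n}^{(g,\gamma )}\). The following sympy sketch (ours, for a small case) assembles both matrices from the expressions above and checks the formula against the direct inverse of \(\widehat{{{\mathbb {H}}}}_{n}^{(g,\gamma )}\).

```python
# Symbolic check (illustrative) of the inverse of \hat{H}_n^{(g,gamma)} for g = 3, gamma = 1/2, n = 3.
from sympy import Integer, Rational, I, pi, gamma, factorial, rf, zeros, simplify

g, gam, n = Integer(3), Rational(1, 2), 3

U = zeros(n + 1, n + 1)                 # connection matrix (41)
for i in range(1, n + 2):
    for j in range(1, i + 1):
        U[i - 1, j - 1] = (Integer(-1)**(n - i - j) * g**(i + j - n - 2)
                           * rf(g + i + n - 2, n - i + 1) * rf(g*gam + j - 1, i - j)
                           / (factorial(j - 1) * factorial(i - j)))

H = zeros(n + 1, n + 1)                 # diagonal matrix of the h_{n,m}^{(g,gamma)}
for m in range(n + 1):
    H[m, m] = (Integer(-1)**(n + m) * g**(g + 2*m - 2) * gamma(m + g*gam) * factorial(n - m)
               / (factorial(m) * 2*pi*I * (2*n + g - 1) * gamma(g + n + m - 1)))

Hhat = U.inv() * H * U.inv().T          # matrix of the monic family

W = zeros(n + 1, n + 1)                 # entries w_{i,j,n} from the displayed formula
for i in range(1, n + 2):
    for j in range(1, n + 2):
        W[i - 1, j - 1] = sum(U[k - 1, i - 1] * U[k - 1, j - 1] / H[k - 1, k - 1]
                              for k in range(max(i, j), n + 2))

print((W - Hhat.inv()).applyfunc(simplify))   # expected: the zero matrix
```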

The monic Bessel-Laguerre polynomials \( \{\widehat{{\mathbb {P}}}_n^{(g,\gamma )}\}_{n\ge 0}\) satisfy three term relations

$$\begin{aligned} x_j \widehat{{\mathbb {P}}}_n^{(g,\gamma )}={L}_{n,j}\widehat{{\mathbb {P}}}_{n+1}^{(g,\gamma )} + {\widehat{B}}_{n,j}^{(g,\gamma )} \widehat{{\mathbb {P}}}_{n}^{(g,\gamma )} + {\widehat{C}}_{n,j}^{(g,\gamma )} \widehat{{\mathbb {P}}}_{n-1}^{(g,\gamma )}, \quad j =1, 2, \end{aligned}$$
(42)

where the entries of \({\widehat{B}}_{n,1}^{(g,\gamma )}\) and \({\widehat{B}}_{n,2}^{(g,\gamma )}\) are now

$$\begin{aligned} {{\widehat{b}}}_{i,i}^{1}&=\frac{g (g+2i-2)}{(g+2n)(g+2n-2)},\quad 0 \le i \le n,\\ {{\widehat{b}}}_{i+1,i}^{1}&=-\frac{2 (g\gamma +i) (i+1)}{(g+2 n) (g+2n-2)},\quad 0 \le i \le n-1,\\ {\widehat{b}}_{i,i}^{2}&= \frac{(g+2n-2i-2)(g\gamma +2i)+ 2i(i+1)}{(g+2n)(g+2n-2)},\quad 0 \le i \le n,\\ {\widehat{b}}_{i,i+1}^{2}&= -\frac{2(n-i)g}{(g+2n)(g+2n-2)},\quad 0 \le i \le n-1. \end{aligned}$$

Moreover, \({\widehat{C}}_{n,1}^{(g,\gamma )}\) and \({\widehat{C}}_{n,2}^{(g,\gamma )}\) have the same structure as (30) and (31), respectively, where now

$$\begin{aligned} {\widehat{c}}_{i,i}^{1}&=-\frac{(n-i) g^2(g+n+i-2)}{(g+2 n-2)^2 (g+2n-3)(g+2n-1)},\quad 0 \le i \le n-1,\\ {\widehat{c}}_{i+1,i}^{1}&=-\frac{(i+1)g (g\gamma +i)(g+2i-1)}{(g+2 n-2)^2 (g+2n-3)(g+2n-1)}, \quad 0 \le i \le n-1,\\ {\widehat{c}}_{i+2,i}^{1}&=\frac{(i+1)(i+2) (g\gamma +i) (g\gamma +i+1)}{(g+2 n-2)^2 (g+2n-3)(g+2n-1)},\quad 0 \le i \le n-2, \end{aligned}$$

and

$$\begin{aligned} {\widehat{c}}_{i,i}^{2}&=-\frac{g(n-i)((g\gamma +2i)(g+2n-2i-3)+2i(i+1))}{(g+2n-3)(g+2n-1)(g+2n-2)^2}, \quad 0 \le i \le n-1, \\ {\widehat{c}}_{i+1,i}^{2}&=\frac{(i+1)(g+2n-3-i)(g\gamma +i)(g-g\gamma +2n-2-i)}{(g+2n-3)(g+2n-1)(g+2n-2)^2}, \quad 0 \le i \le n-1,\\ {\widehat{c}}_{i,i+1}^{2}&=\frac{(n-i)(n-i-1)g^2 }{(g+2n-3)(g+2n-1)(g+2n-2)^2}, \quad 0 \le i \le n-2. \end{aligned}$$

The non monic mutually orthogonal Bessel-Laguerre polynomial system \( \{{\mathbb {P}}_n^{(g,\gamma )}\}_{n\ge 0}\) also satisfies the three term relations (see Marriaga et al. 2017)

$$\begin{aligned} x_j {{\mathbb {P}}}_n^{(g,\gamma )}={A}_{n,j}^{(g,\gamma )} {{\mathbb {P}}}_{n+1}^{(g,\gamma )} + {B}_{n,j}^{(g,\gamma )} {{\mathbb {P}}}_{n}^{(g,\gamma )} + {C}_{n,j}^{(g,\gamma )} {{\mathbb {P}}}_{n-1}^{(g,\gamma )}, \quad j =1, 2. \end{aligned}$$
(43)

If we multiply (42) by \({{\mathbb {U}}}_{n}^{(g,\gamma )}\) we obtain

$$\begin{aligned} {A}_{n,j}^{(g,\gamma )}&={{\mathbb {U}}}_{n}^{(g,\gamma )} \, {L}_{n,j} \, \left( {{\mathbb {U}}}_{n+1}^{(g,\gamma )} \right) ^{-1}, \\ {B}_{n,j}^{(g,\gamma )}&={{\mathbb {U}}}_{n}^{(g,\gamma )} \, {\widehat{B}}_{n,j}^{(g,\gamma )} \, \left( {{\mathbb {U}}}_{n}^{(g,\gamma )} \right) ^{-1}, \\ {C}_{n,j}^{(g,\gamma )}&={{\mathbb {U}}}_{n}^{(g,\gamma )} \, {\widehat{C}}_{n,j}^{(g,\gamma )} \, \left( {{\mathbb {U}}}_{n-1}^{(g,\gamma )} \right) ^{-1}. \end{aligned}$$

The explicit expressions of the matrices in (43) are as follows. The recursion coefficient \(A_{n,1}^{(g,\gamma )}\) is the \((n+1)\times (n+2)\) matrix whose only nonzero entries are

$$\begin{aligned} a_{i,i,n}^{1} = \frac{(n+i+g-1)\,(-g)}{(2n+g-1)\,(2n+g)},\qquad 0\le i\le n, \end{aligned}$$

while \(A_{n,2}^{(g,\gamma )}\) is the \((n+1)\times (n+2)\) matrix whose nonzero entries are

$$\begin{aligned} a_{i,i,n}^{2}&= -\frac{\left( g\gamma +2i\right) \left( g+n-1+i\right) }{\left( 2n+g-1\right) \left( 2n+g\right) } ~,~~0\le i\le n,\\ a_{i,i+1,n}^{2}&=-\frac{\left( i+1\right) \left( g+n-1+i\right) (g+n+i)}{g \left( 2n+g-1\right) \left( 2n+g\right) } ~,~~0\le i\le n,\\ a_{i+1,i,n}^{2}&=-\frac{\left( g\gamma +i\right) g}{\left( 2n+g-1\right) \left( 2n+g\right) }~,~~0\le i\le n-1. \end{aligned}$$

The coefficients \(B_{n,j}^{(g,\gamma )}\) are

$$\begin{aligned} B_{n,1}^{(g,\gamma )}=\left( \begin{array}{cccc} b_{0,0,n}^{1}&{} &{} &{} \bigcirc \\ &{} b_{1,1,n}^{1} &{} &{} \\ &{} &{} \ddots &{} \\ \bigcirc &{} &{} &{} b_{n,n,n}^{1} \end{array} \right) , \quad B_{n,2}^{(g,\gamma )}=\left( \begin{array}{cccc} b_{0,0,n}^{2} &{} b_{0,1,n}^{2} &{} &{} \bigcirc \\ b_{1,0,n}^{2} &{} b_{1,1,n}^{2} &{} \ddots &{} \\ &{} \ddots &{} \ddots &{} b_{n-1,n,n}^{2}\\ \bigcirc &{} &{} b_{n,n-1,n}^{2} &{} b_{n,n,n}^{2} \end{array} \right) ~, \end{aligned}$$

where

$$\begin{aligned} b_{i,i,n}^{1}&=\frac{g\left( 2i+g-2\right) }{\left( 2n+g-2\right) \left( 2n+g\right) },~~0\le i\le n,\\ b_{i,i,n}^{2}&=\frac{\left( g\gamma +2i\right) \left( g-2+2i\right) }{\left( 2n+g-2\right) \left( 2n+g\right) }~,~~0\le i\le n,\\ b_{i,i+1,n}^{2}&=-\frac{2\left( i+1\right) \left( g+ n+i-1\right) \left( n-i\right) }{g \left( 2n+g-2\right) \left( 2n+g\right) }~,~~0\le i\le n-1,\\ b_{i+1,i,n}^{2}&=\frac{2g \left( g\gamma +i\right) }{\left( 2n+g-2\right) \left( 2n+g\right) }~,~~0\le i\le n-1. \end{aligned}$$

Moreover,

$$\begin{aligned} C_{n,1}^{(g,\gamma )}=\left( \begin{array}{cccc} c_{0,0,n}^{1} &{} &{} &{} \bigcirc \\ &{} c_{1,1,n}^{1} &{} &{} \\ &{} &{} \ddots &{} \\ \bigcirc &{} &{} &{} c_{n-1,n-1,n}^{1}\\ \hline 0 &{} \cdots &{} \cdots &{} 0 \end{array} \right) ~~, \end{aligned}$$

where

$$\begin{aligned} c_{i,i,n}^{1}=\frac{g(n-i)}{\left( 2n+g-2\right) \left( 2n+g-1\right) },~~0\le i\le n-1, \end{aligned}$$

and

$$\begin{aligned} C_{n,2}^{(g,\gamma )}=\left( \begin{array}{cccc} c_{0,0,n}^{2} &{} c_{0,1,n}^{2} &{} &{} \bigcirc \\ c_{1,0,n}^{2} &{} c_{1,1,n}^{2} &{} \ddots &{} \\ &{} \ddots &{} \ddots &{} c_{n-2,n-1,n}^{2}\\ \bigcirc &{} &{} c_{n-1,n-2,n}^{2} &{} c_{n-1,n-1,n}^{2}\\ \hline 0 &{} \cdots &{} 0 &{} c_{n,n-1,n}^{2} \end{array} \right) ~~, \end{aligned}$$

with entries

$$\begin{aligned} c_{i,i,n}^{2}&=\frac{(n-i)(g\gamma +2i)}{\left( 2n+g-2\right) \left( 2n+g-1\right) }~,~~0\le i\le n-1,\\ c_{i+1,i,n}^{2}&=-\frac{g(g\gamma +i)}{\left( 2n+g-2\right) \left( 2n+g-1\right) }~,~~0\le i\le n-1, \\ c_{i,i+1,n}^{2}&=-\frac{(i+1)(n-i)(n-i-1)}{g \left( 2n+g-2\right) \left( 2n+g-1\right) }~~~,~~0\le i\le n-2. \end{aligned}$$

Now, we define the Uvarov modification. We take \(\xi = (0,0)\). Let us denote by w the non positive definite moment functional associated with the bivariate Bessel-Laguerre polynomials defined in (36), and let \(\Lambda = M\) be a real number such that

$$\begin{aligned} M \ne (-1)^{n} \frac{\Gamma (n+1) \Gamma (g \gamma )}{ 2\pi i g^{2-g} \Gamma (g+n)},\quad n\ge 0. \end{aligned}$$

Hence, the Uvarov linear functional v is defined by

$$\begin{aligned} \langle v,f \rangle = \langle w, f \rangle + M f(0,0). \end{aligned}$$
(44)

By using (35) as well as \(B_{n-m}^{(g+2m,-g)}(0)=1\), we get

$$\begin{aligned} B_{n,m}^{(g,\gamma )}(0,y) = \frac{(-g)^m}{m!}y^m, \qquad 0\le m \le n, \end{aligned}$$

and then,

$$\begin{aligned} B_{n,0}^{(g,\gamma )}(0,0) = 1,\qquad B_{n,m}^{(g,\gamma )}(0,0) = 0, \quad 1\le m \le n. \end{aligned}$$

Observe that

$$\begin{aligned} {\mathbb {P}}_n^{(g,\gamma )}(0,0) = (1, 0, \ldots , 0)^\top . \end{aligned}$$

Using the above value, the matrix \({\mathsf {P}}_n(\xi )\) defined in (11) is explicitly given by

$$\begin{aligned} {\mathsf {P}}_n(\xi )= \left( {{{\mathbb {P}}}}_n^{(g,\gamma )}(0,0) \right) . \end{aligned}$$

Now, we compute the matrix \(\Lambda _n = 1 + \Lambda \,{{\mathsf {K}}_{n}}\).

In this case, \({\mathsf {K}}_{n}\) is explicitly given by

$$\begin{aligned} {\mathsf {K}}_{n}= & {} {\mathbf {K}}_n(u;(0,0),(0,0)) = \sum _{m=0}^n \left( {\mathbb {P}}_m^{(g,\gamma )}(0,0)\right) ^\top \, \left( {{{\mathbb {H}}}_{m}^{(g,\gamma )}}\right) ^{-1}\,{\mathbb {P}}_m^{(g,\gamma )}(0,0) \\= & {} \sum _{m=0}^n \frac{(-1)^{m} 2\pi i (2m+g-1)\Gamma (g+m-1)}{g^{g-2}\Gamma (g\gamma ) m!}\\= & {} \frac{2\pi i}{\Gamma (g\gamma )g^{g-2}}\sum _{m=0}^n \frac{(-1)^{m} (2m+g-1)\Gamma (g+m-1)}{m!} = (-1)^n \frac{2\pi i g^{2-g} \Gamma (g+n)}{\Gamma (n+1) \Gamma (g \gamma )} \end{aligned}$$

and

$$\begin{aligned} \Xi _{n}=\left( 1+\Lambda {{\mathsf {K}}}_{n}\right) ^{-1}\Lambda =\left( \Lambda ^{-1}+{{\mathsf {K}}}_{n}\right) ^{-1}=\frac{M\Gamma (n+1)\Gamma (g \gamma )}{\Gamma (n+1) \Gamma (g \gamma )+M(-1)^{n} 2\pi i g^{2-g} \Gamma (g+n)}. \end{aligned}$$
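Both the telescoped sum for \({\mathsf {K}}_{n}\) and the closed form for \(\Xi _{n}\) can be checked symbolically; a short sympy sketch (our illustration, with the illustrative values \(g=3\), \(\gamma =M=1/2\)) is the following.

```python
# Symbolic check (illustrative) of K_n and Xi_n above for g = 3, gamma = M = 1/2.
from sympy import Integer, Rational, I, pi, gamma, factorial, simplify

g, gam, M = Integer(3), Rational(1, 2), Rational(1, 2)

# telescoping of the sum defining K_n
for n in range(6):
    s = sum(Integer(-1)**m * (2*m + g - 1) * gamma(g + m - 1) / factorial(m)
            for m in range(n + 1))
    closed = Integer(-1)**n * gamma(g + n) / factorial(n)
    assert simplify(s - closed) == 0

# closed form of Xi_n against the direct computation (Lambda^{-1} + K_n)^{-1}
n = 2
Kn = Integer(-1)**n * 2*pi*I * g**(2 - g) * gamma(g + n) / (factorial(n) * gamma(g*gam))
Xi = (M * gamma(n + 1) * gamma(g*gam)
      / (gamma(n + 1)*gamma(g*gam) + M * Integer(-1)**n * 2*pi*I * g**(2 - g) * gamma(g + n)))
print(simplify(1 / (1 / M + Kn) - Xi))   # expected: 0
```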

We can now give explicitly the matrices \({\widetilde{A}}_{n,i}\), \({\widetilde{B}}_{n,i}\) and \({\widetilde{C}}_{n,i}\) of the three term relations for the vector polynomials \(\{{\mathbb {Q}}_n\}_{n\ge 0}\) orthogonal with respect to the linear functional v defined in (44), according to Theorem 7. First of all,

$$\begin{aligned} {\widetilde{A}}_{n,i}= A_{n,i}^{(g,\gamma )}, \quad n\ge 0. \end{aligned}$$

Moreover,

$$\begin{aligned} {\widetilde{B}}_{n,1}=\left( \begin{array}{cccc} {\widetilde{b}}_{0,0,n}^{1}&{} &{} &{} \bigcirc \\ &{} {\widetilde{b}}_{1,1,n}^{1} &{} &{} \\ &{} &{} \ddots &{} \\ \bigcirc &{} &{} &{} {\widetilde{b}}_{n,n,n}^{1} \end{array} \right) , ~~ \end{aligned}$$
Table 2: Bessel-Laguerre polynomials (total degree \(0 \le n \le 2\)) and the Uvarov perturbation described in Example 5.2, for \(g=3\), \(\gamma =M=1/2\)

where

$$\begin{aligned} {\widetilde{b}}_{0,0,n}^{1} =b_{0,0,n}^{1}+\frac{\Xi _{n} a_{0,0,n}^{1}}{h_{n,0}^{(g,\gamma )}}-\frac{\Xi _{n-1} a_{0,0,n-1}^{1}}{h_{n-1,0}^{(g,\gamma )}},\quad {\widetilde{b}}_{i,i,n}^{1} =b_{i,i,n}^{1},~~1\le i\le n, \end{aligned}$$

and

$$\begin{aligned} {\widetilde{B}}_{n,2}=\left( \begin{array}{cccc} {\widetilde{b}}_{0,0,n}^{2} &{} {\widetilde{b}}_{0,1,n}^{2} &{} &{} \bigcirc \\ {\widetilde{b}}_{1,0,n}^{2} &{} {\widetilde{b}}_{1,1,n}^{2} &{} \ddots &{} \\ &{} \ddots &{} \ddots &{} {\widetilde{b}}_{n-1,n,n}^{2}\\ \bigcirc &{} &{} {\widetilde{b}}_{n,n-1,n}^{2} &{} {\widetilde{b}}_{n,n,n}^{2} \end{array} \right) ~, \end{aligned}$$

with entries

$$\begin{aligned} {\widetilde{b}}_{0,0,n}^{2}&=b_{0,0,n}^{2}+\frac{\Xi _{n}a_{0,0,n}^{2}}{h_{n,0}^{(g,\gamma )}}-\frac{\Xi _{n-1}a_{0,0,n-1}^{2}}{h_{n-1,0}^{(g,\gamma )}},\quad {\widetilde{b}}_{i,i,n}^{2} =b_{i,i,n}^{2}~,~~1\le i\le n,\\ {\widetilde{b}}_{0,1,n}^{2}&=b_{0,1,n}^{2}-\frac{\Xi _{n-1}a_{0,1,n-1}^{2}}{h_{n-1,0}^{(g,\gamma )}},\quad {\widetilde{b}}_{i,i+1,n}^{2} =b_{i,i+1,n}^{2}~,~~1\le i\le n-1,\\ {\widetilde{b}}_{1,0,n}^{2}&=b_{1,0,n}^{2}+\frac{\Xi _{n}a_{1,0,n}^{2}}{h_{n,0}^{(g,\gamma )}},\quad {\widetilde{b}}_{i+1,i,n}^{2} =b_{i+1,i,n}^{2}~,~~1\le i\le n-1. \end{aligned}$$

Finally,

$$\begin{aligned} {\widetilde{C}}_{n,1}=\left( \begin{array}{cccc} {\widetilde{c}}_{0,0,n}^{1} &{} &{} &{} \bigcirc \\ &{} {\widetilde{c}}_{1,1,n}^{1} &{} &{} \\ &{} &{} \ddots &{} \\ \bigcirc &{} &{} &{} {\widetilde{c}}_{n-1,n-1,n}^{1}\\ \hline 0 &{} \cdots &{} \cdots &{} 0 \end{array} \right) ~~, \end{aligned}$$

where

$$\begin{aligned} {\widetilde{c}}_{0,0,n}^{1} =c_{0,0,n}^{1}\left( 1+\frac{\Xi _{n-1}}{h_{n,0}^{(g,\gamma )}}\right) \left( 1-\frac{\Xi _{n-1}}{h_{n-1,0}^{(g,\gamma )}} \right) ,\quad {\widetilde{c}}_{i,i,n}^{1} =c_{i,i,n}^{1},~~1\le i\le n-1, \end{aligned}$$

and

$$\begin{aligned} {\widetilde{C}}_{n,2}=\left( \begin{array}{cccc} {\widetilde{c}}_{0,0,n}^{2} &{} {\widetilde{c}}_{0,1,n}^{2} &{} &{} \bigcirc \\ {\widetilde{c}}_{1,0,n}^{2} &{} {\widetilde{c}}_{1,1,n}^{2} &{} \ddots &{} \\ &{} \ddots &{} \ddots &{} {\widetilde{c}}_{n-2,n-1,n}^{2}\\ \bigcirc &{} &{} {\widetilde{c}}_{n-1,n-2,n}^{2} &{} {\widetilde{c}}_{n-1,n-1,n}^{2}\\ \hline 0 &{} \cdots &{} 0 &{} {\widetilde{c}}_{n,n-1,n}^{2} \end{array} \right) ~~, \end{aligned}$$

with entries

$$\begin{aligned} {\widetilde{c}}_{0,0,n}^{2}&=c_{0,0,n}^{2} \left( 1+\frac{\Xi _{n-1}}{h_{n,0}^{(g,\gamma )}} \right) \left( 1-\frac{\Xi _{n-1}}{h_{n-1,0}^{(g,\gamma )}} \right) ,\quad {\widetilde{c}}_{i,i,n}^{2} =c_{i,i,n}^{2}~,~~1\le i\le n-1,\\ {\widetilde{c}}_{1,0,n}^{2}&=c_{1,0,n}^{2} \left( 1-\frac{\Xi _{n-1}}{h_{n-1,0}^{(g,\gamma )}}\right) ,\quad {\widetilde{c}}_{i+1,i,n}^{2} =c_{i+1,i,n}^{2}~,~~1\le i\le n-1, \\ {\widetilde{c}}_{0,1,n}^{2}&=c_{0,1,n}^{2} \left( 1+ \frac{\Xi _{n-1}}{h_{n,0}^{(g,\gamma )}} \right) ,\quad {\widetilde{c}}_{i,i+1,n}^{2} =c_{i,i+1,n}^{2}~~~,~~1\le i\le n-2. \end{aligned}$$
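To illustrate how these expressions are evaluated in practice, the following sympy sketch (ours) computes the perturbed corner entries \({\widetilde{b}}_{0,0,n}^{1}\) and \({\widetilde{c}}_{0,0,n}^{1}\) for \(g=3\), \(\gamma =M=1/2\), the parameters of Table 2; the values are in general complex, since the underlying moment functional is not positive definite. All remaining entries coincide with those of the unperturbed matrices, as the formulas above show.

```python
# Sketch (illustrative): perturbed corner entries for g = 3, gamma = M = 1/2.
from sympy import Integer, Rational, I, pi, gamma, factorial, simplify

g, gam, M = Integer(3), Rational(1, 2), Rational(1, 2)

h  = lambda n: (Integer(-1)**n * g**(g - 2) * gamma(g*gam) * factorial(n)
                / (2*pi*I * (2*n + g - 1) * gamma(g + n - 1)))           # h_{n,0}^{(g,gamma)}
Xi = lambda n: (M * gamma(n + 1) * gamma(g*gam)
                / (gamma(n + 1)*gamma(g*gam)
                   + M * Integer(-1)**n * 2*pi*I * g**(2 - g) * gamma(g + n)))
b1 = lambda n: g * (g - 2) / ((2*n + g - 2) * (2*n + g))                 # b^1_{0,0,n}
a1 = lambda n: -(n + g - 1) * g / ((2*n + g - 1) * (2*n + g))            # a^1_{0,0,n}
c1 = lambda n: g * n / ((2*n + g - 2) * (2*n + g - 1))                   # c^1_{0,0,n}

n = 2
b_tilde = b1(n) + Xi(n) * a1(n) / h(n) - Xi(n - 1) * a1(n - 1) / h(n - 1)
c_tilde = c1(n) * (1 + Xi(n - 1) / h(n)) * (1 - Xi(n - 1) / h(n - 1))
print(simplify(b_tilde))
print(simplify(c_tilde))
```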