1 Introduction

In 1959, Kramer [29] established his famous sampling theorem by generalizing a well-known result due to Whittaker, Shannon, and Kotel’nikov. Since then, Kramer’s theorem and its far-reaching generalizations have proved extremely useful in various fields of mathematics and related sciences, notably in sampling theory, interpolation theory, and signal analysis. In particular, significant results were achieved by Paul Leo Butzer and—often in very fruitful cooperation—by his students, colleagues, and research visitors at his institute in Aachen, see e.g. [2, 3, 19, 21]. Butzer’s work in signal theory up to 1998, for instance, was highly appreciated by Higgins [20] in a survey article including valuable comments on Kramer’s lemma (Sect. 3.4) and sampling theories generated by differential operators (Sect. 3.5).

The following ‘weighted’ version of Kramer’s theorem is particularly suited for our purpose, cf. e.g. [18, 21, Chap. 5].

Theorem 1.1

Let I be an interval of the real line and w a real-valued weight function with \(w>0\) almost everywhere on I. Define the Hilbert space \(L_w^2(I)\) of Lebesgue measurable functions \(f:\,I\rightarrow \mathbb {C}\) with inner product and norm

$$\begin{aligned} (f,g)_w:=\int _{I}f(x)\overline{g(x)}w(x)dx,\, \Vert f\Vert _w=\sqrt{(f,f)_w}\,, \,f,g\in L_w^2(I). \end{aligned}$$
(1.1)

Moreover, let a so-called Kramer kernel \(K(x,\lambda ):I\times \mathbb {R}\rightarrow \mathbb {R}\) be given with the properties

  1. (i)

       \(K(\cdot ,\lambda )\in L_w^2(I),\lambda \in \mathbb {R},\)

  2. (ii)

    There exists a monotone increasing, unbounded sequence of real numbers \(\{\lambda _k\}_{k\in \mathbb {N}_0},\) such that the functions \( \{K(x,\lambda _k)\}_{k\in \mathbb {N}_0}\) form a complete, orthogonal set in \( L_w^2(I)\).

If a function \(F(\lambda ),\,\lambda \in \mathbb {R}\), can be given, for some function \(g\in L_w^2(I)\), as

$$\begin{aligned} F(\lambda )=\int _{I}K(x,\lambda )g(x)w(x)dx, \end{aligned}$$
(1.2)

then F can be reconstructed from its samples in terms of an absolutely convergent series,

$$\begin{aligned} F(\lambda )=\sum _{k=0}^{\infty }F(\lambda _k)S_k(\lambda ),\,S_k(\lambda )= \frac{\big (K(\cdot ,\lambda ),K(\cdot ,\lambda _k)\big )_w}{\big (K(\cdot ,\lambda _k),K(\cdot , \lambda _k)\big )_w}. \end{aligned}$$
(1.3)

The convergence of the series (1.3) can be seen as follows: By the properties (i), (ii) of the Kramer kernel \(K(x,\lambda )\), the functions \(\chi _k(x):=K(x,\lambda _k)/\Vert K(\cdot ,\lambda _k)\Vert _w,x \in I,\,k\in \mathbb {N}_0\), form a complete orthonormal set in \(L_w^2(I)\). Hence, the generalized Fourier coefficients \(K(\cdot ,\lambda )^\wedge (k)\) with respect to the sequence \(\{\chi _k\}\) satisfy, for all \(\lambda \in \mathbb {R}\),

$$\begin{aligned} \begin{aligned}&K(\cdot ,\lambda )^\wedge (k)=\Vert K(\cdot ,\lambda _k)\Vert _w^{-1}\int _{I}K(x,\lambda )K(x,\lambda _k)w(x)dx= \Vert K(\cdot ,\lambda _k)\Vert _w S_k(\lambda ),\\&K(\cdot ,\lambda )^\wedge (k)\,\chi _k(x)=\Vert K(\cdot ,\lambda _k)\Vert _w S_k(\lambda )\frac{K(x,\lambda _k)}{\Vert K(\cdot ,\lambda _k)\Vert _w }= S_k(\lambda )K(x,\lambda _k). \end{aligned} \end{aligned}$$

Due to the completeness of \(\{\chi _k\}\), this yields for all \(\lambda \in \mathbb {R}\),

$$\begin{aligned} \begin{aligned}&\lim _{N\rightarrow \infty }\big \Vert K(\cdot ,\lambda )-\sum _{k=0}^{N}K(\cdot ,\lambda _k)S_k(\lambda )\big \Vert _w^2\\&\quad = \lim _{N\rightarrow \infty } \int _{I}\big \vert K(x,\lambda )-\sum _{k=0}^{N}K(\cdot ,\lambda )^\wedge (k)\,\chi _k(x)\big \vert ^2w(x)dx=0. \end{aligned} \end{aligned}$$

Now apply the Cauchy–Schwarz inequality to get the pointwise convergence

$$\begin{aligned} \begin{aligned}&\big \vert F(\lambda )-\sum _{k=0}^{N}F(\lambda _k)S_k(\lambda )\big \vert = \big \vert \int _I \big \lbrace K(x,\lambda )-\sum _{k=0}^{N}K(x,\lambda _k)S_k(\lambda ) \big \rbrace g(x)w(x)dx \big \vert \\&\quad \le \big \Vert K(\cdot ,\lambda )-\sum _{k=0}^{N}K(\cdot ,\lambda _k)S_k(\lambda )\big \Vert _w \cdot \Vert g\Vert _w \rightarrow 0\,(N\rightarrow \infty ). \end{aligned} \end{aligned}$$

As to the absolute convergence, it follows by means of

$$\begin{aligned} \frac{F(\lambda _k)}{\Vert K(\cdot ,\lambda _k)\Vert _w }= \frac{\int _I K(x,\lambda _k)g(x)w(x)dx}{\Vert K(\cdot ,\lambda _k)\Vert _w }= \int _I g(x)\chi _k(x)w(x)dx =:g^\wedge (k) \end{aligned}$$

and of the Bessel inequality for orthogonal systems in Hilbert spaces that

$$\begin{aligned} \begin{aligned}&\sum _{k=0}^{\infty }\big \vert F(\lambda _k)S_k(\lambda )\big \vert = \sum _{k=0}^{\infty }\big \vert g^\wedge (k)K(\cdot ,\lambda )^\wedge (k)\big \vert \\&\quad \le \big \lbrace \sum _{k=0}^{\infty }\big \vert g^\wedge (k)\big \vert ^2 \big \rbrace ^{1/2} \big \lbrace \sum _{k=0}^{\infty }\big \vert K(\cdot ,\lambda )^\wedge (k) \big \vert ^2\big \rbrace ^{1/2}=\Vert g\Vert _w\,\Vert K(\cdot ,\lambda )\Vert _w\,<\infty . \end{aligned} \end{aligned}$$

Moreover, the series (1.3) is uniformly convergent on any subset of the real line where \(\Vert K(\cdot ,\lambda )\Vert _w\) is bounded by a constant C independent of \(\lambda \), for then

$$\begin{aligned} \big \vert F(\lambda )-\sum _{k=0}^{N}F(\lambda _k)S_k(\lambda )\big \vert \le C \big \lbrace \sum _{k=N+1}^{\infty }\big \vert g^\wedge (k)\big \vert ^2 \big \rbrace ^{1/2} \rightarrow 0\,(N\rightarrow \infty ). \end{aligned}$$
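To illustrate Theorem 1.1 numerically, one may take the standard Kramer kernel \(K(x,\lambda )=\sin (\lambda x)\) on \(I=(0,\pi )\) with \(w\equiv 1\) and samples \(\lambda _k=k\) (indexed over \(k\ge 1\) instead of \(\mathbb {N}_0\), a harmless shift). The following sketch, assuming Python with SciPy, evaluates (1.2) and (1.3) by quadrature and checks the reconstruction at a non-sample point:

```python
import numpy as np
from scipy.integrate import quad

# Kramer kernel K(x, lam) = sin(lam x) on I = (0, pi), weight w = 1;
# sin(kx), k = 1, 2, ..., is a complete orthogonal system in L^2(0, pi).
def F(lam, g):
    # transform (1.2): F(lam) = int_I K(x, lam) g(x) w(x) dx
    return quad(lambda x: np.sin(lam * x) * g(x), 0.0, np.pi, limit=200)[0]

def S(k, lam):
    # sampling function (1.3); here ||K(., k)||_w^2 = pi / 2
    num = quad(lambda x: np.sin(lam * x) * np.sin(k * x), 0.0, np.pi, limit=200)[0]
    return num / (np.pi / 2.0)

g = lambda x: x * (np.pi - x)       # some g in L^2(0, pi)
lam = 2.7                           # a non-sample point
recon = sum(F(k, g) * S(k, lam) for k in range(1, 41))
err = abs(F(lam, g) - recon)
assert err < 1e-3                   # partial sum already reconstructs F(lam)
```

The tail estimate preceding this sketch explains the fast convergence: for this g, the generalized Fourier coefficients \(g^\wedge (k)\) decay like \(k^{-3}\).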

As noticed already by Kramer [29] and later by Campbell [7], the richest source of suitable Kramer kernels is the study of a self-adjoint boundary value problem on an interval \(I\subset \mathbb {R}\), generated by a linear differential operator \(\mathcal {L}\) of some even order. Then the functions \(K(x,\lambda _k),x\in I,k\in \mathbb {N}_0\), arise as the eigenfunctions of \(\mathcal {L}\), where the corresponding eigenvalues are given by the required countable set \(\{\lambda _k\}_{k\in \mathbb {N}_0}\).

In addition, it turns out that Kramer’s sampling expansion (1.3) may be seen as a generalized Lagrange interpolation formula, if the kernel \(K(x,\lambda )\) arises from a Sturm–Liouville boundary value problem, see e.g. [17, 19, 21, 35, 37, 38]. In fact, if \(G(\lambda )\) denotes a function with \(\{\lambda _k\}_{k\in \mathbb {N}_0}\) as its simple zeros, then the sampling functions may be represented as

$$\begin{aligned} S_k(\lambda )=\frac{G(\lambda )}{(\lambda -\lambda _k)G'(\lambda _k)},\,k\in \mathbb {N}_0. \end{aligned}$$
(1.4)
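In the classical Shannon case (a standard bilateral variant of Theorem 1.1, with \(I=(-\pi ,\pi )\), \(w\equiv 1\), complex kernel \(K(x,\lambda )=e^{i\lambda x}\), and samples \(\lambda _k=k\in \mathbb {Z}\)), formula (1.4) holds with \(G(\lambda )=\sin (\pi \lambda )\), and \(S_k\) reduces to the sinc function. A numerical sketch, assuming Python with SciPy:

```python
import numpy as np
from scipy.integrate import quad

def S_inner(k, lam):
    # S_k(lam) via the inner products in (1.3); ||K(., k)||^2 = 2 pi, and the
    # imaginary part of int exp(i(lam - k)x) dx vanishes by symmetry
    return quad(lambda x: np.cos((lam - k) * x), -np.pi, np.pi)[0] / (2.0 * np.pi)

def S_lagrange(k, lam):
    # Lagrange form (1.4) with G(lam) = sin(pi lam), G'(lam_k) = pi (-1)^k
    return np.sin(np.pi * lam) / ((lam - k) * np.pi * (-1) ** k)

for k in (-2, 0, 3):
    assert abs(S_inner(k, 2.4) - S_lagrange(k, 2.4)) < 1e-10
```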

Starting off from a regular boundary value problem of order \(n=2m,\,m\in \mathbb {N}\), Butzer and Schöttler [4] and Zayed et al. [36, 37] utilized Kramer’s approach in a more general framework. Nevertheless, almost all relevant examples given so far in the literature are associated with a Sturm–Liouville problem built upon the singular, second-order Bessel, Laguerre, and Jacobi equations, see e.g. [1, 18, 35, 38]. More specifically, let the (normalized) Bessel functions of the first kind, \(\{J_{\lambda }^\alpha \}_{\lambda >0}\), the Laguerre polynomials \(\{L_n^\alpha \}_{n\in \mathbb {N}_0}\), and the Jacobi polynomials \(\{P_n^{\alpha ,\beta }\}_{n\in \mathbb {N}_0}\) be respectively given in terms of a (generalized) hypergeometric function with one or two parameters \(\alpha>-1, \beta >-1\) by

$$\begin{aligned} \begin{aligned}&J_\lambda ^\alpha (x)=\dfrac{2^\alpha \Gamma (\alpha +1)}{(\lambda x)^\alpha }J_\alpha (\lambda x)={}_0F_1(-;\alpha +1;-\dfrac{1}{4}(\lambda x)^2),\,0\le x<\infty ,\\&L_n^\alpha (x)=\frac{(\alpha +1)_n}{n!}R_n^\alpha (x),\,R_n^\alpha (x):={}_1F_1(-n;\alpha +1;x),\,0\le x<\infty ,\\&P_n^{\alpha ,\beta }(x)=\frac{(\alpha +1)_n}{n!}R_n^{\alpha ,\beta }(x)\,\text {with}\\&\qquad R_n^{\alpha ,\beta }(x):={}_2F_1(-n,n+\alpha +\beta +1;\alpha +1;\dfrac{1-x}{2}),\, -1\le x\le 1, \end{aligned} \end{aligned}$$
(1.5)

(as usual, \((a)_0=1,\,(a)_n=a(a+1)\cdots (a+n-1),\,a\in \mathbb {C}\)). Then their spectral differential equations are of the common form [9, Secs. 7, 10]

$$\begin{aligned} \mathcal {L}_x y(x)=\dfrac{1}{w(x)}\dfrac{d}{dx}\big [p(x)\dfrac{d}{dx} y(x) \big ]=-\Lambda \,y(x),\,a<x<b, \end{aligned}$$
(1.6)

where the entries are respectively given by

$$\begin{aligned}{} & {} (B)\, w(x)=p(x)=x^{2\alpha +1};\,\Lambda =\lambda ^2,\,y(x)=J_\lambda ^\alpha (x),\,\lambda >0,\,0<x<\infty , \end{aligned}$$
(1.7)
$$\begin{aligned}{} & {} \begin{aligned} (L)\,w(x)&=e^{-x}x^\alpha ,\,p(x)=xw(x);\\ \Lambda&=n,\,y(x)=R_n^\alpha (x),\, n\in \mathbb {N}_0,\,0<x<\infty , \end{aligned} \end{aligned}$$
(1.8)
$$\begin{aligned}{} & {} \begin{aligned} (J)\, w(x)&=(1-x)^\alpha (1+x)^\beta ,\,p(x)=(1-x^2)w(x);\\ \Lambda&=n(n+\alpha +\beta +1),\,y(x)=R_n^{\alpha ,\beta }(x),\, n\in \mathbb {N}_0,\,-1<x<1. \end{aligned} \end{aligned}$$
(1.9)

Furthermore, there exists a discrete eigenfunction system associated with the Bessel equation, as well. In fact, when restricting the range of the equation to the finite interval (0, 1] and imposing the boundary condition \(J_\lambda ^\alpha (1)=0\) at the right endpoint, the parameter \(\lambda \) of the spectrum takes the discrete values \(\{\gamma _k^\alpha \}_{k\in \mathbb {N}_0}\), which arise as the consecutive positive zeros of the Bessel function \(J_\alpha (\lambda )\). The corresponding eigenfunctions \(\{J_{\gamma _k^\alpha }^\alpha (x)\}_{k\in \mathbb {N}_0}\), usually called the Fourier–Bessel functions, form an orthogonal system in the Hilbert space \(L_w^2(0,1)\) with weight function \(w(x)=x^{2\alpha +1}\), see e.g. [9, Sec.7.15], [33, Sec.18].
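The orthogonality of the Fourier–Bessel functions is easy to check numerically. After cancelling the normalizing factors in \(J_{\gamma _k^\alpha }^\alpha \), it reduces to the classical relation \(\int _0^1 x\,J_\alpha (\gamma _j^\alpha x)J_\alpha (\gamma _k^\alpha x)\,dx=0\) for \(j\ne k\); a sketch assuming Python with SciPy, for \(\alpha =1\):

```python
from scipy.integrate import quad
from scipy.special import jn_zeros, jv

alpha = 1
gam = jn_zeros(alpha, 4)            # consecutive positive zeros of J_alpha

def ip(j, k):
    # inner product in L^2_w(0,1), w(x) = x^{2 alpha + 1}, up to constant factors
    return quad(lambda x: jv(alpha, gam[j] * x) * jv(alpha, gam[k] * x) * x,
                0.0, 1.0)[0]

assert abs(ip(0, 1)) < 1e-8 and abs(ip(1, 3)) < 1e-8   # orthogonality
assert ip(2, 2) > 0                                     # positive norms
```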

In the present paper, all three equations will serve as a starting point towards a sampling theorem. But even more, they give rise to far-reaching extensions: While it is well known that the Laguerre and Jacobi polynomials belong to the very few orthogonal polynomial systems satisfying a spectral differential equation of second order, there is a long history of searching for polynomials which solve a higher-order equation, cf. [11]. This search led to the ‘Laguerre-type’ polynomials \(\{L_n^{\alpha ,N}(x)\}_{n\in \mathbb {N}_0}\) and the ‘Jacobi-type’ polynomials \(\{P_n^{\alpha ,\beta ,N}(x)\}_{n\in \mathbb {N}_0}\), which are orthogonal with respect to the inner products

$$\begin{aligned} \begin{aligned}&(f,g)_{w_{\alpha ,N}}:=\int _{0}^{\infty }f(x)g(x)w_\alpha (x)dx+N\,f(0)g(0),\\&(f,g)_{w_{\alpha ,\beta ,N}}:=\int _{-1}^{1}f(x)g(x)w_{\alpha ,\beta }(x)dx+N\,f(1)g(1), \end{aligned} \end{aligned}$$
(1.10)

respectively. Here, \(N>0\) is an additional point mass, and the (normalized) Laguerre and Jacobi weight functions are given by

$$\begin{aligned} \begin{aligned} w_\alpha (x):=\,&\dfrac{1}{\Gamma (\alpha +1)}e^{-x}x^\alpha ,\,0<x<\infty ,\\ w_{\alpha ,\beta }(x):=\,&h_{\alpha ,\beta }^{-1}(1-x)^\alpha (1+x)^\beta ,\\&h_{\alpha ,\beta }^{-1}= \frac{\Gamma (\alpha +\beta +2)}{2^{\alpha +\beta +1}\Gamma (\alpha +1)\Gamma (\beta +1)},\,-1<x<1. \end{aligned} \end{aligned}$$
(1.11)
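As a quick consistency check of the normalizations in (1.11), both weight functions have total mass one; a numerical sketch, assuming Python with SciPy, for the sample parameters \(\alpha =2,\,\beta =1/2\):

```python
import math
from scipy.integrate import quad

alpha, beta = 2.0, 0.5
# normalized Laguerre weight w_alpha and Jacobi weight w_{alpha,beta} from (1.11)
w_lag = lambda x: math.exp(-x) * x ** alpha / math.gamma(alpha + 1)
h_inv = (math.gamma(alpha + beta + 2)
         / (2 ** (alpha + beta + 1) * math.gamma(alpha + 1) * math.gamma(beta + 1)))
w_jac = lambda x: h_inv * (1 - x) ** alpha * (1 + x) ** beta

assert abs(quad(w_lag, 0.0, math.inf)[0] - 1.0) < 1e-6
assert abs(quad(w_jac, -1.0, 1.0)[0] - 1.0) < 1e-6
```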

Moreover, both polynomial systems are the eigenfunctions of a differential operator of order \(2\alpha +4\), provided that \(\alpha \in \mathbb {N}_0\), see Prop. 3.2 and Prop. 4.1.

Similarly, by adding a point mass \(N>0\) to the (normalized) Bessel weight function

$$\begin{aligned} \omega _\alpha (x):=\dfrac{2}{\alpha !}x^{2\alpha +1},\,0<x<\infty , \end{aligned}$$
(1.12)

at the origin, Everitt and the author [12] introduced a continuous system of Bessel-type functions solving an equation of the same higher order. Furthermore, by restricting the Bessel-type equation to the finite interval (0, 1] and imposing a suitable boundary condition at \(x=1\), we obtained a discrete eigenfunction system which generalizes the Fourier–Bessel functions, cf. [15, 30, 31]. One major aim of this paper is to determine the spectrum of their eigenvalues and to investigate to what extent this approach gives rise to a new sampling theorem.

In the proof of a sampling theorem associated with a classical Sturm–Liouville differential operator \(\mathcal {L}_x\) in (1.6), the knowledge of the corresponding Green formula turned out to be essential: Given two real-valued functions f, g on an appropriate domain \(D(\mathcal {L}_x)\subset L_w^2(a,b),\,-\infty \le a<b\le \infty \), and the skew-symmetric bilinear form \([f,g]_x:=p(x)\{f'(x)g(x)-f(x)g'(x)\},\) this formula reads

$$\begin{aligned} \int _{a}^{b}\big \lbrace (\mathcal {L}_xf)(x)g(x)-f(x)(\mathcal {L}_xg)(x) \big \rbrace w(x)dx =[f,g]_b-[f,g]_a. \end{aligned}$$
(1.13)

Imposing now, if necessary, certain boundary conditions on the functions f, g, such that \([f,g]_x\) vanishes at both endpoints of the interval, the operator \(\mathcal {L}_x\) becomes (formally) self-adjoint. As a simple consequence, two eigenfunctions of Eq. (1.6) with eigenvalues \(\Lambda _1\ne \Lambda _2\), say, are orthogonal in \(L_w^2(a,b)\) by means of \(\lbrace \Lambda _2-\Lambda _1\rbrace \int _{a}^{b}y_{\Lambda _1}^{}(x)y_{\Lambda _2}^{}(x)w(x)dx=0\).
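For a concrete instance of (1.13), consider the Legendre case \(\alpha =\beta =0\) of (1.6), (1.9), i.e. \(w\equiv 1,\,p(x)=1-x^2\), restricted to the subinterval \([0,1/2]\), on which no boundary term vanishes automatically. A numerical sketch with two polynomials, assuming Python with SciPy:

```python
from scipy.integrate import quad

# Legendre operator L y = (p y')' with p(x) = 1 - x^2, w = 1
p = lambda x: 1.0 - x * x
f, g = (lambda x: x ** 3), (lambda x: x ** 2)
Lf = lambda x: p(x) * 6 * x - 2 * x * (3 * x ** 2)   # (p f')' = p f'' + p' f'
Lg = lambda x: p(x) * 2 - 2 * x * (2 * x)

# left-hand side of (1.13) on [a, b] = [0, 1/2]
lhs = quad(lambda x: Lf(x) * g(x) - f(x) * Lg(x), 0.0, 0.5)[0]
# bracket [f, g]_x = p(x){f'(x)g(x) - f(x)g'(x)}
bracket = lambda x: p(x) * (3 * x ** 2 * g(x) - f(x) * 2 * x)
assert abs(lhs - (bracket(0.5) - bracket(0.0))) < 1e-12
```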

So, when looking for a sampling theory associated with a higher-order differential equation, one has to establish a Green-type formula that appropriately generalizes the classical formula (1.13). It is another purpose of the paper to provide such formulas for all three higher-order equations of Bessel-, Laguerre-, and Jacobi-type.

The paper is organized as follows. In Section 2, we first state the higher-order Bessel-type differential equation on the half line and introduce the continuous system of Bessel-type functions together with the eigenvalues. In particular, we present the Green-type formula for the corresponding differential operator \(\mathcal {B}_x^{\alpha ,N},\,\alpha \in \mathbb {N}_0,\,N>0\), with respect to the inner product

$$\begin{aligned} (f,g)_{\omega _{\alpha ,N}}^{}:=\int _{0}^{1}f(x)\overline{g(x)}\omega _\alpha (x)dx+N\,f(0) \overline{g(0)},\,f,g\in L_{\omega _{\alpha ,N}}^2((0,1]), \end{aligned}$$
(1.14)

where the weight function \(\omega _\alpha (x)\) is given in (1.12), but now restricted to \(0<x\le 1\), see Thm. 2.1. Then we impose a boundary condition at \(x=1\) to be fulfilled by the discrete system of the so-called Fourier–Bessel-type functions. The evaluation of the inner product (1.14), involving both a Bessel-type function and a (discrete) Fourier–Bessel-type function, will play an essential role in determining a corresponding sampling theory. As a first new example, a Kramer-type theorem is explicitly established in the case of the fourth-order Bessel-type equation on the interval (0, 1].

Section 3 is devoted to the Whittaker equation as the continuous counterpart of the Laguerre equation. Again we proceed from the Green formula (1.13), now associated with the Laguerre differential operator \(\mathcal {L}_x^\alpha \). But in general, the eigenfunctions of the Whittaker equation do not belong to the corresponding Hilbert space \(L_{w_\alpha }^2(0,\infty ).\) So, similarly as in the Fourier–Bessel case, we restrict the differential equation to the finite interval (0, 1] and impose a boundary condition at the right endpoint to be fulfilled by its solutions. Indeed, we succeeded in determining a discrete spectrum of eigenvalues such that the eigenfunctions are orthogonal in the space \(L_{w_\alpha }^2((0,1])\). This gives rise to a new sampling theorem, see Thm. 3.1. Furthermore, we present the higher-order differential equation for the Laguerre-type polynomials and introduce the corresponding Whittaker-type equation and its eigenfunctions. Then we establish the Green-type formula for the higher-order differential operator, which may serve as an essential tool towards an extended theory.

Finally, in Section 4, we first recall a known sampling theorem associated with the classical Jacobi equation (1.6),(1.9), see [18, 35] as well as La. 4.1. In La. 4.2, we add some details of the proof which pave the way to establish a new sampling theorem based on the higher-order Jacobi-type equation on the interval \([-1,1]\), see Cor. 4.4. To this end we introduce the Jacobi-type polynomials \(\{P_n^{\alpha ,\beta ,N}(x)\}_{n\in \mathbb {N}_0}\) and their eigenvalues \(\{\lambda _n^{\alpha ,\beta ,N}\}_{n\in \mathbb {N}_0}\). In a second step, we then define their continuous counterparts, cf. (4.10–12), and close with a new Green-type formula for the Jacobi-type differential operator \(\mathcal {L}_x^{\alpha ,\beta ,N}\).

2 Sampling theory associated with the higher-order Bessel-type differential equation

In view of (1.6), (1.7), the Bessel (eigen)functions \(y_\lambda (x)=J_\lambda ^\alpha (x)\) satisfy, for any \(\alpha >-1\), the differential equation

$$\begin{aligned} \mathcal {B}_x^\alpha \, y_\lambda (x):=x^{-2\alpha -1}D_x[x^{2\alpha +1}D_x\, y_\lambda (x)]= -\lambda ^2 y_\lambda (x),\,0<x,\lambda <\infty . \end{aligned}$$
(2.1)

With the new parameter \(N>0\), the Bessel-type functions are then given in three equivalent ways, see [12],

$$\begin{aligned} \begin{aligned}&J_\lambda ^{\alpha ,N}(x)=a_\lambda ^{\alpha ,N}J_\lambda ^\alpha (x)-N\,b_\lambda ^\alpha J_\lambda ^{\alpha +1}(x),\, a_\lambda ^{\alpha ,N}:=1+Nb_\lambda ^\alpha ,\,b_\lambda ^\alpha :=\frac{(\lambda /2)^{2\alpha +2}}{\Gamma (\alpha +2)}\\&\quad =a_\lambda ^{\alpha ,N}J_\lambda ^\alpha (x)+\dfrac{N}{2x}\,b_\lambda ^{\alpha -1}( J_\lambda ^\alpha )'(x) \,\text {with}\,( J_\lambda ^\alpha )'(x)=-\frac{\lambda ^2x}{2(\alpha +1)}J_\lambda ^{\alpha +1}(x)\\&\quad =J_\lambda ^\alpha (x)+N\,K_\lambda ^\alpha (x)\,\text {with}\, K_\lambda ^\alpha (x)=-\frac{1}{\alpha +1}b_\lambda ^{\alpha +1}x^2J_\lambda ^{\alpha +2}(x). \end{aligned} \end{aligned}$$
(2.2)
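The equivalence of the three representations in (2.2) can be checked numerically via the normalization of \(J_\lambda ^\alpha \) in (1.5). A sketch assuming Python with SciPy; the values \(\alpha =1,\,\lambda =2.3,\,N=0.7,\,x=0.6\) are arbitrary:

```python
import math
from scipy.special import jv

def Jn(a, lam, x):
    # normalized Bessel function from (1.5): 2^a Gamma(a+1) (lam x)^{-a} J_a(lam x)
    return 2 ** a * math.gamma(a + 1) * jv(a, lam * x) / (lam * x) ** a

def b(a, lam):
    # b_lambda^a = (lam/2)^{2a+2} / Gamma(a+2)
    return (lam / 2.0) ** (2 * a + 2) / math.gamma(a + 2)

a, lam, N, x = 1, 2.3, 0.7, 0.6
a_lam = 1 + N * b(a, lam)

form1 = a_lam * Jn(a, lam, x) - N * b(a, lam) * Jn(a + 1, lam, x)
dJ = -lam ** 2 * x / (2 * (a + 1)) * Jn(a + 1, lam, x)      # (J^a)'(x) from (2.2)
form2 = a_lam * Jn(a, lam, x) + N / (2 * x) * b(a - 1, lam) * dJ
form3 = Jn(a, lam, x) - N * b(a + 1, lam) * x ** 2 * Jn(a + 2, lam, x) / (a + 1)

assert abs(form1 - form2) < 1e-10 and abs(form1 - form3) < 1e-10
```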

Proposition 2.1

[12, 31] For \(N>0\) and provided that \(\alpha \in \mathbb {N}_0\), the Bessel-type functions \(\{J_\lambda ^{\alpha ,N}(x)\}_{\lambda >0}\) satisfy the differential equation

$$\begin{aligned} \begin{aligned} \mathcal {B}_x^{\alpha ,N} J_\lambda ^{\alpha ,N}(x):=&\big \lbrace \mathcal {B}_x^\alpha +\frac{N}{2^{2\alpha +2}(\alpha +2)!} \mathcal {T}_x^\alpha \big \rbrace J_\lambda ^{\alpha ,N}(x)\\ =&-\Lambda _\lambda ^{\alpha ,N} J_\lambda ^{\alpha ,N}(x),\,0<x<\infty , \end{aligned} \end{aligned}$$
(2.3)

where \( \mathcal {B}_x^\alpha \) is the classical Bessel operator given in (2.1) or, equivalently, by

$$\begin{aligned} \mathcal {B}_x^\alpha y(x)=\big [D_x^2+\dfrac{2\alpha +1}{x}D_x\big ]y(x) =[x^2\delta _x^2+(2\alpha +2)\delta _x]y(x),\,\delta _x:=x^{-1}D_x, \end{aligned}$$

and \(\mathcal {T}_x^\alpha \) denotes the differential operator of order \(2\alpha +4\),

$$\begin{aligned} \begin{aligned} \mathcal {T}_x^\alpha y(x)&=(-1)^{\alpha +1} \bigg [D_x^2+\frac{2\alpha +1}{x}D_x-\frac{4\alpha +4}{x^2}\bigg ]^{\alpha +2} y(x)\\&=(-1)^{\alpha +1}x^2\delta _x^{2\alpha +4}[x^{2\alpha +2}y(x)]. \end{aligned} \end{aligned}$$
(2.4)

Similarly, the eigenvalue parameter splits up into \( \Lambda _\lambda ^{\alpha ,N}=\lambda ^2+\dfrac{N}{2^{2\alpha +2}(\alpha +2)!} \lambda ^{2\alpha +4}.\)

Here and in the following, \(D_x^{i}\equiv (D_x)^{i},\,i\in \mathbb {N}\), denotes an i-fold differentiation with respect to x, while \(\delta _x^{i}\equiv (\delta _x)^{i}\) is the iterated ‘Bessel derivative’. Notice that the continuous system of Bessel-type functions gives rise to a generalized Hankel transform [10], but its members do not belong to \(L_{\omega _{\alpha ,N}}^2(0,\infty )\). So, being interested in a possible sampling theorem, we reduce the range of Eq. (2.3) to the finite interval \(I=(0,1]\), as in the classical Fourier–Bessel case.

For functions \(f:[0,1]\rightarrow \mathbb {C}\) belonging to the domain

$$\begin{aligned} D( \hat{\mathcal {B}}_x^{\alpha ,N})=\bigg \lbrace \begin{aligned}&f:f^{(j)}\in AC_{loc}(0,1],\,j=0,1,\dots ,2\alpha +3,\\&\mathcal {T}_x^\alpha f\in L_{\omega _\alpha }^2((0,1]), \lim _{x\rightarrow 0+}f(x)=f(0) \end{aligned} \bigg \rbrace , \end{aligned}$$
(2.5)

let the Bessel-type differential operator \(\hat{\mathcal {B}}_x^{\alpha ,N}\) be defined by

$$\begin{aligned} ( \hat{\mathcal {B}}_x^{\alpha ,N}f)(x):= {\left\{ \begin{array}{ll} (\mathcal {B}_x^{\alpha ,N}f)(x),\,0<x \le 1\\ 2(\alpha +1)\delta _xf(x)\vert _{x=0+},\,x=0. \end{array}\right. } \end{aligned}$$
(2.6)

Notice that for \(f(x)=J_\lambda ^{\alpha ,N}(x)\), the definition at the origin is justified since

$$\begin{aligned} \begin{aligned}&2(\alpha +1)\delta _x J_\lambda ^{\alpha ,N}(x)\vert _{x=0+}=\lbrace 2(\alpha +1)\delta _x J_\lambda ^\alpha (x)-2N\,b_\lambda ^{\alpha +1}\delta _x\big [x^2 J_\lambda ^{\alpha +2}(x)\big ]\rbrace \vert _{x=0+}\\&\quad =-\lambda ^2J_\lambda ^{\alpha +1}(0)-N\,\frac{4(\lambda /2)^{2\alpha +4}}{(\alpha +2)!}J_\lambda ^{\alpha +2}(0)= -\Lambda _\lambda ^{\alpha ,N}\equiv \lim _{x\rightarrow 0+}\big [\mathcal {B}_x^{\alpha ,N} J_\lambda ^{\alpha ,N}(x)\big ]. \end{aligned} \end{aligned}$$

An essential feature of the Bessel-type operator is its Green-type formula.

Theorem 2.1

For \(\alpha \in \mathbb {N}_0,\,N>0\), let \((\cdot ,\cdot )_{\omega _{\alpha ,N}}^{}\) denote the inner product defined in (1.14). Then for any \(f,g\in D(\hat{\mathcal {B}}_x^{\alpha ,N})\), there holds

$$\begin{aligned} \begin{aligned}&(\hat{\mathcal {B}}_x^{\alpha ,N}f,g)_{\omega _{\alpha ,N}}^{}- (f,\hat{\mathcal {B}}_x^{\alpha ,N}g)_{\omega _{\alpha ,N}}^{}\\&\quad =\frac{2}{\alpha !}\int _{0+}^{1}\big \lbrace (\mathcal {B}_x^{\alpha ,N}f)(x)g(x)\!-\!f(x) (\mathcal {B}_x^{\alpha ,N}g)(x)\big \rbrace x^{2\alpha +1}dx\\&\qquad +N\big \lbrace (\hat{\mathcal {B}}_x^{\alpha ,N}f)(0)g(0)-f(0) (\hat{\mathcal {B}}_x^{\alpha ,N}g)(0)\big \rbrace =\frac{2}{\alpha !}\lbrace f'(1)g(1)-f(1)g'(1)\rbrace \\&\qquad +\frac{N}{2^{2\alpha +1}\alpha !(\alpha +2)!}\sum _{j=0}^{2\alpha +3}(-1)^{\alpha +1+j} \delta _x^{2\alpha +3-j}\big [x^{2\alpha +2}f(x)\big ]\delta _x^j\big [x^{2\alpha +2}g(x)\big ]\vert _{x=1}. \end{aligned} \end{aligned}$$

Proof

Obviously, the Bessel operator \(\mathcal {B}_x^\alpha \) satisfies the Green formula

$$\begin{aligned} \begin{aligned}&\int _{0+}^{1}\big \lbrace (\mathcal {B}_x^\alpha f)(x)g(x)\!-\!f(x) (\mathcal {B}_x^\alpha g)(x)\big \rbrace x^{2\alpha +1}dx\\&\quad =\int _{0+}^{1}\big \lbrace [x^{2\alpha +1}f'(x)]'g(x)\!-\! f(x)[x^{2\alpha +1}g'(x)]'\big \rbrace dx\\&\quad =x^{2\alpha +1}\big \lbrace f'(x)g(x)-f(x)g'(x)\big \rbrace \big \vert _{x=0+}^1= f'(1)g(1)-f(1)g'(1). \end{aligned} \end{aligned}$$
(2.7)

Furthermore, by observing that \( \delta _x^{2j}=x^{-1}D_x\delta _x^{2j-1}\), an \((\alpha +2)\)-fold integration by parts yields

$$\begin{aligned} \begin{aligned}&\int _{0+}^{1}\big \lbrace (\mathcal {T}_x^\alpha f)(x)g(x)-f(x) (\mathcal {T}_x^\alpha g)(x)\big \rbrace x^{2\alpha +1}dx\\&\quad =(-1)^{\alpha +1}\! \int _{0+}^{1}\big \lbrace D_x\delta _x^{2\alpha +3}[x^{2\alpha +2}f(x) ]g(x)-f(x)D_x\delta _x^{2\alpha +3}[x^{2\alpha +2}g(x) ]\big \rbrace x^{2\alpha +2}dx\\&\quad =\dots =\lbrace H(f,g)(1)-H(g,f)(1)\rbrace -\lbrace H(f,g)(0)-H(g,f)(0)\rbrace ,\,\text {where}\\&\qquad H(f,g)(x):=\sum _{j=0}^{\alpha +1} (-1)^{\alpha +1+j}\delta _x^{2\alpha +3-j}[x^{2\alpha +2}f(x)]\,\delta _x^j [x^{2\alpha +2}g(x)]. \end{aligned} \end{aligned}$$

Here, the remaining integrals have cancelled each other by symmetry in f, g. Concerning the value of the sum H(f,g) at the origin, it turns out that only the term for \(j=\alpha +1\) does not vanish. Hence, by some straightforward calculations, we get

$$\begin{aligned} \begin{aligned} H(f,g)(0)&=\delta _x^{\alpha +2}[x^{2\alpha +2}f(x)]\,\delta _x^{\alpha +1} [x^{2\alpha +2}g(x)]\big \vert _{x=0}\\&=2^{2\alpha +2}(\alpha +2)!(\alpha +1)!\,(\delta _x f)(0)g(0) \end{aligned} \end{aligned}$$

and thus

$$\begin{aligned} -\frac{N\lbrace H(f,g)(0)-H(g,f)(0)\rbrace }{2^{2\alpha +1}\alpha !(\alpha +2)!} =-N(2\alpha +2)\big \lbrace (\delta _xf)(0)g(0)-f(0)(\delta _xg)(0)\big \rbrace . \end{aligned}$$

This expression, however, annihilates the term \(N\lbrace (\hat{\mathcal {B}}_x^{\alpha ,N}f)(0)g(0)-f(0)(\hat{\mathcal {B}}_x^{\alpha ,N}g)(0)\rbrace \) by definition (2.6). Putting things together and observing that

$$\begin{aligned} H(f,g)(1)-H(g,f)(1)=\sum _{j=0}^{2\alpha +3} (-1)^{\alpha +1+j}\delta _x^{2\alpha +3-j}[x^{2\alpha +2}f(x)]\,\delta _x^j [x^{2\alpha +2}g(x)]\big \vert _{x=1} \end{aligned}$$

completes the proof of Thm. 2.1. \(\square \)

As an immediate consequence of this Green-type formula, the Bessel-type differential operator \(\hat{\mathcal {B}}_x^{\alpha ,N}\) can be made (formally) self-adjoint with respect to the inner product \((\cdot ,\cdot )_{\omega _{\alpha ,N}}^{}\) by imposing a boundary condition at the right endpoint \(x=1\), namely

$$\begin{aligned} \begin{aligned}&\lbrace f'(1)g(1)-f(1)g'(1)\rbrace +\frac{N}{2^{2\alpha +2}(\alpha +2)!}\cdot \\&\quad \cdot \sum _{j=0}^{2\alpha +3} (-1)^{\alpha +1+j}\delta _x^{2\alpha +3-j}[x^{2\alpha +2}f(x)]\,\delta _x^j [x^{2\alpha +2}g(x)]\big \vert _{x=1}=0. \end{aligned} \end{aligned}$$
(2.8)

Choosing now \(f(x)=y_\lambda (x):=J_\lambda ^{\alpha ,N}(x)\), \(g(x)=y_\mu (x):=J_\mu ^{\alpha ,N}(x)\) as two distinct eigenfunctions of the operator \(\hat{\mathcal {B}}_x^{\alpha ,N}\) and using the abbreviation

$$\begin{aligned} Y_\lambda ^{(k)}(x):=\delta _x^k[x^{2\alpha +2}y_\lambda (x)],\,k\in \mathbb {N}_0, \end{aligned}$$
(2.9)

so that \(Y_\lambda ^{(0)}(1)=y_\lambda (1),\,Y_\lambda ^{(1)}(1)=y_\lambda '(1)+(2\alpha +2)y_\lambda (1)\), condition (2.8) becomes

$$\begin{aligned} \begin{aligned}&\lbrace Y_\lambda ^{(1)}(1)Y_\mu ^{(0)}(1)-Y_\lambda ^{(0)}(1)Y_\mu ^{(1)}(1)\rbrace +\\&\qquad +\frac{N}{2^{2\alpha +2}(\alpha +2)!} \sum _{j=0}^{2\alpha +3}(-1)^{\alpha +1+j}Y_\lambda ^{(2\alpha +3-j)}(1)Y_\mu ^{(j)}(1)=0. \end{aligned} \end{aligned}$$
(2.10)

Since \(N>0\) is arbitrary, both terms in (2.10) must vanish simultaneously. This holds, in particular, if there exists a constant \(c=c(\alpha ,N)\in \mathbb {R}\), not depending on the eigenvalue parameter, such that

$$\begin{aligned} \begin{aligned}&cY_\lambda ^{(0)}(1)+Y_\lambda ^{(1)}(1)=0,\quad cY_\mu ^{(0)}(1)+Y_\mu ^{(1)}(1)=0,\\&N\sum _{j=0}^{2\alpha +3}(-1)^{\alpha +1+j}Y_\lambda ^{(2\alpha +3-j)}(1)Y_\mu ^{(j)}(1)=0. \end{aligned} \end{aligned}$$
(2.11)

In the limit case \(N=0\), it is easy to see that (2.11) is satisfied if \(\lambda ,\mu \) are zeros of the Bessel function \(J_\alpha (x)\) or, more generally, of the combination \((c+\alpha +2)J_\alpha (x)+xJ_\alpha '(x)\). This actually gives rise to the classical Fourier–Bessel- and Fourier-Dini-functions, respectively, cf. [9, 7.10.4], [33, Chap. 18].

If \(N>0\), however, it is a crucial task to determine the constant \(c(\alpha ,N)\) and to restrict the eigenvalue parameter such that (2.11) is fulfilled. In order to simplify the derivatives occurring in the sum of (2.11) up to the order \(2\alpha +3\), a promising strategy is to start off from a \(\lambda \)-dependent, second-order differential equation for the Bessel-type functions \(\{J_\lambda ^{\alpha ,N}(x)\}_{\lambda >0}\). Then, after differentiating this equation sufficiently often and evaluating each of the resulting equations at the endpoint \(x=1\), all quantities \(Y_\lambda ^{(2\alpha +3-j)}(1)\) (resp. \(Y_\mu ^{(j)}(1)\)) can be expressed in terms of \(Y_\lambda ^{(1)}(1),\,Y_\lambda ^{(0)}(1)\), and finally of \(Y_\lambda ^{(0)}(1)\) alone.

In the following, we illustrate our strategy in the first non-trivial case \(\alpha =0\). For simplicity, we drop this parameter as long as there is no confusion. Here, the differential equation (2.3) for the Bessel-type functions \(y_\lambda (x):=J_\lambda ^{0,N}(x)\) becomes the fourth-order equation

$$\begin{aligned} \begin{aligned}&\mathcal {B}_x^N y_\lambda (x):=\big \lbrace \mathcal {B}_x+\dfrac{N}{8}\mathcal {T}_x\big \rbrace y_\lambda (x) =-\Lambda _\lambda ^N y_\lambda (x),\,0<x\le 1,\,\text {with}\\&\mathcal {B}_x y_\lambda (x)=\big [D_x^2+\dfrac{1}{x}D_x\big ]y_\lambda (x),\, \mathcal {T}_x y_\lambda (x)=-\big [D_x^2+\dfrac{1}{x}D_x-\dfrac{4}{x^2} \big ]^2 y_\lambda (x) \end{aligned} \end{aligned}$$
(2.12)

and eigenvalue parameter \(\Lambda _\lambda ^N=\lambda ^2+(N/8)\,\lambda ^4\). Notice that Eq. (2.12), when multiplied by \(-8x/N,\,N>0\), takes the Lagrange symmetric form

$$\begin{aligned} \big (xy_\lambda ''(x)\big )''-\big (\big [\frac{8x}{N}+\frac{9}{x}\big ]y_\lambda '(x)\big )' =\Lambda x\,y_\lambda (x), \,\Lambda =\lambda ^4+\dfrac{8}{N}\lambda ^2. \end{aligned}$$
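This Lagrange symmetric form can be confirmed symbolically: multiplying the operator in (2.12) by \(-8x/N\) must reproduce the differential expression above, identically in y. A sketch assuming Python with SymPy:

```python
import sympy as sp

x, N = sp.symbols('x N', positive=True)
y = sp.Function('y')(x)

B = lambda u: sp.diff(u, x, 2) + sp.diff(u, x) / x            # B_x (alpha = 0)
op = lambda u: sp.diff(u, x, 2) + sp.diff(u, x) / x - 4 * u / x**2
T = lambda u: -op(op(u))                                      # T_x in (2.12)

lhs = -(8 * x / N) * (B(y) + (N / 8) * T(y))
rhs = (sp.diff(x * sp.diff(y, x, 2), x, 2)
       - sp.diff((8 * x / N + 9 / x) * sp.diff(y, x), x))
assert sp.simplify(lhs - rhs) == 0     # the two operators agree identically
```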

In a number of papers by Everitt and the author [13,14,15], we used this equation to develop a spectral theory for the Bessel-type differential operator on the half line and, in particular, on the finite interval \(0<x\le 1\). In this latter case, we found a discrete spectrum of eigenvalues, which are based on the real zeros of an even function \(\varphi _N(x)\) involving the classical Bessel functions \(J_0(x),\,J_1(x)\), see (2.19).

By using the above approach, we obtain the function \(\varphi _N(x)\) as follows: The Bessel-type functions \(y_\lambda (x):=J_\lambda ^{0,N}(x)\) satisfy the \(\lambda \)-dependent, second-order equation, see [14, La. 7.1],

$$\begin{aligned} \begin{aligned}&\big [a_\lambda ^2x^2-N\big ]y_\lambda ^{(2)}(x)+\big [a_\lambda ^2x-\frac{3N}{x}\big ]y_\lambda ^{(1)}(x)+\big [(\lambda ^2x^2-4)a_\lambda ^2+4\big ]y_\lambda (x)=0,\\&\quad \text {where}\, a_\lambda :=a_\lambda ^{0,N}=1+\frac{N}{4} \lambda ^2. \end{aligned} \end{aligned}$$

With \(Y_\lambda ^{(k)}(x):=\delta _x^k[x^2y_\lambda (x)]\) as in (2.9), a straightforward calculation yields

$$\begin{aligned} \begin{aligned}&\big [a_\lambda ^2x^2-N\big ]Y_\lambda ^{(2)}(x)-2a_\lambda ^2 Y_\lambda ^{(1)}(x)+\big [\lambda ^2a_\lambda ^2+\dfrac{4}{x^2}\big ]Y_\lambda (x)=0,\\&\big [a_\lambda ^2x^2-N\big ]Y_\lambda ^{(3)}(x)+\big [\lambda ^2a_\lambda ^2+\dfrac{4}{x^2}\big ]Y_\lambda ^{(1)}(x)-\dfrac{8}{x^4} \, Y_\lambda (x)=0. \end{aligned} \end{aligned}$$
(2.13)
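Both relations in (2.13) follow from the \(\lambda \)-dependent second-order equation by eliminating \(y_\lambda ''\) (and, after one differentiation, \(y_\lambda '''\)). This can be confirmed symbolically, e.g. with Python/SymPy:

```python
import sympy as sp

x, N, lam = sp.symbols('x N lambda', positive=True)
a2 = (1 + N * lam**2 / 4)**2                 # a_lambda^2
y = sp.Function('y')

delta = lambda u: sp.diff(u, x) / x          # Bessel derivative delta_x
Y0 = x**2 * y(x)
Y1 = delta(Y0); Y2 = delta(Y1); Y3 = delta(Y2)

# the second-order equation for y_lambda, solved for y''
ypp = sp.solve((a2 * x**2 - N) * sp.Derivative(y(x), x, 2)
               + (a2 * x - 3 * N / x) * sp.Derivative(y(x), x)
               + ((lam**2 * x**2 - 4) * a2 + 4) * y(x),
               sp.Derivative(y(x), x, 2))[0]
yppp = sp.diff(ypp, x).subs(sp.Derivative(y(x), x, 2), ypp)

rel1 = (a2 * x**2 - N) * Y2 - 2 * a2 * Y1 + (lam**2 * a2 + 4 / x**2) * Y0
rel2 = (a2 * x**2 - N) * Y3 + (lam**2 * a2 + 4 / x**2) * Y1 - 8 / x**4 * Y0
for rel in (rel1, rel2):
    r = (rel.subs(sp.Derivative(y(x), x, 3), yppp)
            .subs(sp.Derivative(y(x), x, 2), ypp))
    assert sp.simplify(r) == 0               # both relations hold modulo the ODE
```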

Assuming now that there is a constant \(c=c(N)\) with \(cY_\lambda ^{(0)}(1)+Y_\lambda ^{(1)}(1)=0\) and setting \(Y_\lambda ^{(k)}:=Y_\lambda ^{(k)}(1)\), for simplicity, we obtain, for \(x=1\),

$$\begin{aligned} \begin{aligned}&\big [a_\lambda ^2-N\big ]Y_\lambda ^{(2)}=2a_\lambda ^2 Y_\lambda ^{(1)}-\big [\lambda ^2a_\lambda ^2+4\big ]Y_\lambda =-\big [(2c+\lambda ^2)a_\lambda ^2+4\big ]Y_\lambda ,\\&\big [a_\lambda ^2-N\big ]Y_\lambda ^{(3)}=-\big [\lambda ^2a_\lambda ^2+4\big ]Y_\lambda ^{(1)}+8Y_\lambda =\big [c\big (\lambda ^2 a_\lambda ^2+4\big )+8\big ]Y_\lambda . \end{aligned} \end{aligned}$$
(2.14)

Hence, the boundary condition in (2.11) is equivalent to

$$\begin{aligned} \begin{aligned}&\big [a_\lambda ^2-N\big ]\big [a_\mu ^2-N\big ]\big \lbrace -Y_\lambda ^{(3)}Y_\mu +Y_\lambda ^{(2)}Y_\mu ^{(1)}-Y_\lambda ^{(1)}Y_\mu ^{(2)} +Y_\lambda Y_\mu ^{(3)}\big \rbrace \\&\quad =\big [a_\mu ^2-N\big ]\big \lbrace \big [2c^2a_\lambda ^2-8 \big ]Y_\lambda Y_\mu \big \rbrace - \big [a_\lambda ^2-N\big ]\big \lbrace \big [2c^2a_\mu ^2-8\big ]Y_\lambda Y_\mu \big \rbrace \\&\quad =\big \lbrace 2c^2\big [a_\lambda ^2a_\mu ^2-Na_\lambda ^2\big ]- 2c^2\big [a_\lambda ^2a_\mu ^2-Na_\mu ^2\big ]+8\big [a_\lambda ^2-a_\mu ^2\big ]\big \rbrace Y_\lambda Y_\mu \\&\quad =\big \lbrace \big [8-2c^2N\big ]\big [a_\lambda ^2-a_\mu ^2\big ]\big \rbrace Y_\lambda Y_\mu =0. \end{aligned} \end{aligned}$$

This, however, holds for any values of \(\lambda ,\mu \), if \(c^2=4/N\) with the two solutions \(c=c_\pm :=\pm 2/\sqrt{N}\). So it remains to determine the eigenvalues such that

$$\begin{aligned} c_\pm Y_\lambda ^{(0)}+Y_\lambda ^{(1)}\equiv [c_\pm +2]J_\lambda ^{0,N}(1)+(J_\lambda ^{0,N})'(1)=0. \end{aligned}$$

In view of the representation (2.2) and the classical Bessel equation (2.1), there holds

$$\begin{aligned} \begin{aligned} J_\lambda ^{0,N}(x)=&a_\lambda J_\lambda ^0(x)+\frac{N}{2x}(J_\lambda ^0)'(x),\\ (J_\lambda ^{0,N})'(x)=&\big [a_\lambda -\frac{N}{2x^2}\big ](J_\lambda ^0)'(x)+ \frac{N}{2x}(J_\lambda ^0)''(x)\\ =&\big [a_\lambda -\frac{N}{x^2}\big ](J_\lambda ^0)'(x)-\frac{N\lambda ^2}{2x}J_\lambda ^0(x). \end{aligned} \end{aligned}$$

Since \(J_\lambda ^0(1)=J_0(\lambda ),\,(J_\lambda ^0)'(1)=-(\lambda ^2/2)J_\lambda ^1(1)=-\lambda J_1(\lambda )\), it follows that

$$\begin{aligned} \begin{aligned}&J_\lambda ^{0,N}(1)=a_\lambda J_0(\lambda )-\frac{N}{2}\lambda J_1(\lambda ),\\&(J_\lambda ^{0,N})'(1)=-\dfrac{N\lambda ^2}{2}J_0(\lambda )-[a_\lambda -N]\,\lambda J_1(\lambda ), \end{aligned} \end{aligned}$$
(2.15)

and thus

$$\begin{aligned} \begin{aligned} 0&=[c_\pm +2]J_\lambda ^{0,N}(1)+(J_\lambda ^{0,N})'(1)\\&=[c_\pm +2]\big \lbrace a_\lambda J_0(\lambda )-\frac{N}{2}\lambda J_1(\lambda )\big \rbrace - \frac{N}{2}\lambda ^2 J_0(\lambda )-[a_\lambda -N]\,\lambda J_1(\lambda )\\&=J_0(\lambda )\big \lbrace -\dfrac{N}{2}\lambda ^2+[c_\pm +2]\,a_\lambda \big \rbrace -\lambda J_1(\lambda )\big \lbrace a_\lambda -N+[c_\pm +2]\,\frac{N}{2}\big \rbrace \\&=J_0(\lambda )\{c_\pm a_\lambda +2\}-\frac{\lambda }{c_\pm }J_1(\lambda )\big \lbrace c_\pm a_\lambda + c_\pm ^2\frac{N}{2}\big \rbrace \\&=\big \lbrace J_0(\lambda )\mp \frac{\sqrt{N}}{2}\lambda J_1(\lambda )\big \rbrace \big \lbrace \pm \frac{2}{\sqrt{N}}(1+\frac{N}{4}\lambda ^2)+2\big \rbrace . \end{aligned} \end{aligned}$$
(2.16)

Henceforth we focus on the case \(c_+=2/\sqrt{N}\), while the case \(c_-=-2/\sqrt{N}\) can be treated similarly.
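Identity (2.16) also admits a quick numerical sanity check (our addition, not part of the original argument): the boundary expression assembled from (2.15) must agree with the factored right-hand side for arbitrary \(\lambda \) and either branch of \(c_\pm \). A minimal Python sketch for \(c_+\), with \(J_0,J_1\) evaluated by their power series:

```python
import math

def j0(x):  # Bessel J0 by its power series (adequate for moderate x)
    s = t = 1.0
    for m in range(1, 80):
        t *= -(x * x / 4.0) / (m * m)
        s += t
    return s

def j1(x):  # Bessel J1 by its power series
    s = t = x / 2.0
    for m in range(1, 80):
        t *= -(x * x / 4.0) / (m * (m + 1))
        s += t
    return s

def boundary_sides(N, lam):
    """Return LHS and RHS of (2.16) for the branch c = c_+ = 2/sqrt(N)."""
    a = 1.0 + N * lam * lam / 4.0                                     # a_lambda
    c = 2.0 / math.sqrt(N)                                            # c_+
    J_at_1 = a * j0(lam) - 0.5 * N * lam * j1(lam)                    # J_lam^{0,N}(1), cf. (2.15)
    dJ_at_1 = -0.5 * N * lam * lam * j0(lam) - (a - N) * lam * j1(lam)  # (J_lam^{0,N})'(1)
    lhs = (c + 2.0) * J_at_1 + dJ_at_1
    rhs = (j0(lam) - 0.5 * math.sqrt(N) * lam * j1(lam)) * (c * a + 2.0)
    return lhs, rhs

print(boundary_sides(1.0, 1.3))
```

Both sides agree to machine precision, confirming the factorization through \(\varphi _N(\lambda )\).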

Proposition 2.2

For \(N>0\), let the operator \(\mathcal {B}_x^N\) in (2.12) be extended to, cf. (2.6),

$$\begin{aligned} (\hat{\mathcal {B}}_x^N f)(x):=\bigg \lbrace \begin{aligned}&(\mathcal {B}_x^Nf)(x),\,0<x \le 1\\&2\delta _xf(x)\vert _{x=0+},\,x=0, \end{aligned} \end{aligned}$$
(2.17)

where the domain is restricted by two boundary conditions involving \(c=2/\sqrt{N}\), i.e.,

$$\begin{aligned} D_c:={\left\{ \begin{array}{ll} f:f\in AC_{loc}^{(3)}(0,1],\,\mathcal {T}_x f\in L^2((0,1],x),\lim _{x\rightarrow 0+}f(x)=f(0),\\ (c+2)f(1)+f'(1)=0,\\ f'''(1)+(c+3)f''(1)+[8-(c+1)(c+2)]f(1)=0. \end{array}\right. } \end{aligned}$$
(2.18)

Furthermore, let \(\{\gamma _{k,N}^{}\}_{k=0}^\infty \) denote the unbounded sequence of positive, strictly increasing zeros of

$$\begin{aligned} \varphi _N(\lambda ) =J_0(\lambda )-\frac{\sqrt{N}}{2}\lambda J_1(\lambda ). \end{aligned}$$
(2.19)
  1. (a)

    The operator \(\hat{\mathcal {B}}_x^N\) is self-adjoint in the space \(L_{\omega _N}^2((0,1])\) with inner product and norm

    $$\begin{aligned} (f,g)_{\omega _N}:=2\int _{0}^{1}f(x)\overline{g(x)}x\,dx+Nf(0)\overline{g(0)},\,\Vert f\Vert _{\omega _N}=\sqrt{(f,f)_{\omega _N}}. \end{aligned}$$
    (2.20)
  2. (b)

    Its discrete, simple spectrum is given by \(\Lambda _{\gamma _{k,N}}^N:=\gamma _{k,N}^2\big (1+\frac{N}{8}\gamma _{k,N}^2\big )\), while the corresponding eigenfunctions are the Fourier–Bessel-type functions \(J_{\gamma _{k,N}}^{0,N}(x),\,0\le x\le 1,\,k\in \mathbb {N}_0\), normalized by \(J_{\gamma _{k,N}}^{0,N}(0)=1\). In particular, they satisfy the two boundary conditions in (2.18).

  3. (c)

    The Fourier–Bessel-type functions form a complete orthogonal system in \(L_{\omega _N}^2((0,1])\) with

    $$\begin{aligned} \big (J_{\gamma _{k,N}}^{0,N},J_{\gamma _{n,N}}^{0,N}\big )_{\omega _N}=h_k^N\,\delta _{k,n},\, h_k^N:=\big (1+\frac{N}{4}\gamma _{k,N}^2\big )^3 \big (J_1(\gamma _{k,N}^{})\big )^2,\, k,n\in \mathbb {N}_0. \end{aligned}$$
    (2.21)
  4. (d)

    The corresponding eigenfunction expansion of a function \(f\in L_{\omega _N}^2((0,1])\), called its Fourier–Bessel-type series, converges to f in the mean, i.e.,

    $$\begin{aligned} \lim _{n\rightarrow \infty }\big \Vert f- \sum _{k=0}^{n}(h_k^N)^{-1} \big (f,J_{\gamma _{k,N}}^{0,N}\big )_{\omega _N} J_{\gamma _{k,N}}^{0,N}\big \Vert _{\omega _N}=0. \end{aligned}$$

Proof

See [14, Sec. 6], [15, Sec. 2]. To show that the Fourier–Bessel-type functions satisfy the boundary conditions in (2.18), we consider their equivalent form

$$\begin{aligned} cY_\lambda ^{(0)}+Y_\lambda ^{(1)}=0,\,Y_\lambda ^{(3)}+cY_\lambda ^{(2)}-3(c-1)Y_\lambda ^{(1)}- c(c-3)Y_\lambda ^{(0)}=0. \end{aligned}$$

But in view of (2.14) with \(c^2=4/N\), this is true since

$$\begin{aligned} \begin{aligned}&\big [a_\lambda ^2-N\big ]\big [Y_\lambda ^{(3)}+cY_\lambda ^{(2)}-3(c-1)Y_\lambda ^{(1)}- c(c-3)Y_\lambda ^{(0)} \big ]\\&\quad =\big [c(\lambda ^2a_\lambda ^2+4)+8 \big ]-c\,\big [(2c+\lambda ^2)a_\lambda ^2+4 \big ]+2c^2\big [a_\lambda ^2-N \big ]=0. \end{aligned} \end{aligned}$$

\(\square \)
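The orthogonality and normalization in (2.21) can also be verified directly. The following numerical sketch (our addition) assumes the representation \(J_\lambda ^{0,N}(x)=a_\lambda J_0(\lambda x)-\frac{N\lambda }{2x}J_1(\lambda x)\) implied by (2.2) with \(J_\lambda ^0(x)=J_0(\lambda x)\); it locates the first two zeros of \(\varphi _1\) and evaluates the inner product (2.20) by Simpson's rule:

```python
import math

def j0(x):  # Bessel J0 by power series
    s = t = 1.0
    for m in range(1, 80):
        t *= -(x * x / 4.0) / (m * m)
        s += t
    return s

def j1(x):  # Bessel J1 by power series
    s = t = x / 2.0
    for m in range(1, 80):
        t *= -(x * x / 4.0) / (m * (m + 1))
        s += t
    return s

def phi(N, lam):                      # phi_N from (2.19)
    return j0(lam) - 0.5 * math.sqrt(N) * lam * j1(lam)

def phi_zeros(N, count, step=0.01):   # scan for sign changes, then bisect
    out, a, fa = [], 1e-6, phi(N, 1e-6)
    while len(out) < count:
        b, fb = a + step, phi(N, a + step)
        if fa * fb < 0:
            x, y = a, b
            for _ in range(100):
                m = 0.5 * (x + y)
                if phi(N, x) * phi(N, m) <= 0: y = m
                else: x = m
            out.append(0.5 * (x + y))
        a, fa = b, fb
    return out

def Jbt(N, lam, x):                   # Bessel-type J_lam^{0,N}(x), normalized to 1 at x = 0
    if x == 0.0:
        return 1.0
    a = 1.0 + N * lam * lam / 4.0
    return a * j0(lam * x) - (N * lam / (2.0 * x)) * j1(lam * x)

def inner(N, lam, mu, n=400):         # inner product (2.20), Simpson's rule
    s = 0.0
    for i in range(n + 1):
        x = i / n
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * Jbt(N, lam, x) * Jbt(N, mu, x) * x
    return 2.0 * s / (3.0 * n) + N * Jbt(N, lam, 0.0) * Jbt(N, mu, 0.0)

g0, g1 = phi_zeros(1.0, 2)
print(inner(1.0, g0, g1), inner(1.0, g0, g0))
```

For \(N=1\), the off-diagonal inner product vanishes to integration accuracy, while the diagonal one reproduces \(h_0^1=(1+\gamma _{0,1}^2/4)^3J_1(\gamma _{0,1})^2\) from (2.21).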

For \(N=0,1,4\), the first five zeros of \(\varphi _N(\lambda )\) (up to 6 decimals) are listed in Table 1.

Table 1 Numerical values
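The tabulated zeros are easy to recompute. A short script (our addition) scans \(\varphi _N\) for sign changes and bisects; for \(N=0\) one has \(\varphi _0=J_0\), so the classical zeros \(2.404826,\,5.520078,\,8.653728,\dots \) must reappear:

```python
import math

def j0(x):  # Bessel J0 by power series (adequate up to x ~ 25)
    s = t = 1.0
    for m in range(1, 80):
        t *= -(x * x / 4.0) / (m * m)
        s += t
    return s

def j1(x):  # Bessel J1 by power series
    s = t = x / 2.0
    for m in range(1, 80):
        t *= -(x * x / 4.0) / (m * (m + 1))
        s += t
    return s

def phi(N, lam):  # phi_N from (2.19)
    return j0(lam) - 0.5 * math.sqrt(N) * lam * j1(lam)

def phi_zeros(N, count, step=0.01):
    """First `count` positive zeros of phi_N: scan for sign changes, then bisect."""
    out, a, fa = [], 1e-6, phi(N, 1e-6)
    while len(out) < count:
        b, fb = a + step, phi(N, a + step)
        if fa * fb < 0:
            x, y = a, b
            for _ in range(100):
                m = 0.5 * (x + y)
                if phi(N, x) * phi(N, m) <= 0: y = m
                else: x = m
            out.append(0.5 * (x + y))
        a, fa = b, fb
    return out

for N in (0.0, 1.0, 4.0):
    print(N, [round(z, 6) for z in phi_zeros(N, 5)])
```

Since \(\varphi _N\) subtracts a term that is positive on \((0,\gamma _0^{J_0})\), the first zero moves left as \(N\) grows.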

Now we can state a Kramer-type theorem with respect to the Fourier–Bessel-type functions.

Theorem 2.2

In the Hilbert space \(L_{\omega _N}^2((0,1])\) with inner product (2.20), let a Kramer-type kernel be given, for any \(\lambda >0\), by the Bessel-type function

$$\begin{aligned} K(x,\lambda )=J_\lambda ^{0,N}(x),\,0\le x\le 1. \end{aligned}$$

Moreover, let \(\{\gamma _{k,N}^{}\}_{k=0}^\infty \) denote the sequence of positive zeros of \(\varphi _N(\lambda )\) defined in (2.19), so that the functions \(\{K(x,\gamma _{k,N}^{})\}_{k=0}^\infty \) form a complete orthogonal system in \(L_{\omega _N}^2((0,1])\) by virtue of (2.21).

  1. (a)

    If a function \(F(\lambda ),\,\lambda >0\), is given, for some function \(g\in L_{\omega _N}^2((0,1])\), as

    $$\begin{aligned} F(\lambda )=2\int _{0}^{1}K(x,\lambda )\overline{g(x)}x\,dx+N\overline{g(0)}, \end{aligned}$$

    then F can be reconstructed from its samples via the absolutely convergent series

    $$\begin{aligned} F(\lambda )=\sum _{k=0}^{\infty }F(\gamma _{k,N}^{})S_k^N(\lambda ),\,S_k^N(\lambda )= (h_k^N)^{-1}\big (J_\lambda ^{0,N},J_{\gamma _{k,N}}^{0,N}\big )_{\omega _N}. \end{aligned}$$
    (2.22)
  2. (b)

    The sampling functions \(S_k^N(\lambda ),\,k\in \mathbb {N}_0\), are explicitly given by

    $$\begin{aligned} S_k^N(\lambda )=\frac{-\gamma _{k,N}^{}\,\varphi _N(\lambda )}{a_{\gamma _{k,N}}^3 J_1(\gamma _{k,N}^{})} \bigg [\frac{2a_\lambda \,a_{\gamma _{k,N}}}{\lambda ^2-\gamma _{k,N}^2}-\frac{N^{3/2}}{2} \bigg ],\quad a_\lambda =1+\frac{N}{4}\lambda ^2. \end{aligned}$$
    (2.23)

Proof

(a) This is an immediate application of Kramer’s Theorem 1.1, see (1.2)–(1.3).

(b) One way to determine the functions \( S_k^N(\lambda )\) is to utilize the fact that the inner product of two Bessel-type functions is related to that of the related classical Bessel functions by, see [12, Thm. 4.2],

$$\begin{aligned} \begin{aligned}&\big (J_\lambda ^{0,N},J_\mu ^{0,N}\big )_{\omega _N}:=2\int _{0}^{1}J_\lambda ^{0,N}(x) J_\mu ^{0,N}(x)x\,dx+N\\&\quad =2a_\lambda a_\mu \int _{0}^{1}J_\lambda ^0(x)J_\mu ^0(x)x\,dx+N\,J_\lambda ^0(1)J_\mu ^0(1)- \frac{N^2}{4}(J_\lambda ^0)'(1)(J_\mu ^0)'(1),\lambda ,\mu \ge 0. \end{aligned} \end{aligned}$$

In view of the Bessel equation (2.1) and the Green formula for the operator \(\mathcal {B}_x^0\), there holds

$$\begin{aligned} \begin{aligned} \big [\lambda ^2-\mu ^2 \big ]\int _{0}^{1}J_\lambda ^0(x)J_\mu ^0(x)x\,dx&= \int _{0}^{1}\big \lbrace J_\lambda ^0(x)(\mathcal {B}_x^0 J_\mu ^0)(x) -(\mathcal {B}_x^0 J_\lambda ^0)(x)J_\mu ^0(x)\big \rbrace x\,dx\\&=J_\lambda ^0(1)(J_\mu ^0)'(1)-J_\mu ^0(1)(J_\lambda ^0)'(1). \end{aligned} \end{aligned}$$

So, recalling that \(J_\lambda ^0(1)=J_0(\lambda ),\,(J_\lambda ^0)'(1)=-\lambda \,J_1(\lambda )\) and choosing \(\mu =\gamma _{k,N}^{}\), we obtain

$$\begin{aligned} \begin{aligned} \big (J_\lambda ^{0,N},J_{\gamma _{k,N}}^{0,N}\big )_{\omega _N}=&2a_\lambda a_{\gamma _{k,N}}\frac{\lambda J_1(\lambda )J_0(\gamma _{k,N})-\gamma _{k,N}J_1(\gamma _{k,N})J_0(\lambda )}{\lambda ^2-\gamma _{k,N}^2}\\&+N\,\big [J_0(\lambda )J_0(\gamma _{k,N})-\frac{N}{4}\lambda J_1(\lambda )\,\gamma _{k,N} J_1(\gamma _{k,N})\big ]. \end{aligned} \end{aligned}$$

Since \( \varphi _N(\gamma _{k,N}) =J_0(\gamma _{k,N})-\frac{\sqrt{N}}{2}\gamma _{k,N} J_1(\gamma _{k,N})=0\), the right-hand side is equal to

$$\begin{aligned} -\gamma _{k,N}J_1(\gamma _{k,N})\bigg [J_0(\lambda )- \frac{\sqrt{N}}{2}\lambda J_1(\lambda )\bigg ]\, \bigg [\frac{2a_\lambda \,a_{\gamma _{k,N}}}{\lambda ^2-\gamma _{k,N}^2}-\frac{N^{3/2}}{2} \bigg ]. \end{aligned}$$

Dividing this expression by the constant \(h_k^N\) in (2.21) then yields (2.23).

A second proof is of interest in its own right, for it proceeds directly from the Bessel-type equation (2.12) and the Green-type formula in Thm. 2.1 for \(\alpha =0\). In view of (2.10), we have

$$\begin{aligned} \begin{aligned}&\big [\Lambda _{\gamma _{k,N}}^N-\Lambda _\lambda ^N \big ]\big (J_\lambda ^{0,N},J_{\gamma _{k,N}}^{0,N}\big )_{\omega _N}= \big (\hat{\mathcal {B}}_x^N J_\lambda ^{0,N},J_{\gamma _{k,N}}^{0,N}\big )_{\omega _N} -\big (J_\lambda ^{0,N},\hat{\mathcal {B}}_x^N J_{\gamma _{k,N}}^{0,N}\big )_{\omega _N}\\&\quad =2\big [Y_\lambda ^{(1)}Y_{\gamma _{k,N}}^{}-Y_\lambda ^{} Y_{\gamma _{k,N}}^{(1)}\big ]+ \frac{N}{4}\big [-Y_\lambda ^{(3)}Y_{\gamma _{k,N}}+Y_\lambda ^{(2)} Y_{\gamma _{k,N}}^{(1)}-Y_\lambda ^{(1)}Y_{\gamma _{k,N}}^{(2)} +Y_\lambda Y_{\gamma _{k,N}}^{(3)}\big ]\\&\quad =:\Omega _{\lambda ,\gamma _{k,N}}^{N}, \end{aligned} \end{aligned}$$
(2.24)

say, where \(\Lambda _\lambda ^N:=\lambda ^2\big (1+\frac{N}{8}\lambda ^2\big )\) and \(Y_\lambda ^{(k)}:=\delta _x^k[x^2 J_\lambda ^{0,N}(x)]\big \vert _{x=1},\,k=0,\dots ,3\). Due to the relationships (2.14) and the fact that \(Y_{\gamma _{k,N}}^{(1)}=-c\,Y_{\gamma _{k,N}}^{},\,c=2/\sqrt{N}\), some straightforward calculations yield

$$\begin{aligned} \begin{aligned} -\frac{N}{4}Y_{\gamma _{k,N}}^{(2)}&=\frac{N}{4}\frac{\big [2c+\gamma _{k,N}^2\big ] \,a_{\gamma _{k,N}}^2+4}{a_{\gamma _{k,N}}^2-N}Y_{\gamma _{k,N}}\\&=\bigg \lbrace \sqrt{N}+ a_{\gamma _{k,N}}-1+ \frac{N}{a_{\gamma _{k,N}}-\sqrt{N}}\bigg \rbrace Y_{\gamma _{k,N}},\\ \frac{N}{4}Y_{\gamma _{k,N}}^{(3)}&= \frac{N}{4}\frac{c\,\big [\gamma _{k,N}^2\,a_{\gamma _{k,N}}^2+4\big ]+8}{a_{\gamma _{k,N}}^2-N} Y_{\gamma _{k,N}}=c\,\bigg \lbrace a_{\gamma _{k,N}}-1+ \frac{N}{a_{\gamma _{k,N}}-\sqrt{N}}\bigg \rbrace Y_{\gamma _{k,N}}, \end{aligned} \end{aligned}$$

while

$$\begin{aligned} \begin{aligned}&-\frac{N}{4}cY_\lambda ^{(2)}=-\frac{N}{2}c\,\bigg \lbrace 1+ \frac{N}{a_\lambda ^2-N}\bigg \rbrace Y_\lambda ^{(1)}+c\,\bigg \lbrace a_\lambda -1+ \frac{Na_\lambda }{a_\lambda ^2-N}\bigg \rbrace Y_\lambda ,\\&-\frac{N}{4}Y_\lambda ^{(3)}=\bigg \lbrace a_\lambda -1+\frac{Na_\lambda }{a_\lambda ^2-N}\bigg \rbrace Y_\lambda ^{(1)}-\frac{2N}{a_\lambda ^2-N}Y_\lambda . \end{aligned} \end{aligned}$$

Inserting these four quantities into (2.24) and collecting coefficients of \(Y_{\gamma _{k,N}}\), we get

$$\begin{aligned} \Omega _{\lambda ,\gamma _{k,N}}^{N}=\big [a_\lambda +a_{\gamma _{k,N}}\big ]\bigg \lbrace 1+ \frac{N}{[a_\lambda +\sqrt{N}][a_{\gamma _{k,N}}-\sqrt{N}]} \bigg \rbrace \big [Y_\lambda ^{(1)}+cY_\lambda \big ]Y_{\gamma _{k,N}}. \end{aligned}$$

But since, in view of (2.15)–(2.16),

$$\begin{aligned} \begin{aligned} Y_{\gamma _{k,N}}&=J_{\gamma _{k,N}}^{0,N}(1)=a_{\gamma _{k,N}}J_0(\gamma _{k,N})- \frac{N}{2} \gamma _{k,N}J_1(\gamma _{k,N})\\&=\frac{\sqrt{N}}{2}\big [a_{\gamma _{k,N}}-\sqrt{N}\big ]\gamma _{k,N}J_1(\gamma _{k,N}),\\ Y_\lambda ^{(1)}+cY_\lambda&=\big [c+2\big ]J_\lambda ^{0,N}(1)+(J_\lambda ^{0,N})'(1)= \varphi _N(\lambda )\big \lbrace \frac{2}{\sqrt{N}}a_\lambda +2\big \rbrace , \end{aligned} \end{aligned}$$

it follows that

$$\begin{aligned} \big \lbrace Y_\lambda ^{(1)}+cY_\lambda \big \rbrace Y_{\gamma _{k,N}}= \varphi _N(\lambda )\big [a_\lambda +\sqrt{N}\big ]\big [a_{\gamma _{k,N}} -\sqrt{N}\big ]\gamma _{k,N}J_1(\gamma _{k,N}) \end{aligned}$$

and thus

$$\begin{aligned} \Omega _{\lambda ,\gamma _{k,N}}^{N}= \big [a_\lambda +a_{\gamma _{k,N}}\big ]\big \lbrace a_\lambda \,a_{\gamma _{k,N}}-\sqrt{N}\big [a_\lambda -a_{\gamma _{k,N}} \big ]\big \rbrace \varphi _N(\lambda ) \gamma _{k,N}J_1(\gamma _{k,N}). \end{aligned}$$

Now divide both sides by \(\Lambda _{\gamma _{k,N}}^N-\Lambda _\lambda ^N\) and use the two identities

$$\begin{aligned} \begin{aligned}&\Lambda _\lambda ^N -\Lambda _{\gamma _{k,N}}^N=\bigg [1+\frac{N}{8}\big (\lambda ^2+ \gamma _{k,N}^2\big )\bigg ]\big [\lambda ^2-\gamma _{k,N}^2\big ]=\frac{1}{2}\big [a_\lambda +a_{\gamma _{k,N}}\big ]\big [\lambda ^2-\gamma _{k,N}^2\big ],\\&a_\lambda -a_{\gamma _{k,N}}=\frac{N}{4} \big [\lambda ^2-\gamma _{k,N}^2\big ]\end{aligned} \end{aligned}$$

to get

$$\begin{aligned} \begin{aligned} \big (J_\lambda ^{0,N},J_{\gamma _{k,N}}^{0,N}\big )_{\omega _N}&= \frac{ \Omega _{\lambda ,\gamma _{k,N}}^{N}}{\Lambda _{\gamma _{k,N}}^N-\Lambda _\lambda ^N }\\&=\frac{-2\varphi _N(\lambda ) \gamma _{k,N}J_1(\gamma _{k,N})}{\lambda ^2-\gamma _{k,N}^2} \bigg [a_\lambda \,a_{\gamma _{k,N}}-\frac{N^{3/2}}{4}\big (\lambda ^2-\gamma _{k,N}^2\big ) \bigg ]. \end{aligned} \end{aligned}$$

Dividing by the normalization constant \(h_k^N\) in (2.21), we arrive at the same result as in (2.23), which settles the proof of Thm. 2.2. \(\square \)

Corollary 2.1

Let the sampling functions \(S_k^{N}(\lambda ),\,k\in \mathbb {N}_0\), be given by (2.23) and let \(\{\gamma _{k,N}^{}\}_{k\in \mathbb {N}_0}\) denote the positive zeros of the function \(\varphi _N(\lambda )\) defined in (2.19). Setting \(G_N(\lambda ):=\varphi _N(\lambda )\,a_N(\lambda )\), where \(a_N(\lambda ):=a_\lambda =1+\frac{N}{4}\lambda ^2\), there holds

$$\begin{aligned} \begin{aligned} \text {(a)}\qquad&\varphi _N'(\gamma _{k,N})=-a_N(\gamma _{k,N}) J_1(\gamma _{k,N}) ,\quad G_N'(\gamma _{k,N}) = -a_N^2(\gamma _{k,N}) J_1(\gamma _{k,N}),\\ \text {(b)}\qquad&S_k^N(\lambda )=\frac{2\gamma _{k,N}G_N(\lambda )}{\big [\lambda ^2-\gamma _{k,N}^2\big ]G_N'(\gamma _{k,N})}-\frac{N^{3/2}}{2} \frac{\gamma _{k,N}\,\varphi _N(\lambda )}{a_N^2(\gamma _{k,N})\,\varphi _N'(\gamma _{k,N})}. \end{aligned} \end{aligned}$$

Proof

(a) In view of the differentiation formulas [9, 7.2.8(50),(51)],

$$\begin{aligned} D_\lambda \big [\lambda ^{-\alpha }J_\alpha (\lambda )\big ]=-\lambda ^{-\alpha }J_{\alpha +1}(\lambda ),\, D_\lambda \big [\lambda ^\alpha J_\alpha (\lambda )\big ]=\lambda ^\alpha J_{\alpha -1}(\lambda ), \end{aligned}$$

we have

$$\begin{aligned} D_\lambda J_0(\lambda )=-J_1(\lambda ),\,D_\lambda \big [\lambda J_1(\lambda )\big ]=\lambda J_0(\lambda ), \,\text {whence}\,\varphi _N'(\lambda )=-J_1(\lambda )-\frac{\sqrt{N}}{2}\lambda J_0(\lambda ). \end{aligned}$$

Since \(\varphi _N(\gamma _{k,N}) =J_0(\gamma _{k,N})-\frac{\sqrt{N}}{2}\gamma _{k,N} J_1(\gamma _{k,N})=0\), this implies

$$\begin{aligned} \begin{aligned} \varphi _N'(\gamma _{k,N})&=-\big [J_1(\gamma _{k,N})+\frac{\sqrt{N}}{2} \gamma _{k,N}\,J_0(\gamma _{k,N})\big ]\\&=-\big [1+\frac{N}{4}\gamma _{k,N}^2 \big ]J_1(\gamma _{k,N})= -a_N(\gamma _{k,N}) J_1(\gamma _{k,N}),\\ G_N'(\gamma _{k,N})&=\varphi _N'(\gamma _{k,N})\,a_N(\gamma _{k,N})+\varphi _N(\gamma _{k,N}) \,a_N'(\gamma _{k,N})=-a_N^2(\gamma _{k,N}) J_1(\gamma _{k,N}). \end{aligned} \end{aligned}$$

(b) The representation (2.23) of \(S_k^N(\lambda )\) yields

$$\begin{aligned} S_k^N(\lambda )=\frac{-\gamma _{k,N}G_N(\lambda )}{a_N^2(\gamma _{k,N}) J_1(\gamma _{k,N})}\, \frac{2}{\lambda ^2-\gamma _{k,N}^2}+\frac{N^{3/2}}{2} \frac{\gamma _{k,N}\,\varphi _N(\lambda )}{a_N^3(\gamma _{k,N})J_1(\gamma _{k,N})}. \end{aligned}$$

The result then follows by applying part (a). \(\square \)
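Part (a) is easy to confirm numerically (our addition): a central difference of \(\varphi _N\) at a computed zero must match \(-a_N(\gamma _{k,N})J_1(\gamma _{k,N})\).

```python
import math

def j0(x):  # Bessel J0 by power series
    s = t = 1.0
    for m in range(1, 80):
        t *= -(x * x / 4.0) / (m * m)
        s += t
    return s

def j1(x):  # Bessel J1 by power series
    s = t = x / 2.0
    for m in range(1, 80):
        t *= -(x * x / 4.0) / (m * (m + 1))
        s += t
    return s

def phi(N, lam):  # phi_N from (2.19)
    return j0(lam) - 0.5 * math.sqrt(N) * lam * j1(lam)

def first_zero(N):
    """First positive zero of phi_N by scanning plus bisection."""
    a, fa = 1e-6, phi(N, 1e-6)
    while True:
        b, fb = a + 0.01, phi(N, a + 0.01)
        if fa * fb < 0:
            break
        a, fa = b, fb
    for _ in range(100):
        m = 0.5 * (a + b)
        if phi(N, a) * phi(N, m) <= 0: b = m
        else: a = m
    return 0.5 * (a + b)

def phi_prime_fd(N, lam, h=1e-6):  # central difference approximation to phi_N'
    return (phi(N, lam + h) - phi(N, lam - h)) / (2.0 * h)

for N in (1.0, 4.0):
    g = first_zero(N)
    print(N, phi_prime_fd(N, g), -(1.0 + N * g * g / 4.0) * j1(g))
```

The two printed values agree to roughly the accuracy of the finite difference, as Cor. 2.1(a) predicts.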

Remark 2.1

For \(N=0\), Thm. 2.2 reduces to the Kramer-type theorem associated with the classical Fourier–Bessel functions in the case \(\alpha =0\). With kernel \(K(x,\lambda )=J_\lambda ^0(x)\) and \(\{\gamma _k\}_{k\in \mathbb {N}_0}\) being the positive zeros of \(J_0(\lambda )\), the sampling expansion (2.22)–(2.23) reduces to

$$\begin{aligned} F(\lambda )=\sum _{k=0}^{\infty }F(\gamma _k^{})S_k^0(\lambda ),\,S_k^0(\lambda )= \frac{-2\gamma _k J_0(\lambda )}{[\lambda ^2-\gamma _k^2]\,J_1(\gamma _k)} \equiv \frac{2\gamma _k J_0(\lambda )}{[\lambda ^2-\gamma _k^2]\,J_0'(\gamma _k)} \,. \end{aligned}$$
(2.25)

Hence, this expansion furnishes an interpolation formula in accordance with Cor. 2.1(b).
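The classical case (2.25) can be illustrated with a toy transform (our construction, not from the paper): for \(g\equiv 1\), \(F(\lambda )=2\int _0^1J_0(\lambda x)x\,dx=2J_1(\lambda )/\lambda \). The sketch below checks the interpolation property \(S_k^0(\gamma _n)=\delta _{k,n}\) and reconstructs \(F\) from a few samples; note that the terms decay only like \(\mathcal {O}(\gamma _k^{-2})\), so a short truncation is accurate to a couple of percent only.

```python
import math

def j0(x):  # Bessel J0 by power series (adequate up to x ~ 25)
    s = t = 1.0
    for m in range(1, 80):
        t *= -(x * x / 4.0) / (m * m)
        s += t
    return s

def j1(x):  # Bessel J1 by power series
    s = t = x / 2.0
    for m in range(1, 80):
        t *= -(x * x / 4.0) / (m * (m + 1))
        s += t
    return s

def j0_zeros(count, step=0.01):
    """First `count` positive zeros of J0 via sign-change scan plus bisection."""
    out, a, fa = [], 1.0, j0(1.0)
    while len(out) < count:
        b, fb = a + step, j0(a + step)
        if fa * fb < 0:
            x, y = a, b
            for _ in range(100):
                m = 0.5 * (x + y)
                if j0(x) * j0(m) <= 0: y = m
                else: x = m
            out.append(0.5 * (x + y))
        a, fa = b, fb
    return out

gamma = j0_zeros(8)

def S(k, lam):
    # classical sampling function S_k^0 from (2.25)
    return -2.0 * gamma[k] * j0(lam) / ((lam * lam - gamma[k] ** 2) * j1(gamma[k]))

def F(lam):
    # toy transform of g == 1: F(lam) = 2 * int_0^1 J0(lam*x) x dx = 2 J1(lam)/lam
    return 2.0 * j1(lam) / lam

approx = sum(F(g) * S(k, 2.0) for k, g in enumerate(gamma))
print(approx, F(2.0))  # agrees to a few percent with only 8 samples
```
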

3 Sampling theory associated with the Laguerre equation

In order to establish a Kramer-type sampling theorem based on the Laguerre equation (1.6),(1.8), it is natural to look for an appropriate Kramer kernel \(K(x,\Lambda ),\,0<x<\infty \), with continuous eigenvalue parameter \(\Lambda \), i.e.,

$$\begin{aligned} \begin{aligned} \mathcal {L}_x^\alpha y(x):=&e^x x^{-\alpha }D_x\big [e^{-x}x^{\alpha +1}D_x y(x) \big ]\\ \equiv&[xD_x^2+(\alpha +1-x)D_x]y(x)=-\Lambda \,y(x). \end{aligned} \end{aligned}$$
(3.1)

This is a confluent hypergeometric equation [9, Chap. 6], whose two linearly independent solutions are denoted by \(\Phi (-\Lambda ,\alpha +1;x)\equiv {}_1F_1(-\Lambda ;\alpha +1;x)\) and \(\Psi (-\Lambda ,\alpha +1;x)\), respectively. When transforming Eq. (3.1) into Whittaker’s standard form [9, 6.1(4)] and setting \(\Lambda =\lambda -(\alpha +1)/2\), the two resulting solutions are the Whittaker functions

$$\begin{aligned} \begin{aligned}&M_{\lambda ,\alpha /2}=e^{-x/2}x^{(\alpha +1)/2}\Phi \left( \frac{\alpha +1}{2}-\lambda ,\alpha +1;x\right) ,\\&W_{\lambda ,\alpha /2}=e^{-x/2}x^{(\alpha +1)/2}\Psi \left( \frac{\alpha +1}{2}-\lambda ,\alpha +1;x\right) . \end{aligned} \end{aligned}$$

In Example 6 of his treatise on sampling theorems, Zayed [35] used a slightly different version of the Whittaker equation to introduce his so-called continuous Laguerre transform. But while the function

$$\begin{aligned} \Phi _\lambda ^\alpha (x):=\Phi \left( \frac{\alpha +1}{2}-\lambda ,\alpha +1;x \right) ,\,\lambda \ge \frac{\alpha +1}{2}, \end{aligned}$$
(3.2)

obviously reduces to the (normalized) Laguerre polynomials \(R_n^\alpha (x)\), if \(\lambda =n+(\alpha +1)/2,\,n\in \mathbb {N}_0\), it was pointed out in [35] that, in general,

$$\begin{aligned} \begin{aligned}&\Phi _\lambda ^\alpha (x)\not \in L_{w_\alpha }^2(0,\infty )\,\text {for any}\,\alpha>-1,\\&\Psi _\lambda ^\alpha (x)\in {\left\{ \begin{array}{ll}L_{w_\alpha }^2(0,\infty ),\,\text {if}\,-1<\alpha <1,\\ L_{w_\alpha }^2(\delta ,\infty ),\,\text {if}\,\alpha>-1,\,\delta >0.\\ \end{array}\right. } \end{aligned} \end{aligned}$$
(3.3)

Consequently, Zayed used the second function \(\Psi _\lambda ^\alpha (x)\) as a Kramer kernel on which to build a sampling theorem, provided that the function space is suitably restricted to justify the absolute convergence of the integral

$$\begin{aligned} F(\lambda )=\int _{0}^{\infty }f(x)\,e^{-x/2}x^{\alpha /2} \Psi (\frac{\alpha +1}{2}-\lambda ,\alpha +1;x)dx. \end{aligned}$$

Nevertheless, in the case of the regular solution \(\Phi _\lambda ^\alpha (x)\), there have also been attempts to circumvent the lack of convergence. In particular, Jerri [24] derived a kind of sampling expansion by means of a Laguerre transform with kernel

$$\begin{aligned} \Phi (-\lambda ,\alpha +1;\nu x/(\nu -1)),\,\alpha >-1,\,\lambda \ge 0,\,-1<\nu <1. \end{aligned}$$

In the following, we propose a different approach to a Laguerre sampling theorem, now for functions on the bounded interval (0, 1]. To this end, we make use of the Green formula (1.13) associated with the differential operator \(\mathcal {L}_x^\alpha \) defined in (3.1). Observing that for any \((\alpha +1)/2\le \lambda ,\mu <\infty \), the two functions \(\Phi _\lambda ^\alpha ,\,\Phi _\mu ^\alpha \) belong to \(L_{w_\alpha }^2((0,1])\), we obtain

$$\begin{aligned} \begin{aligned}&\frac{1}{\Gamma (\alpha +1)}\int _{0+}^{1}\big \lbrace (\mathcal {L}_x^\alpha \Phi _\lambda ^\alpha )(x) \Phi _\mu ^\alpha (x)-\Phi _\lambda ^\alpha (x)(\mathcal {L}_x^\alpha \Phi _\mu ^\alpha )(x)\big \rbrace e^{-x}x^\alpha dx\\&\quad =\big [\Phi _\lambda ^\alpha ,\Phi _\mu ^\alpha \big ]_1\!-\! \big [\Phi _\lambda ^\alpha ,\Phi _\mu ^\alpha \big ]_{0+} =\frac{e^{-x}x^{\alpha +1}}{\Gamma (\alpha +1)} \lbrace (\Phi _\lambda ^\alpha )'(x)\Phi _\mu ^\alpha (x)\!-\!\Phi _\lambda ^\alpha (x)(\Phi _\mu ^\alpha )'(x) \rbrace \big \vert _{x=1}, \end{aligned} \end{aligned}$$
(3.4)

since \(\lim _{a\rightarrow 0+}\big [\Phi _\lambda ^\alpha ,\Phi _\mu ^\alpha \big ]_a=0\) for any \(\alpha >-1\). The task is now to find a condition under which the resulting value at \(x=1\) vanishes, as well. This is indeed possible by restricting the eigenvalue parameters according to the following feature.

Lemma 3.1

For \(\alpha >-1,\,\lambda \ge (\alpha +1)/2\), define the function

$$\begin{aligned} \psi ^\alpha (\lambda ):=\Phi _\lambda ^\alpha (1)=\sum _{j=0}^{\infty }\frac{(\frac{\alpha +1}{2}-\lambda )_j}{(\alpha +1)_j j!}. \end{aligned}$$
(3.5)
  1. (a)

    \(\psi ^\alpha (\lambda )\) has an unbounded sequence of strictly increasing zeros, say \(\{\delta _k^\alpha \}_{k\in \mathbb {N}_0}\).

  2. (b)

    With \(\{\gamma _k^\alpha \}_{k\in \mathbb {N}_0}\) denoting the positive, increasing zeros of the Bessel function \(J_\alpha (\lambda )\), the zeros of \(\psi ^\alpha (\lambda )\) possess the following lower and upper bounds for \(k\in \mathbb {N}_0,\)

    $$\begin{aligned} \frac{(\gamma _k^\alpha )^2}{4}<\delta _k^\alpha <\big (k+\frac{\alpha +3}{2}\big )\bigg \lbrace 2k+\alpha +3+\sqrt{(2k+\alpha +3)^2+\frac{1}{4}-\alpha ^2}\bigg \rbrace +\frac{1}{4}. \end{aligned}$$
    (3.6)
  3. (c)

    The lower bound in (3.6) is asymptotically sharp as \(k\rightarrow \infty \) in the sense that

    $$\begin{aligned} \lim _{k\rightarrow \infty }\psi ^\alpha \left( \frac{1}{4}(\gamma _k^\alpha )^2\right) =0. \end{aligned}$$

Proof

(a) To investigate the zeros of the function \(\psi ^\alpha (\lambda )\), we make use of its close relationship to the Meixner polynomials and their zeros. These polynomials are defined (with two different normalizations) by, see e.g. [6, 18.18–24],

$$\begin{aligned} M_n(x;\beta ,c)\equiv&(\beta )_n^{-1}m_n(x;\beta ,c) \nonumber \\ :=&{}_2F_1 \left( -n,-x;\beta ;1-\frac{1}{c}\right) ,\,x,n\in \mathbb {N}_0,\,\beta >0,\,0<c<1, \end{aligned}$$
(3.7)

and form an orthogonal system with respect to the discrete measure \((\beta )_xc^x/x!\), \(x\in \mathbb {N}_0\). Now let \(\beta =\alpha +1\) and choose \(c=c_n:=n/(n+1)\in (0,1)\), where \(n\in \mathbb {N}\) coincides with the degree of the Meixner polynomial \(M_n\). Substituting, moreover, \(x=\lambda -(\alpha +1)/2\), we obtain

$$\begin{aligned} \psi ^\alpha (\lambda )=\lim _{n\rightarrow \infty }\sum _{j=0}^{n}\frac{(\frac{\alpha +1}{2}-\lambda )_j}{(\alpha +1)_jj!}= \lim _{n\rightarrow \infty }M_n \left( \lambda -\frac{\alpha +1}{2};\alpha +1,\frac{n}{n+1}\right) . \end{aligned}$$
(3.8)

In fact, the inequality

$$\begin{aligned} \begin{aligned}&\big \vert \psi ^\alpha (\lambda )-M_n\left( \lambda -\frac{\alpha +1}{2};\alpha +1,\frac{n}{n+1}\right) \big \vert =\big \vert \sum _{j=0}^{\infty }\frac{(\frac{\alpha +1}{2}-\lambda )_j}{(\alpha +1)_jj!} \left\{ 1-\frac{(-n)_j}{(-n)^j}\right\} \big \vert \\&\quad \le \left\{ \sum _{j=2}^{N}+\sum _{j=N+1}^{\infty } \right\} \frac{\vert \left( \frac{\alpha +1}{2}-\lambda \right) _j\vert }{(\alpha +1)_jj!} \big \vert 1-\frac{(-n)_j}{(-n)^j}\big \vert =:S_N^1(\lambda )+S_N^2(\lambda )<\epsilon \end{aligned} \end{aligned}$$

is satisfied for any \(\epsilon >0\), provided that n is chosen large enough. To see this, just fix \(N=N(\epsilon ,\alpha ,\lambda )>2\) such that

$$\begin{aligned} S_N^2(\lambda )<2\sum _{j=N+1}^{\infty } \frac{\vert (\frac{\alpha +1}{2}-\lambda )_j\vert }{(\alpha +1)_jj!}<\epsilon /2, \end{aligned}$$

and determine a constant \(C=C(N)\), for which

$$\begin{aligned} \big \vert 1-\frac{(-n)_j}{(-n)^j}\big \vert = \big \vert 1-\left( 1-\frac{1}{n}\right) \left( 1-\frac{2}{n}\right) \cdots \left( 1-\frac{j-1}{n}\right) \big \vert \le \frac{C}{n},\,\text {if}\,j\le N<n. \end{aligned}$$

Then there exists an \(N_1>N\) such that for any \(n>N_1\),

$$\begin{aligned} S_N^1(\lambda )<\frac{C}{n}\sum _{j=2}^{N} \frac{\vert \left( \frac{\alpha +1}{2}-\lambda \right) _j\vert }{(\alpha +1)_jj!}<\epsilon /2. \end{aligned}$$

Moreover it is well known that each Meixner polynomial \(M_n(x;\beta ,c)\) has n distinct zeros on the positive half-line, which have been studied in detail by various authors, see, e.g., [8, 22, 23, 25, 26]. So, if for any \(n\in \mathbb {N}\), the zeros of \(M_n(x;\alpha +1,c_n)\) are denoted by \(M_{n,1}^{\alpha ,c_n}<M_{n,2}^{\alpha ,c_n}<\cdots<M_{n,n}^{\alpha ,c_n}<\infty \), the required behaviour of the zeros \(\{\delta _k^\alpha \}_{k\in \mathbb {N}_0}\) follows by taking the limit

$$\begin{aligned} \delta _{k-1}^\alpha =\frac{\alpha +1}{2}+ \lim _{n\rightarrow \infty }M_{n,k}^{\alpha ,c_n}, \,k\in \mathbb {N}. \end{aligned}$$
(3.9)

(b) As to the distribution of these values, we can make use of the limit relation between the Meixner and Laguerre polynomials, see [6, 18.21.8] and also [22, Sec.3],[23, Sec.6],

$$\begin{aligned} \lim _{c\rightarrow 1-}M_n(\frac{x}{1-c};\alpha +1,c)={}_1F_1(-n;\alpha +1;x)=R_n^\alpha (x). \end{aligned}$$

In particular, given the zeros of \(R_n^\alpha (x)\) by \(l_{n,1}^\alpha<\cdots<l_{n,k}^\alpha<\cdots <l_{n,n}^\alpha \), it was shown in [23, Thm. 6.2] that

$$\begin{aligned} l_{n,k}^\alpha<\frac{1-c}{\sqrt{c}} M_{n,k}^{\alpha ,c}+(\alpha +1)(1-\sqrt{c})< l_{n,k}^\alpha +\frac{n-1}{\sqrt{c}}(1-\sqrt{c})^2,\,k=1,\dots ,n. \end{aligned}$$
(3.10)

So if \(c=c_n:=\frac{n}{n+1}\), the estimates (3.10) imply that

$$\begin{aligned} \sqrt{n(n+1)}\,l_{n,k}^\alpha< M_{n,k}^{\alpha ,c_n}+\frac{\alpha +1}{1+\sqrt{\frac{n+1}{n}}}< \sqrt{n(n+1)}\,l_{n,k}^\alpha +\frac{n-1}{(n+1)(1+\sqrt{\frac{n}{n+1}})^2}. \end{aligned}$$
(3.11)

The zeros \(l_{n,k}^\alpha \) of the Laguerre polynomials, in turn, can be estimated, partly in terms of the zeros of the Bessel function \(J_\alpha (\lambda )\), by

$$\begin{aligned} \frac{(\gamma _k^\alpha )^2}{4n+2\alpha +2}<l_{n,k}^\alpha <\frac{4k+2\alpha +2}{4n+2\alpha +2} \big \lbrace 2k+\alpha +1+\sqrt{(2k+\alpha +1)^2+\frac{1}{4}-\alpha ^2}\big \rbrace , \end{aligned}$$

see [6, 18.16.10–11]. When inserting these lower and upper bounds into the two-sided inequality (3.11) and taking the limit \(n\rightarrow \infty \), we finally arrive at the estimate (3.6) by virtue of the limit relation (3.9).

(c)   By definition of \(\psi ^\alpha \), it follows that

$$\begin{aligned} \begin{aligned} \lim _{k\rightarrow \infty }\psi ^\alpha \big (\frac{1}{4}(\gamma _k^\alpha )^2\big )&= \lim _{k\rightarrow \infty } {}_1F_1(\frac{\alpha +1}{2}-\frac{(\gamma _k^\alpha )^2}{4};\alpha +1;1)\\&=\lim _{\lambda =(\gamma _k^\alpha )^2/4\rightarrow \infty } {}_1F_1(\frac{\alpha +1}{2}-\lambda ;\alpha +1;\frac{(\gamma _k^\alpha )^2}{4\lambda })\\&={}_0F_1(-;\alpha +1;-\frac{(\gamma _k^\alpha )^2}{4})=\Gamma (\alpha +1)\big (\gamma _k^\alpha /2\big )^{-\alpha }J_\alpha (\gamma _k^\alpha )=0. \end{aligned} \end{aligned}$$

\(\square \)

More refined asymptotic formulas for the zeros of the Meixner polynomials can be found in [25, Thms.1–2], while a sharp upper bound for the highest zero of a Meixner polynomial was given already in [22, Thm. 6]. As to the Laguerre polynomials, various inequalities for their smallest and largest zeros are presented in [8, 3.3–4].
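To make Lemma 3.1 concrete, here is a numerical sketch (our addition) for \(\alpha =0\): it evaluates the series (3.5) directly, locates the first zeros \(\delta _k^0\), and checks them against the two-sided bound (3.6), using the known zeros \(\gamma _k^0\approx 2.404826,\,5.520078,\,8.653728\) of \(J_0\).

```python
import math

def psi(alpha, lam, terms=200):
    """psi^alpha(lam) from (3.5): sum_j ((alpha+1)/2 - lam)_j / ((alpha+1)_j j!)."""
    s = t = 1.0
    a = 0.5 * (alpha + 1.0) - lam
    for j in range(terms):
        t *= (a + j) / ((alpha + 1.0 + j) * (j + 1.0))
        s += t
    return s

def psi_zeros(alpha, count, step=0.02):
    """First `count` zeros of psi^alpha via sign-change scan plus bisection."""
    out, x, fx = [], 0.25, psi(alpha, 0.25)
    while len(out) < count:
        y, fy = x + step, psi(alpha, x + step)
        if fx * fy < 0:
            lo, hi = x, y
            for _ in range(100):
                m = 0.5 * (lo + hi)
                if psi(alpha, lo) * psi(alpha, m) <= 0: hi = m
                else: lo = m
            out.append(0.5 * (lo + hi))
        x, fx = y, fy
    return out

bessel0 = (2.404826, 5.520078, 8.653728)   # zeros gamma_k^0 of J_0
delta = psi_zeros(0.0, 3)
for k, d in enumerate(delta):
    lower = bessel0[k] ** 2 / 4.0
    t = 2 * k + 3.0                         # (3.6) with alpha = 0
    upper = (k + 1.5) * (t + math.sqrt(t * t + 0.25)) + 0.25
    print(k, round(d, 6), lower < d < upper)
```

The printed values also illustrate the observation made below for \(\alpha =0\): the quantities \(2\sqrt{\delta _k^0}\) lie just above the Bessel zeros \(\gamma _k^0\).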

Proposition 3.1

Let \(\{\delta _k^\alpha \}_{k\in \mathbb {N}_0}\) be the strictly increasing zeros of the function \(\psi ^\alpha (\lambda )\) in (3.5). Then the functions

$$\begin{aligned} \hat{\Phi }_k^\alpha (x):=\Phi _{\delta _k^\alpha }^\alpha (x)= \Phi (\frac{\alpha +1}{2}-\delta _k^\alpha ,\alpha +1;x),\,k\in \mathbb {N}_0, \end{aligned}$$

satisfy the orthogonality relation

$$\begin{aligned} \begin{aligned}&\big (\hat{\Phi }_k^\alpha ,\hat{\Phi }_n^\alpha \big )_{w_\alpha }:= \frac{1}{\Gamma (\alpha +1)}\int _{0+}^{1}\hat{\Phi }_k^\alpha (x)\hat{\Phi }_n^\alpha (x) e^{-x}x^\alpha dx=\hat{h}_k^\alpha \, \delta _{k,n},\,k,n\in \mathbb {N}_0, \\&\hat{h}_k^\alpha := \frac{1}{e\,\Gamma (\alpha +1)}\sum _{j=1}^{\infty } \frac{(\frac{\alpha +1}{2}-\delta _k^\alpha )_j}{(\alpha +1)_j(j-1)!}\cdot \sum _{r=1}^{\infty } \frac{(\frac{\alpha +1}{2}-\delta _k^\alpha )_r}{(\alpha +1)_r r!} \sum _{s=0}^{r-1}\frac{1}{\delta _k^\alpha -\frac{\alpha +1}{2}-s}. \end{aligned} \end{aligned}$$
(3.12)

Proof

Proceeding from the Green-type formula (3.4) and assuming that \(\lambda =\delta _k^\alpha \) and \(\mu =\delta _n^\alpha \) are two distinct zeros of \(\psi ^\alpha \), the right-hand side of (3.4) vanishes since \(\hat{\Phi }_k^\alpha (1)=\hat{\Phi }_n^\alpha (1)=0\). Due to Eq.(3.1), we further replace

$$\begin{aligned} \big (\mathcal {L}_x^\alpha \hat{\Phi }_j^\alpha \big )(x)= (\frac{\alpha +1}{2}-\delta _j^\alpha )\,\hat{\Phi }_j^\alpha (x)\,\text {for}\,j=k,\,n, \end{aligned}$$

under the integral. Then we arrive at \((\delta _n^\alpha -\delta _k^\alpha )\big (\hat{\Phi }_k^\alpha ,\hat{\Phi }_n^\alpha \big )_{w_\alpha }=0\), which yields the orthogonality of the two eigenfunctions. As to the normalization constant \(\hat{h}_k^\alpha \), it follows again by (3.4) that

$$\begin{aligned} \begin{aligned} \Vert \hat{\Phi }_k^\alpha \Vert _{w_\alpha }^2&= \lim _{\lambda \rightarrow \delta _k^\alpha } \frac{1}{\Gamma (\alpha +1)}\int _{0+}^{1}\Phi _\lambda ^\alpha (x)\hat{\Phi }_k^\alpha (x) e^{-x}x^\alpha dx\\&= \lim _{\lambda \rightarrow \delta _k^\alpha } \frac{1}{e\,\Gamma (\alpha +1)} \frac{\Phi _\lambda ^\alpha (1)(\hat{\Phi }_k^\alpha )'(1)}{\lambda -\delta _k^\alpha } = \frac{(\hat{\Phi }_k^\alpha )'(1)}{e\,\Gamma (\alpha +1)}\lim _{\lambda \rightarrow \delta _k^\alpha } \frac{\psi ^\alpha (\lambda )}{\lambda -\delta _k^\alpha },\\ \text {where}\quad (\hat{\Phi }_k^\alpha )'(1)&=\sum _{j=1}^{\infty } \frac{(\frac{\alpha +1}{2}-\delta _k^\alpha )_j}{(\alpha +1)_j(j-1)!} \quad \text {and}\\ \lim _{\lambda \rightarrow \delta _k^\alpha } \frac{\psi ^\alpha (\lambda )}{\lambda -\delta _k^\alpha }&= (\psi ^\alpha )'(\delta _k^\alpha )=\frac{d}{d\lambda }\sum _{r=0}^{\infty } \frac{(\frac{\alpha +1}{2}-\lambda )_r}{(\alpha +1)_r\, r!}\,\big \vert _{\lambda =\delta _k^\alpha }\\&=\sum _{r=1}^{\infty } \frac{1}{(\alpha +1)_r r!}\frac{d}{d\lambda } \prod _{s=0}^{r-1}(s+\frac{\alpha +1}{2}-\lambda )\,\big \vert _{\lambda =\delta _k^\alpha }\\&=\sum _{r=1}^{\infty } \frac{1}{(\alpha +1)_r r!} \sum _{s=0}^{r-1}\frac{(\frac{\alpha +1}{2}-\delta _k^\alpha )_r}{\delta _k^\alpha -\frac{\alpha +1}{2}-s}. \end{aligned} \end{aligned}$$

This yields the representation (3.12). \(\square \)
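The orthogonality in (3.12) can likewise be checked numerically (our addition, for \(\alpha =0\)): with the first two zeros \(\delta _0,\delta _1\) of \(\psi ^0\), Simpson's rule on \(\int _0^1\hat{\Phi }_0^0(x)\hat{\Phi }_1^0(x)e^{-x}dx\) should return zero to integration accuracy, and the squared norm should match the limit formula \(\Vert \hat{\Phi }_k^0\Vert ^2=e^{-1}(\hat{\Phi }_k^0)'(1)(\psi ^0)'(\delta _k)\) established in the proof.

```python
import math

def kummer(a, x, terms=150):
    """Phi(a, 1; x) for alpha = 0, by its series sum_j (a)_j x^j / (j!)^2."""
    s = t = 1.0
    for j in range(terms):
        t *= (a + j) * x / ((j + 1.0) * (j + 1.0))
        s += t
    return s

def psi(lam):                        # psi^0(lam) = Phi(1/2 - lam, 1; 1), cf. (3.5)
    return kummer(0.5 - lam, 1.0)

def psi_zeros(count, step=0.02):     # scan for sign changes, then bisect
    out, x, fx = [], 0.25, psi(0.25)
    while len(out) < count:
        y, fy = x + step, psi(x + step)
        if fx * fy < 0:
            lo, hi = x, y
            for _ in range(100):
                m = 0.5 * (lo + hi)
                if psi(lo) * psi(m) <= 0: hi = m
                else: lo = m
            out.append(0.5 * (lo + hi))
        x, fx = y, fy
    return out

def inner_w0(da, db, n=400):
    """int_0^1 Phi_hat_a(x) Phi_hat_b(x) e^{-x} dx by Simpson's rule (Gamma(1) = 1)."""
    s = 0.0
    for i in range(n + 1):
        x = i / n
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * kummer(0.5 - da, x) * kummer(0.5 - db, x) * math.exp(-x)
    return s / (3.0 * n)

def fd(f, x, h=1e-5):                # central difference
    return (f(x + h) - f(x - h)) / (2.0 * h)

d0, d1 = psi_zeros(2)
off_diag = inner_w0(d0, d1)
norm_sq = inner_w0(d0, d0)
norm_limit = fd(lambda x: kummer(0.5 - d0, x), 1.0) * fd(psi, d0) / math.e
print(off_diag, norm_sq, norm_limit)
```
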

In the particular case \(\alpha =0\), for example, we determined the first zeros \(\delta _k\) of \(\psi ^0(\lambda )=\Phi (1/2-\lambda ,1;1)\) via MAPLE (see Table 2) and verified that the corresponding eigenfunctions are pairwise orthogonal in the space \(L_{w_0}^2((0,1])\).

Table 2 Numerical values

Moreover, when setting \(\epsilon _k:=2\sqrt{\delta _k},\,k\in \mathbb {N}_0\), we found that the first values of \(\epsilon _k\) are only slightly larger than the zeros \(\gamma _{k,N=0}\) of the Bessel function \(J_0(\lambda )\) in Table 1.

Analogously to the Fourier–Bessel functions, we call \(\{\hat{\Phi }_k^\alpha (x)\}_{k\in \mathbb {N}_0}\) the Fourier–Laguerre functions. By spectral-theoretic arguments [32], cf. also [13], these functions form a complete orthogonal system in the Hilbert space \(L_{w_\alpha }^2((0,1])\).

Employing Kramer’s Theorem 1.1, we achieve the following result.

Theorem 3.1

For any \(\lambda \ge (\alpha +1)/2\), let a Kramer kernel be given by the Whittaker function

$$\begin{aligned} K(x,\lambda )=\Phi _\lambda ^\alpha (x)=\Phi (\frac{\alpha +1}{2}-\lambda ,\alpha +1;x),\,0\le x\le 1, \end{aligned}$$

and let \(\{\delta _k^\alpha \}_{k\in \mathbb {N}_0}\) be the strictly increasing zeros of the function \(\psi ^\alpha (\lambda )\) in (3.5). Clearly, \(K(\cdot ,\lambda )\in L_{w_\alpha }^2((0,1])\), and the functions \(\{K(x,\delta _k^\alpha )\}_{k\in \mathbb {N}_0}\) coincide with the complete orthogonal system \(\{\hat{\Phi }_k^\alpha (x)\}_{k\in \mathbb {N}_0}\) in the space by virtue of Prop. 3.1.

  1. (a)

    If a function \(F(\lambda ),\,\lambda \ge \frac{\alpha +1}{2}\), can be given, for some function \(g\in L_{w_\alpha }^2((0,1])\), as

    $$\begin{aligned} F(\lambda )=\frac{1}{\Gamma (\alpha +1)}\int _{0}^{1}K(x,\lambda )\overline{g(x)}e^{-x} x^\alpha dx, \end{aligned}$$

    then F can be reconstructed from its samples in terms of an absolutely convergent series,

    $$\begin{aligned} F(\lambda )=\sum _{k=0}^{\infty }F(\delta _k^\alpha ) S_k^\alpha (\lambda ),\,S_k^\alpha (\lambda )= (\hat{h}_k^\alpha )^{-1}\big (\Phi _\lambda ^\alpha ,\hat{\Phi }_k^\alpha \big )_{w_\alpha }. \end{aligned}$$
    (3.13)
  2. (b)

    The sampling functions are given by

    $$\begin{aligned} \begin{aligned} S_k^\alpha (\lambda )=&\frac{\psi ^\alpha (\lambda )}{(\lambda -\delta _k^\alpha )(\psi ^\alpha )'(\delta _k^\alpha )}, \,\text {where}\\ (\psi ^\alpha )'(\delta _k^\alpha )=&\sum _{j=1}^{\infty } \frac{(\frac{\alpha +1}{2}-\delta _k^\alpha )_j}{(\alpha +1)_j j!} \sum _{r=0}^{j-1}\frac{1}{\delta _k^\alpha -\frac{\alpha +1}{2}-r}. \end{aligned} \end{aligned}$$
    (3.14)

    Hence, the sampling series (3.13) furnishes a Lagrange interpolation formula according to (1.4) with \(G(\lambda ):=\psi ^\alpha (\lambda ).\)

Motivated by the result in Sec. 2 associated with the Fourier–Bessel-type functions, one may think of a Kramer-type theory associated with the Laguerre-type polynomials \(\{L_n^{\alpha ,N}(x)\}_{n\in \mathbb {N}_0}\), as well. When normalized in the same way as their classical counterparts \(R_n^\alpha (x)\) in (1.5), the Laguerre-type polynomials possess the representations, see [27, 28, 31, 34, Sec. 3.3],

$$\begin{aligned} \begin{aligned}&R_n^{\alpha ,N}(x):=\frac{n!}{(\alpha +1)_n}L_n^{\alpha ,N}(x)= a_n^{\alpha ,N}R_n^\alpha (x)-N\,b_n^\alpha R_{n-1}^{\alpha +1}(x)\\&\quad = a_n^{\alpha ,N}R_n^\alpha (x)+N\,b_{n+1}^{\alpha -1} (R_n^\alpha )'(x)=R_n^\alpha (x)-\frac{N}{\alpha +1}b_n^{\alpha +1}x\,R_{n-1}^{\alpha +2}(x),\\&\quad \quad \text {where}\,a_n^{\alpha ,N}:=1+N\,b_n^\alpha ,\, b_n^\alpha :=\frac{(\alpha +2)_{n-1}}{(n-1)!}. \end{aligned} \end{aligned}$$
(3.15)
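The equivalence of the three representations in (3.15) can be spot-checked numerically. The following sketch (not part of the paper; the values \(\alpha =2\), \(N=0.3\), \(n=5\) and the evaluation points are arbitrary choices) compares them with mpmath:

```python
# Illustrative check (not from the paper) of the three representations in (3.15).
from mpmath import mp, hyp1f1, rf, factorial, diff

mp.dps = 25
alpha, n = 2, 5          # arbitrary test case; alpha >= 1 so that b_{n+1}^{alpha-1} is defined
N = mp.mpf('0.3')

def R(n, a, x):
    # normalized Laguerre polynomial R_n^a(x) = Phi(-n, a+1; x)
    return hyp1f1(-n, a + 1, x)

def b(n, a):
    # b_n^alpha = (alpha+2)_{n-1} / (n-1)!
    return rf(a + 2, n - 1) / factorial(n - 1)

aN = 1 + N * b(n, alpha)  # a_n^{alpha,N}

vals = []
for x in (mp.mpf('0.3'), mp.mpf('1.0'), mp.mpf('2.7')):
    f1 = aN * R(n, alpha, x) - N * b(n, alpha) * R(n - 1, alpha + 1, x)
    f2 = aN * R(n, alpha, x) + N * b(n + 1, alpha - 1) * diff(lambda t: R(n, alpha, t), x)
    f3 = R(n, alpha, x) - N / (alpha + 1) * b(n, alpha + 1) * x * R(n - 1, alpha + 2, x)
    vals.append((f1, f2, f3))
```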

Moreover, they form an orthogonal polynomial system in the space \(L_{w_{\alpha ,N}}^2(0,\infty )\) with respect to the inner product \((f,g)_{w_{\alpha ,N}}\) defined in (1.10), with orthogonality relation, for \(k,n\in \mathbb {N}_0\),

$$\begin{aligned} (R_k^{\alpha ,N},R_n^{\alpha ,N})_{w_{\alpha ,N}}=h_k^{\alpha ,N}\delta _{k,n},\, h_k^{\alpha ,N} =\frac{k!}{(\alpha +1)_k}[1+Nb_k^\alpha ][1+Nb_{k+1}^\alpha ]. \end{aligned}$$

Proposition 3.2

[31] For \(\alpha \in \mathbb {N}_0,\,N>0\), and \(0<x<\infty ,\) the Laguerre-type polynomials \(\{R_n^{\alpha ,N}(x)\}_{n\in \mathbb {N}_0}\) satisfy the differential equation

$$\begin{aligned} \mathcal {L}_x^{\alpha ,N} R_n^{\alpha ,N}(x):=\big \lbrace \mathcal {L}_x^\alpha +\dfrac{N}{(\alpha +2)!} \mathcal {T}_x^\alpha \big \rbrace R_n^{\alpha ,N}(x)=-\lambda _n^{\alpha ,N} R_n^{\alpha ,N}(x), \end{aligned}$$
(3.16)

where \(\mathcal {L}_x^\alpha \) is the classical operator in (3.1) and \(\mathcal {T}_x^\alpha \) denotes the differential operator

$$\begin{aligned} \mathcal {T}_x^\alpha y(x)=(-1)^{\alpha +1}e^xx\,D_x^{\alpha +2}\big \lbrace e^{-x}D_x^{\alpha +2}[x^{\alpha +1}y(x)]\big \rbrace . \end{aligned}$$
(3.17)

Furthermore, the eigenvalue parameter has the two components

$$\begin{aligned} \lambda _n^{\alpha ,N}= n+N\frac{(n)_{\alpha +2}}{(\alpha +2)!},\,n\in \mathbb {N}_0. \end{aligned}$$

As was shown by Wellman [34, Sec. 5], an equivalent form of the differential expression \( \mathcal {L}_x^{\alpha ,N}\) gives rise to a unique self-adjoint operator \( \mathcal {A}_x^{\alpha ,N}\) for functions in a suitably defined domain \( D(\mathcal {A}_x^{\alpha ,N})\subset L_{w_{\alpha ,N}}^2(0,\infty )\) with \((\mathcal {A}_x^{\alpha ,N}f)(0)=(\alpha +1)f'(0)\). Moreover, the so-called right-definite space \(L_{w_{\alpha ,N}}^2(0,\infty )\) has the Laguerre-type polynomials as a complete set of eigenfunctions.

Our next crucial step is to determine a continuous system of functions \(\lbrace \Phi _\lambda ^{\alpha ,N}(x)\rbrace _{\lambda >0}\), which are eigenfunctions of the Laguerre-type operator \(\mathcal {L}_x^{\alpha ,N}\) and reduce, as \(N\rightarrow 0+ \), to the functions \(\Phi _\lambda ^\alpha (x)\) defined in (3.2). Since \(\mathcal {L}_x^{\alpha ,N}\) is independent of the eigenvalue parameter, it suffices to look for continuous counterparts of the Laguerre-type polynomials and their eigenvalues. Observing that for \(\alpha \in \mathbb {N}_0\), \(b_n^\alpha =(\alpha +2)_{n-1}/(n-1)!=(n)_{\alpha +1}/(\alpha +1)!\), we first rewrite the representations (3.15) as

$$\begin{aligned} \begin{aligned} R_n^{\alpha ,N}(x)=&\bigg [1+N\frac{(n)_{\alpha +1}}{(\alpha +1)!}\bigg ]\Phi (-n,\alpha +1;x) -N\frac{(n)_{\alpha +1}}{(\alpha +1)!}\Phi (1-n,\alpha +2;x)\\ =&\Phi (-n,\alpha +1;x)-\frac{N}{\alpha +1}\frac{(n)_{\alpha +2}\,x}{(\alpha +2)!} \Phi (1-n,\alpha +3;x). \end{aligned} \end{aligned}$$
(3.18)

Replacing now \(n\in \mathbb {N}_0\) by \(\lambda -\frac{ \alpha +1}{2}\), the required Whittaker-type functions and their spectral differential equation follow by analytic continuation.

Proposition 3.3

For \(\alpha \in \mathbb {N}_0, N>0\), and \(e_\lambda ^\alpha :=\lambda -\frac{ \alpha +1}{2}\ge 0\), let the Whittaker-type functions be defined on \(0\le x<\infty \) by

$$\begin{aligned} \begin{aligned} \Phi _\lambda ^{\alpha ,N}(x)=&\bigg [1+N\frac{(e_\lambda ^\alpha )_{\alpha +1}^{}}{(\alpha +1)!}\bigg ]\Phi (-e_\lambda ^\alpha ,\alpha +1;x) -N\frac{(e_\lambda ^\alpha )_{\alpha +1}^{}}{(\alpha +1)!}\Phi (1-e_\lambda ^\alpha ,\alpha +2;x)\\ =&\Phi (-e_\lambda ^\alpha ,\alpha +1;x)-\frac{N}{\alpha +1}\frac{(e_\lambda ^\alpha )_{\alpha +2}^{}\,x}{(\alpha +2)!} \Phi (1-e_\lambda ^\alpha ,\alpha +3;x). \end{aligned} \end{aligned}$$

Then there holds, with \(\mathcal {L}_x^{\alpha ,N}\) as in (3.16),

$$\begin{aligned} \begin{aligned}&\mathcal {L}_x^{\alpha ,N} \Phi _\lambda ^{\alpha ,N}(x):=\big \lbrace \mathcal {L}_x^\alpha +\dfrac{N}{(\alpha +2)!} \mathcal {T}_x^\alpha \big \rbrace \Phi _\lambda ^{\alpha ,N}(x)=-e_\lambda ^{\alpha ,N}\Phi _\lambda ^{\alpha ,N}(x),\\&\quad \text {where}\, e_\lambda ^{\alpha ,N}=e_\lambda ^\alpha +N\frac{(e_\lambda ^\alpha )_{\alpha +2}^{}}{(\alpha +2)!}= e_\lambda ^\alpha \bigg [1+N\frac{(e_\lambda ^\alpha +1)_{\alpha +1}^{}}{(\alpha +2)!}\bigg ]. \end{aligned} \end{aligned}$$
(3.19)

In particular,

$$\begin{aligned} \Phi _{n+(\alpha +1)/2}^{\alpha ,N}(x)=R_n^{\alpha ,N}(x),\, e_{n+(\alpha +1)/2}^{\alpha ,N}=n+N\frac{(n)_{\alpha +2}}{(\alpha +2)!}=\lambda _n^{\alpha ,N}, \,n\in \mathbb {N}_0. \end{aligned}$$

As in the case \(N=0\), it is clear that for \(N>0\) and any \(\lambda \ne n+(\alpha +1)/2\), the Whittaker-type functions \( \Phi _\lambda ^{\alpha ,N}(x)\) do not belong to \(L_{w_{\alpha ,N}}^2(0,\infty )\). So once more, we restrict the differential equation (3.19) to the finite interval (0, 1] and define the operator

$$\begin{aligned} (\hat{\mathcal {L}}_x^{\alpha ,N} f)(x)=\bigg \lbrace \begin{aligned}&(\mathcal {L}_x^{\alpha ,N}f)(x),\,0<x \le 1\\&(\alpha +1)f'(0),\,x=0 \end{aligned} \end{aligned}$$
(3.20)

for functions in the domain

$$\begin{aligned} D(\hat{\mathcal {L}}_x^{\alpha ,N}) =\bigg \lbrace \begin{aligned}&f:\,[0,1]\rightarrow \mathbb {C};\,f\in AC_{loc}^{(2\alpha +3)}(0,1],\\&\mathcal {T}_x^\alpha f\in L_{w_\alpha }^2((0,1]),\,\lim _{x\rightarrow 0+}f(x)=f(0) \end{aligned} \bigg \rbrace . \end{aligned}$$

In particular, we note that \(\Phi _\lambda ^{\alpha ,N}(x)\in D(\hat{\mathcal {L}}_x^{\alpha ,N})\), and

$$\begin{aligned} \begin{aligned} (\hat{\mathcal {L}}_x^{\alpha ,N}\Phi _\lambda ^{\alpha ,N})(0)&:=(\alpha +1)(\Phi _\lambda ^{\alpha ,N})'(0)\\&=(\alpha +1)\Phi '(-e_\lambda ^\alpha ,\alpha +1;0) -N\frac{(e_\lambda ^\alpha )_{\alpha +2}^{}}{(\alpha +2)!}\Phi (1-e_\lambda ^\alpha ,\alpha +3;0)\\&=-e_\lambda ^\alpha -N\frac{(e_\lambda ^\alpha )_{\alpha +2}^{}}{(\alpha +2)!}=-e_\lambda ^{\alpha ,N}= \lim _{x\rightarrow 0+}\big [\mathcal {L}_x^{\alpha ,N}\Phi _\lambda ^{\alpha ,N}(x) \big ]. \end{aligned} \end{aligned}$$

The operator \(\hat{\mathcal {L}}_x^{\alpha ,N}\) satisfies the following Green-type formula.

Theorem 3.2

Let \(\alpha \in \mathbb {N}_0,\,N>0\). For any \(f,g\in D(\hat{\mathcal {L}}_x^{\alpha ,N})\), there holds

$$\begin{aligned} \begin{aligned}&(\hat{\mathcal {L}}_x^{\alpha ,N}f,g)_{w_{\alpha ,N}}-(f,\hat{\mathcal {L}}_x^{\alpha ,N}g)_{w_{\alpha ,N}}\\&\quad :=\frac{1}{\alpha !}\int _{0+}^{1}\bigg \lbrace (\mathcal {L}_x^{\alpha ,N}f)(x)g(x) -(\mathcal {L}_x^{\alpha ,N}g)(x)f(x)\bigg \rbrace e^{-x}x^\alpha dx+ \\&\quad \quad +N \big \lbrace (\hat{\mathcal {L}}_x^{\alpha ,N}f)(0)g(0) -(\hat{\mathcal {L}}_x^{\alpha ,N}g)(0)f(0)\big \rbrace \\&\quad =\frac{1}{e\,\alpha !}[f'(1)g(1)-f(1)g'(1)]+ \frac{N}{(\alpha +2)!\,\alpha !}\big [K^\alpha (f,g)(1)-K^\alpha (g,f)(1)\big ],\, \text {where}\\&K^\alpha (f,g)(x):=\sum _{j=0}^{\alpha +1}(-1)^{j+\alpha +1} D_x^{\alpha +1-j}\big \lbrace e^{-x}D_x^{\alpha +2}[x^{\alpha +1}f(x)]\big \rbrace \, D_x^j[x^{\alpha +1}g(x)]. \end{aligned} \end{aligned}$$
(3.21)

Proof

We consider the two components of the operator \(\mathcal {L}_x^{\alpha ,N} = \mathcal {L}_x^\alpha +\dfrac{N}{(\alpha +2)!} \mathcal {T}_x^\alpha \) separately. According to (3.4), the first part yields

$$\begin{aligned} \begin{aligned}&\frac{1}{\alpha !}\int _{0+}^{1}\bigg \lbrace (\mathcal {L}_x^\alpha f)(x)g(x) -(\mathcal {L}_x^\alpha g)(x)f(x)\bigg \rbrace e^{-x}x^\alpha dx\\&=\frac{1}{\alpha !}e^{-x}x^{\alpha +1}\big [f'(x)g(x)-f(x)g'(x)\big ]\,\vert _{x=1}, \end{aligned} \end{aligned}$$

while, after applying an \((\alpha +2)\)-fold integration by parts, the second part becomes

$$\begin{aligned} \begin{aligned}&\frac{N(-1)^{\alpha +1}}{(\alpha +2)!\,\alpha !}\int _{0+}^{1}\bigg [\begin{aligned}&D_x^{\alpha +2}\big \lbrace e^{-x}D_x^{\alpha +2}[x^{\alpha +1}f(x)]\big \rbrace g(x)x^{\alpha +1}-\\&-D_x^{\alpha +2}\big \lbrace e^{-x}D_x^{\alpha +2}[x^{\alpha +1}g(x)]\big \rbrace f(x)x^{\alpha +1} \end{aligned} \bigg ]dx\\&\quad =\frac{N(-1)^{\alpha +1}}{(\alpha +2)!\,\alpha !}\sum _{j=0}^{\alpha +1}(-1)^j\bigg [\begin{aligned}&D_x^{\alpha +1-j}\big \lbrace e^{-x}D_x^{\alpha +2}[x^{\alpha +1}f(x)]\big \rbrace D_x^j[x^{\alpha +1}g(x)]-\\&-D_x^{\alpha +1-j}\big \lbrace e^{-x}D_x^{\alpha +2}[x^{\alpha +1}g(x)]\big \rbrace D_x^j[x^{\alpha +1}f(x)]\end{aligned} \bigg ]\bigg \vert _{x=0+}^{x=1}\\&\quad =\frac{N}{(\alpha +2)!\,\alpha !}\big [K^\alpha (f,g)(x)-K^\alpha (g,f)(x)\big ]\big \vert _{x=0+}^{x=1}\,, \end{aligned} \end{aligned}$$

since the resulting integrals cancel each other by symmetry in \(f\) and \(g\). Evaluating the latter expression at the origin, we find that only the term for \(j=\alpha +1\) contributes to the sum. In fact, it is easy to see that for any \(h\in D(\hat{\mathcal {L}}_x^{\alpha ,N})\),

$$\begin{aligned} \lim _{x\rightarrow 0+}D_x^j[x^{\alpha +1}h(x)]= {\left\{ \begin{array}{ll} &{}0,\,j<\alpha +1\\ &{}(\alpha +1)!\,h(0),\,j=\alpha +1\\ &{}(\alpha +2)!\,h'(0),\,j=\alpha +2 \end{array}\right. } \end{aligned}$$

and thus

$$\begin{aligned} \begin{aligned}&\lim _{x\rightarrow 0+}\bigg [D_x^{\alpha +1-j}\big \lbrace e^{-x}D_x^{\alpha +2}[x^{\alpha +1}f(x)]\big \rbrace D_x^j[x^{\alpha +1}g(x)]\bigg ]\\&\quad = {\left\{ \begin{array}{ll} &{}0,\, 0\le j<\alpha +1\\ &{}(\alpha +2)!\,(\alpha +1)!\,f'(0)g(0),\, j=\alpha +1 \end{array}\right. } . \end{aligned} \end{aligned}$$

So we arrive at

$$\begin{aligned} \frac{-N}{(\alpha +2)!\,\alpha !}\big [K^\alpha (f,g)(0+)-K^\alpha (g,f)(0+)\big ]=-N(\alpha +1)\big [f'(0)g(0)-f(0)g'(0)\big ]. \end{aligned}$$

This, however, annihilates the term \(N\big \lbrace (\hat{\mathcal {L}}_x^{\alpha ,N}f)(0)g(0) -(\hat{\mathcal {L}}_x^{\alpha ,N}g)(0)f(0)\big \rbrace \) on the right-hand side of (3.21), which settles the proof of Thm. 3.2. \(\square \)
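The limit relations for \(D_x^j[x^{\alpha +1}h(x)]\) used in the proof above are readily confirmed symbolically. A sketch (not part of the paper) with sympy, for \(\alpha =2\) and an arbitrarily chosen smooth test function \(h\):

```python
# Illustrative symbolic check (not from the paper) of the one-sided limits of
# D_x^j [x^{alpha+1} h(x)] as x -> 0+, for alpha = 2.
import sympy as sp

x = sp.symbols('x')
alpha = 2
h = 1 + 3 * x + 5 * x**2 + sp.sin(x)   # arbitrary smooth test function: h(0) = 1, h'(0) = 4
g = x ** (alpha + 1) * h

# limits for j = 0, ..., alpha + 2
lims = [sp.limit(sp.diff(g, x, j), x, 0, '+') for j in range(alpha + 3)]
```

As predicted, the limits vanish for \(j<\alpha +1\) and equal \((\alpha +1)!\,h(0)\) and \((\alpha +2)!\,h'(0)\) for \(j=\alpha +1\) and \(j=\alpha +2\), respectively.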

For \(\alpha =0,\,N\ge 0\), the quantities above become

$$\begin{aligned} \begin{aligned}&\Phi _\lambda ^{0,N}(x)=[1+Ne_\lambda ^0]\Phi (-e_\lambda ^0,1;x)-Ne_\lambda ^0\,\Phi (1-e_\lambda ^0,2;x),\, e_\lambda ^0=\lambda -\frac{1}{2}\ge 0,\\&\mathcal {L}_x^{0,N}f(x) = e^x D_x[e^{-x}xD_xf(x)]-\dfrac{N}{2}e^x xD_x^2\big \lbrace e^{-x}D_x^2[xf(x)]\big \rbrace ,\\&e_\lambda ^{0,N}=e_\lambda ^0+\frac{N}{2}(e_\lambda ^0)_2^{}=\lambda -\frac{1}{2}+\frac{N}{2} \big (\lambda ^2-\frac{1}{4}\big ),\\&(\hat{\mathcal {L}}_x^{0,N} f)(x)=\bigg \lbrace \begin{aligned}&(\mathcal {L}_x^{0,N}f)(x),\,0<x \le 1\\&f'(0),\,x=0 \end{aligned},\\&D(\hat{\mathcal {L}}_x^{0,N}) =\bigg \lbrace f:\,f\in AC_{loc}^{(3)}(0,1],\, \mathcal {T}_x^0 f\in L_{w_0}^2((0,1]),\,\lim _{x\rightarrow 0+}f(x)=f(0) \bigg \rbrace . \end{aligned} \end{aligned}$$
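For \(e_\lambda ^0=n\in \mathbb {N}_0\) the function \(\Phi _\lambda ^{0,N}\) is a polynomial, so the fourth-order eigenvalue equation \(\mathcal {L}_x^{0,N}\Phi _\lambda ^{0,N}=-e_\lambda ^{0,N}\Phi _\lambda ^{0,N}\) can be verified exactly. A symbolic sketch (not part of the paper; the degree \(n=3\) is an arbitrary test case) with sympy:

```python
# Illustrative symbolic check (not from the paper): the alpha = 0 Laguerre-type
# operator applied to Phi_lambda^{0,N} with e_lambda^0 = n = 3.
import sympy as sp

x, N = sp.symbols('x N')
n = 3  # polynomial case: lambda = n + 1/2

def Phi(a, b):
    # Kummer function Phi(a, b; x) as a terminating series (a is a nonpositive integer here)
    return sum(sp.rf(a, j) / (sp.rf(b, j) * sp.factorial(j)) * x**j for j in range(n + 2))

# Phi_lambda^{0,N}(x) with e_lambda^0 = n
f = (1 + N * n) * Phi(-n, 1) - N * n * Phi(1 - n, 2)

# L_x^{0,N} f = e^x D[e^{-x} x D f] - (N/2) e^x x D^2 { e^{-x} D^2 [x f] }
L0 = sp.exp(x) * sp.diff(sp.exp(-x) * x * sp.diff(f, x), x)
T0 = -sp.Rational(1, 2) * N * sp.exp(x) * x * sp.diff(sp.exp(-x) * sp.diff(x * f, x, 2), x, 2)

e = n + sp.Rational(1, 2) * N * n * (n + 1)   # e_lambda^{0,N} = n + (N/2)(n)_2
residual = sp.expand(L0 + T0 + e * f)         # should vanish identically
```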

Corollary 3.1

For \(f,g\in D(\hat{\mathcal {L}}_x^{0,N})\) set \(F(x)=xf(x),\,G(x)=xg(x)\). Then there holds

$$\begin{aligned} \begin{aligned}&\int _{0+}^{1}\bigg \lbrace (\mathcal {L}_x^{0,N}f)(x)g(x) -(\mathcal {L}_x^{0,N}g)(x)f(x)\bigg \rbrace e^{-x} dx+\\&\quad \quad +N \big \lbrace (\hat{\mathcal {L}}_x^{0,N}f)(0)g(0) -(\hat{\mathcal {L}}_x^{0,N}g)(0)f(0)\big \rbrace \\&\quad =\frac{1}{e}[F'(1)G(1)-F(1)G'(1)]+ \frac{N}{2}\big [\hat{K}(F,G)(1)-\hat{K}(G,F)(1)\big ], \,\text {where}\\&\hat{K}(F,G)(1)=\frac{1}{e}\big \lbrace \big [-F'''(1)+F''(1)\big ]G(1)+F''(1)G'(1)\big \rbrace . \end{aligned} \end{aligned}$$
(3.22)

It is still an open problem whether the approach just outlined can be applied to establish a new Kramer-type theorem for \(N>0\). In the case \(\alpha =0\), for instance, the Whittaker-type functions \(\Phi _\lambda ^{0,N}(x),\,0\le x\le 1,\,\lambda \ge 1/2\), may serve as a Kramer kernel \(K(x,\lambda )\). In order to generalize the sampling function \(S_k^0(\lambda )\) in (3.14) to a function \(S_k^{0,N}(\lambda )\) via the Green-type formula (3.22), we have to find a sequence of eigenvalues \(\delta _k^{0,N},\,k\in \mathbb {N}_0\), say, such that the functions \(\{K(x,\delta _k^{0,N})\}_{k\in \mathbb {N}_0}\) form a complete orthogonal system in the respective Hilbert space.

4 Sampling theory associated with the finite Jacobi equation

In 1991, Zayed [35, Example 4] established a sampling theorem associated with the continuous Jacobi equation on the finite interval \(-1<x<1\). For \(\alpha ,\beta >-1\), \(\tau =(\alpha +\beta +1)/2\) and eigenvalue parameter \(\lambda >0\), this equation may be written in the form

$$\begin{aligned} \begin{aligned}&\mathcal {L}_x^{\alpha ,\beta }y_\lambda (x):=(1-x)^{-\alpha }(1+x)^{-\beta }D_x\big [(1-x)^{\alpha +1}(1+x)^{\beta +1}D_xy_\lambda (x)\big ]\\&\quad \equiv \big \lbrace (1-x^2)D_x^2+[\beta -\alpha -(\alpha +\beta +2)x]\,D_x \big \rbrace y_\lambda (x)=(\tau ^2-\lambda ^2)y_\lambda (x). \end{aligned} \end{aligned}$$
(4.1)

The solutions of Eq.(4.1) that are regular at its singular point \(x=1\) are the hypergeometric functions

$$\begin{aligned} R_{\lambda -\tau }^{\alpha ,\beta }(x):={}_2F_1\big (\tau -\lambda ,\tau +\lambda ;\alpha +1;\frac{1-x}{2}\big )\,\text {with}\, R_{\lambda -\tau }^{\alpha ,\beta }(1)=1. \end{aligned}$$
(4.2)

At the other endpoint \(x=-1\), these (finite continuous) Jacobi functions behave asymptotically like, cf. [9, 2.10],

$$\begin{aligned} R_{\lambda -\tau }^{\alpha ,\beta }(x)\cong \mathrm {const}\,{\left\{ \begin{array}{ll} &{}(1+x)^{-\max (\beta ,0)},\,\beta \ne 0\\ &{}\log ((1+x)^{-1}),\,\beta =0 \end{array}\right. }\,,\,x\rightarrow -1+, \end{aligned}$$
(4.3)

unless \(\lambda =n+\tau ,\,n\in \mathbb {N}_0\). In these cases they reduce to the (normalized) Jacobi polynomials \(R_n^{\alpha ,\beta }(x)\) in (1.5) with eigenvalues \(\lambda ^2-\tau ^2=n(n+\alpha +\beta +1)=:\lambda _n^{\alpha ,\beta }\). Otherwise, the functions \(R_{\lambda -\tau }^{\alpha ,\beta }(x)\) belong to the space \(L_{w_{\alpha ,\beta }}^2(-1,1)\) with \(w_{\alpha ,\beta }\) as in (1.11), provided that \(-1<\beta <1\). Following Genuit and Schöttler [18, La.C] (where \(\lambda \) is replaced by \(\lambda ^2\) for simplicity), the corresponding sampling theorem can be stated as follows.
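The reduction of (4.2) to the normalized Jacobi polynomials can be checked numerically. A sketch (not part of the paper) with mpmath, where the parameter values \(\alpha =1/2,\,\beta =-1/4\) are arbitrary and \(R_n^{\alpha ,\beta }(x)=P_n^{(\alpha ,\beta )}(x)/P_n^{(\alpha ,\beta )}(1)\) with \(P_n^{(\alpha ,\beta )}(1)=(\alpha +1)_n/n!\):

```python
# Illustrative check (not from the paper): for lambda = n + tau the function (4.2)
# coincides with the normalized Jacobi polynomial.
from mpmath import mp, hyp2f1, jacobi, rf, factorial

mp.dps = 25
a, b = mp.mpf('0.5'), mp.mpf('-0.25')   # arbitrary test parameters
tau = (a + b + 1) / 2

results = []
for n in (2, 5):
    lam = n + tau
    for x in (mp.mpf('-0.9'), mp.mpf('0.0'), mp.mpf('0.6')):
        lhs = hyp2f1(tau - lam, tau + lam, a + 1, (1 - x) / 2)       # R_{lambda-tau}^{a,b}(x)
        rhs = jacobi(n, a, b, x) / (rf(a + 1, n) / factorial(n))     # P_n^{(a,b)}(x) / P_n^{(a,b)}(1)
        results.append((lhs, rhs))
```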

Lemma 4.1

For \(\alpha >-1,\,-1<\beta <1\), and \(\tau =(\alpha +\beta +1)/2\), let a function \(F(\lambda ),\,\lambda >0\), be representable as

$$\begin{aligned} F(\lambda )=\big (R_{\lambda -\tau }^{\alpha ,\beta },g\big )_{w_{\alpha ,\beta }}=\int _{-1}^{1} R_{\lambda -\tau }^{\alpha ,\beta }(x)g(x)w_{\alpha ,\beta }(x)dx,\,g\in L_{w_{\alpha ,\beta }}^2(-1,1). \end{aligned}$$

Then there holds

$$\begin{aligned} F(\lambda )=\sum _{k=0}^{\infty }F(k+\tau )\frac{2(k+\tau )\Gamma (k+2\tau )}{k!} \frac{\Gamma (\lambda -\tau +1)}{\Gamma (\lambda +\tau )}\frac{\sin (\pi [\lambda -\tau -k])}{\pi [\lambda ^2-(k+\tau )^2]}\,. \end{aligned}$$
(4.4)

Lemma 4.2

The expansion (4.4) is equivalent to

$$\begin{aligned} F(\lambda )=\sum _{k=0}^{\infty }F(k+\tau )S_k^{\alpha ,\beta }(\lambda ),\, S_k^{\alpha ,\beta }(\lambda )= \frac{\big (R_{\lambda -\tau }^{\alpha ,\beta },R_k^{\alpha ,\beta }\big )_{w_{\alpha ,\beta }}}{\big (R_k^{\alpha ,\beta },R_k^{\alpha ,\beta }\big )_{w_{\alpha ,\beta }}} . \end{aligned}$$
(4.5)

Proof

Due to well-known properties of the Jacobi polynomials [9, 10.8], there holds

$$\begin{aligned} \int _{-1}^{1}[R_k^{\alpha ,\beta }(x)]^2(1-x)^\alpha (1+x)^\beta dx= \frac{2^{\alpha +\beta +1}k!\,(\beta +1)_k\Gamma (\alpha +1)\Gamma (\beta +1)}{(2k+\alpha +\beta +1)(\alpha +1)_k \Gamma (k+\alpha +\beta +1)}, \end{aligned}$$

while Eq.(4.1) and the Green formula for the operator \(\mathcal {L}_x^{\alpha ,\beta }\) yield

$$\begin{aligned} \begin{aligned}&\big [(k+\tau )^2-\lambda ^2\big ]\int _{-1}^{1} R_{\lambda -\tau }^{\alpha ,\beta }(x)R_k^{\alpha ,\beta }(x)(1-x)^\alpha (1+x)^\beta dx\\&\quad =(1-x)^{\alpha +1}(1+x)^{\beta +1}\big [(R_{\lambda -\tau }^{\alpha ,\beta })'(x) R_k^{\alpha ,\beta }(x)-R_{\lambda -\tau }^{\alpha ,\beta }(x)(R_k^{\alpha ,\beta })'(x)\big ]\,\big \vert _{x=-1+}^{1-}. \end{aligned} \end{aligned}$$

Evaluating this expression at \(x=\pm 1\), it follows by (4.2) that the only non-vanishing part is

$$\begin{aligned} \begin{aligned}&-2^{\alpha +1} R_k^{\alpha ,\beta }(-1)\lim _{x\rightarrow -1+}\big \lbrace (1+x)^{\beta +1}(R_{\lambda -\tau }^{\alpha ,\beta })'(x)\big \rbrace \\&\quad =(-1)^k\frac{(\beta +1)_k}{(\alpha +1)_k}2^{\alpha +\beta +1}\,\frac{(\tau ^2-\lambda ^2)}{\alpha +1} \cdot \\&\quad \qquad \cdot \lim _{z\rightarrow 1-}\big \lbrace (1-z)^{\beta +1}{}_2F_1(\tau +1-\lambda ,\tau +1+\lambda ;\alpha +2;z)\big \rbrace \\&\quad =(-1)^k 2^{\alpha +\beta +1}\,\frac{(\beta +1)_k}{(\alpha +1)_k}\frac{\Gamma (\alpha +1)\,\Gamma (\beta +1)}{\Gamma (\tau -\lambda )\,\Gamma (\tau +\lambda )}. \end{aligned} \end{aligned}$$

Hence we arrive at

$$\begin{aligned} S_k^{\alpha ,\beta }(\lambda )= \frac{\big (R_{\lambda -\tau }^{\alpha ,\beta },R_k^{\alpha ,\beta }\big )_{w_{\alpha ,\beta }}}{\big (R_k^{\alpha ,\beta },R_k^{\alpha ,\beta }\big )_{w_{\alpha ,\beta }}}=\frac{(-1)^{k+1}}{\lambda ^2-(k+\tau )^2}\,\frac{2(k+\tau )\,\Gamma (k+2\tau )}{k!\,\Gamma (\tau -\lambda )\, \Gamma (\tau +\lambda )}. \end{aligned}$$

In view of the property of the \(\Gamma \)-function [9, 1.2],

$$\begin{aligned} \frac{(-1)^{k+1}}{\Gamma (\tau -\lambda )\,\Gamma (\lambda -\tau +1)}=\frac{\sin (\pi [\lambda -\tau -k])}{\pi }, \end{aligned}$$

the representation (4.4) follows. \(\square \)
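Both lemmas admit a quick numerical sanity check. The sketch below (illustrative, not from the paper; the values \(\alpha =0.3,\,\beta =0.4\) are arbitrary) confirms that the sampling functions in (4.4) satisfy \(S_k^{\alpha ,\beta }(j+\tau )=\delta _{jk}\), the diagonal value being approximated by a small offset, and verifies the reflection identity used in the last step of the proof:

```python
# Illustrative check (not from the paper) of the sampling functions in (4.4)
# and of the Gamma-reflection identity.
from mpmath import mp, gamma, sin, pi, factorial

mp.dps = 30
a, b = mp.mpf('0.3'), mp.mpf('0.4')   # arbitrary test parameters
tau = (a + b + 1) / 2

def S(k, lam):
    # closed form of the sampling function from (4.4)
    return (2 * (k + tau) * gamma(k + 2 * tau) / factorial(k)
            * gamma(lam - tau + 1) / gamma(lam + tau)
            * sin(pi * (lam - tau - k)) / (pi * (lam**2 - (k + tau)**2)))

h = mp.mpf('1e-15')                                   # offset replacing lambda -> k + tau
diag = [S(k, k + tau + h) for k in range(3)]          # should be close to 1
off = [S(k, j + tau) for k in range(3) for j in range(3) if j != k]  # should vanish

# reflection identity: (-1)^{k+1} / (Gamma(tau-lam) Gamma(lam-tau+1)) = sin(pi(lam-tau-k)) / pi
k, lam = 2, mp.mpf('1.7')
lhs = (-1) ** (k + 1) / (gamma(tau - lam) * gamma(lam - tau + 1))
rhs = sin(pi * (lam - tau - k)) / pi
```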

For \(\alpha \in \mathbb {N}_0,\,\beta >-1,\) and \(N>0\), the (normalized) Jacobi-type polynomials \(\{R_n^{\alpha ,\beta ,N}(x)\}_{n\in \mathbb {N}_0}\) are given by \(R_0^{\alpha ,\beta ,N}(x)\equiv 1\) and, for \(n\in \mathbb {N}\), in terms of the polynomials \(\{R_n^{\alpha ,\beta }(x)\}_{n\in \mathbb {N}}\) by, see [30],

$$\begin{aligned} \begin{aligned} R_n^{\alpha ,\beta ,N}(x)&=R_n^{\alpha ,\beta }(x)-\frac{N}{\alpha +1}b_n^{\alpha +1,\beta } \big (\frac{1-x}{2}\big ) R_{n-1}^{\alpha +2,\beta }(x),\,n\in \mathbb {N},\,\text {where}\\ b_n^{\alpha +1,\beta }&:=\frac{(\alpha +3)_{n-1}\,(\alpha +\beta +2)_n}{(n-1)!\,(\beta +1)_{n-1}}= \frac{(n)_{\alpha +2}\,(n+\beta )_{\alpha +2}}{(\alpha +2)!\,(\beta +1)_{\alpha +1}}. \end{aligned} \end{aligned}$$
(4.6)
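The two expressions for \(b_n^{\alpha +1,\beta }\) in (4.6) agree for \(\alpha \in \mathbb {N}_0\); a numerical spot check (not from the paper; the value \(\beta =0.4\) is an arbitrary choice):

```python
# Illustrative check (not from the paper) of the two forms of b_n^{alpha+1,beta} in (4.6).
from mpmath import mp, rf, factorial

mp.dps = 25
beta = mp.mpf('0.4')   # arbitrary test value

checks = []
for alpha in (0, 1, 3):
    for n in (1, 2, 6):
        lhs = rf(alpha + 3, n - 1) * rf(alpha + beta + 2, n) / (factorial(n - 1) * rf(beta + 1, n - 1))
        rhs = rf(n, alpha + 2) * rf(n + beta, alpha + 2) / (factorial(alpha + 2) * rf(beta + 1, alpha + 1))
        checks.append((lhs, rhs))
```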

These polynomials form a complete orthogonal system with respect to the inner product \((f,g)_{w_{\alpha ,\beta ,N}}\) in (1.10).

Proposition 4.1

[30] For \(\alpha ,\beta ,N\) as above, the Jacobi-type polynomials satisfy the differential equation

$$\begin{aligned} \begin{aligned} \mathcal {L}_x^{\alpha ,\beta ,N} R_n^{\alpha ,\beta ,N}(x):=&\big \lbrace \mathcal {L}_x^{\alpha ,\beta }+\dfrac{N\,\mathcal {T}_x^{\alpha ,\beta }}{(\alpha +2)!\,(\beta +1)_{\alpha +1}}\big \rbrace R_n^{\alpha ,\beta ,N}(x)\\ =&-\lambda _n^{\alpha ,\beta ,N} R_n^{\alpha ,\beta ,N}(x),\,-1<x<1, \end{aligned} \end{aligned}$$
(4.7)

where \(\mathcal {L}_x^{\alpha ,\beta }\) is the classical operator in (4.1) and \(\mathcal {T}_x^{\alpha ,\beta }\) denotes the differential operator

$$\begin{aligned} \mathcal {T}_x^{\alpha ,\beta } y(x)=\frac{1-x}{(1+x)^{\beta }}D_x^{\alpha +2}\big \lbrace (1+x)^{\alpha +\beta +2}D_x^{\alpha +2}[(x-1)^{\alpha +1}y(x)]\big \rbrace . \end{aligned}$$
(4.8)

Furthermore, the eigenvalue parameter consists of the two components

$$\begin{aligned} \begin{aligned}&\lambda _n^{\alpha ,\beta ,N}=\lambda _n^{\alpha ,\beta }+\frac{N\,t_n^{\alpha ,\beta }}{(\alpha +2)!\,(\beta +1)_{\alpha +1}},\,\text {where}\\&\lambda _n^{\alpha ,\beta }=n(n+\alpha +\beta +1),\,t_n^{\alpha ,\beta }=(n)_{\alpha +2}\,(n+\beta )_{\alpha +2} . \end{aligned} \end{aligned}$$
(4.9)

As in the case \(N=0\), the differential operator \(\mathcal {L}_x^{\alpha ,\beta ,N}\) is independent of n. Hence, when replacing n again by \(\lambda -\tau =\lambda -(\alpha +\beta +1)/2\), Eq.(4.7) gives rise to the finite continuous Jacobi-type equation

$$\begin{aligned} \mathcal {L}_x^{\alpha ,\beta ,N} \varphi _\lambda (x)=-e_\lambda ^{\alpha ,\beta ,N} \varphi _\lambda (x),\,-1<x<1, \,\lambda >0, \end{aligned}$$
(4.10)

where its solutions are given in terms of the functions \(R_{\lambda -\tau }^{\alpha ,\beta }(x)\) in (4.2) by

$$\begin{aligned} \begin{aligned}&\varphi _\lambda (x)=R_{\lambda -\tau }^{\alpha ,\beta ,N}(x)\\&\quad :=R_{\lambda -\tau }^{\alpha ,\beta }(x)-\frac{N}{\alpha +1}\frac{(\lambda -\tau )_{\alpha +2}\,(\lambda -\tau +\beta )_{\alpha +2}}{(\alpha +2)!\,(\beta +1)_{\alpha +1}} \big (\frac{1-x}{2}\big ) R_{\lambda -\tau -1}^{\alpha +2,\beta }(x). \end{aligned} \end{aligned}$$
(4.11)

The corresponding eigenvalues of these (finite) Jacobi-type functions are

$$\begin{aligned} e_\lambda ^{\alpha ,\beta ,N}=\lambda ^2-\tau ^2+ \frac{N\,(\lambda -\tau )_{\alpha +2}\,(\lambda -\tau +\beta )_{\alpha +2}}{(\alpha +2)!\,(\beta +1)_{\alpha +1}}. \end{aligned}$$
(4.12)

In view of the asymptotic behaviour (4.3), both components of the functions \( R_{\lambda -\tau }^{\alpha ,\beta ,N}(x)\) belong to \(L_{w_{\alpha ,\beta ,N}}^2(-1,1)\), if \(\alpha \in \mathbb {N}_0,\,-1<\beta <1,\,N>0\).

Since for \(\lambda =n+\tau \), the Jacobi-type functions reduce to the complete orthogonal polynomial system \(\{R_n^{\alpha ,\beta ,N}\}_{n\in \mathbb {N}_0}\), we can use them as a Kramer kernel, i.e. \(K(x,\lambda ^2)=R_{\lambda -\tau }^{\alpha ,\beta ,N}(x)\). Hence the results in La. 4.1–2 extend to a Kramer-type sampling theorem associated with the higher-order Jacobi-type equation (4.7).

Corollary 4.1

Let \(\alpha \in \mathbb {N}_0,-1<\beta <1,\,N>0,\) and \(\tau =(\alpha +\beta +1)/2.\) If a function \(F(\lambda ),\,\lambda >0\), is representable as

$$\begin{aligned} F(\lambda )=\big (R_{\lambda -\tau }^{\alpha ,\beta ,N},g\big )_{w_{\alpha ,\beta ,N}}=\int _{-1}^{1} R_{\lambda -\tau }^{\alpha ,\beta ,N}(x)g(x)w_{\alpha ,\beta }(x)dx+N\,\overline{g(1)} \end{aligned}$$

for some \(g\in L_{w_{\alpha ,\beta ,N}}^2(-1,1)\), then there holds

$$\begin{aligned} F(\lambda )=\sum _{k=0}^{\infty }F(k+\tau )S_k^{\alpha ,\beta ,N}(\lambda ),\, S_k^{\alpha ,\beta ,N}(\lambda )= \frac{\big (R_{\lambda -\tau }^{\alpha ,\beta ,N},R_k^{\alpha ,\beta ,N}\big )_{w_{\alpha ,\beta ,N}}}{\big (R_k^{\alpha ,\beta ,N},R_k^{\alpha ,\beta ,N}\big )_{w_{\alpha ,\beta ,N}}} . \end{aligned}$$
(4.13)

For a further evaluation of the inner products defining the sampling functions \(S_k^{\alpha ,\beta ,N}(\lambda )\), one may proceed as in the proof of La. 4.2 for \(N=0\). To this end, we need a Green-type formula for the Jacobi-type differential operator \(\mathcal {L}_x^{\alpha ,\beta ,N}\), applied to the functions arising in (4.13). At the right endpoint \(x\rightarrow 1-\), the operator is naturally extended to

$$\begin{aligned} \big (\mathcal {L}_x^{\alpha ,\beta ,N}f\big )(1):=\big (\mathcal {L}_x^{\alpha ,\beta }f\big )(1)=-2(\alpha +1)f'(1). \end{aligned}$$
(4.14)
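The boundary value (4.14) follows directly from the second-order form of (4.1): at \(x=1\) the leading term vanishes and the coefficient of the first derivative reduces to \(\beta -\alpha -(\alpha +\beta +2)=-2(\alpha +1)\). A symbolic spot check (not from the paper; the test function is an arbitrary smooth choice):

```python
# Illustrative symbolic check (not from the paper) of the boundary value (4.14).
import sympy as sp

x, a, b = sp.symbols('x alpha beta')
f = sp.sin(x) + x**3          # arbitrary smooth test function

# classical Jacobi operator from (4.1), second-order form
Lf = (1 - x**2) * sp.diff(f, x, 2) + (b - a - (a + b + 2) * x) * sp.diff(f, x)

boundary = Lf.subs(x, 1)                          # value of the operator at x = 1
target = -2 * (a + 1) * sp.diff(f, x).subs(x, 1)  # -2(alpha+1) f'(1)
```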

Theorem 4.1

  1. (a)

    Let \(\alpha ,\beta ,N\) be given as in Cor. 4.1. For sufficiently smooth functions \(f,g:(-1,1]\rightarrow \mathbb {R}\), there holds

    $$\begin{aligned} (\mathcal {L}_x^{\alpha ,\beta ,N}f,g)_{w_{\alpha ,\beta ,N}}- (f,\mathcal {L}_x^{\alpha ,\beta ,N}g)_{w_{\alpha ,\beta ,N}} =I_1(f,g)+N\big \lbrace I_2(f,g)+I_3(f,g)\big \rbrace , \end{aligned}$$
    (4.15)

    where

    $$\begin{aligned}{} & {} \begin{aligned} I_1(f,g)&=\int _{-1}^{1}\big \lbrace \big (\mathcal {L}_x^{\alpha ,\beta }f\big )(x)g(x)-f(x) \big (\mathcal {L}_x^{\alpha ,\beta }g\big )(x)\big \rbrace w_{\alpha ,\beta }(x)dx\\&=\frac{(\beta +1)_{\alpha +1}}{2^{\alpha +\beta +1}\alpha !}(1-x)^{\alpha +1}(1+x)^{\beta +1} \big [f'(x)g(x)-f(x)g'(x)\big ]\,\big \vert _{x=-1+}^1, \end{aligned}\nonumber \\{} & {} \begin{aligned} I_2(f,g)&=\frac{1}{(\alpha +2)!(\beta +1)_{\alpha +1}}\int _{-1}^{1}\big \lbrace \big (\mathcal {T}_x^{\alpha ,\beta }f\big )(x)g(x)-f(x) \big (\mathcal {T}_x^{\alpha ,\beta }g\big )(x)\big \rbrace w_{\alpha ,\beta }(x)dx\\&=\frac{(-1)^{\alpha +1}}{2^{\alpha + \beta +1}\alpha !\,(\alpha +2)!} \big [K^{\alpha ,\beta }(f,g)(x)-K^{\alpha ,\beta }(g,f)(x)\big ]\,\big \vert _{x=-1+}^1,\, \text {where}\\ K&^{\alpha ,\beta }(f,g)(x)=\sum _{j=0}^{\alpha +1}(-1)^j\bigg [\begin{aligned}&D_x^{\alpha +1-j}\big \lbrace (1+x)^{\alpha +\beta +2}D_x^{\alpha +2}[(x-1)^{\alpha +1}f(x)]\big \rbrace \\&\cdot D_x^j[(x-1)^{\alpha +1}g(x)]\end{aligned} \bigg ], \end{aligned}\nonumber \\ \end{aligned}$$
    (4.16)

    and

    $$\begin{aligned} I_3(f,g)= & {} \big [\big (\mathcal {L}_x^{\alpha ,\beta ,N}f\big )(1)g(1)-f(1) \big (\mathcal {L}_x^{\alpha ,\beta ,N}g\big )(1)\big ]\\ {}= & {} -2(\alpha +1)\big [f'(1)g(1)-f(1)g'(1)\big ]. \end{aligned}$$
  2. (b)

    With \(f(x)=R_{\lambda -\tau }^{\alpha ,\beta ,N}(x),\,\tau =(\alpha +\beta +1)/2,\,\lambda >0,\) and \(g(x)=R_k^{\alpha ,\beta ,N}(x),\,k\in \mathbb {N}_0\), the quantities in part (a) result in

    $$\begin{aligned}{} & {} I_1(f,g)=-\frac{(\beta +1)_{\alpha +1}}{2^\beta \,\alpha !}R_k^{\alpha ,\beta ,N}(-1) \lim _{x\rightarrow -1+} \big \lbrace (1+x)^{\beta +1} \big (R_{\lambda -\tau }^{\alpha ,\beta ,N}\big )'(x)\big \rbrace , \nonumber \\ \end{aligned}$$
    (4.17)
    $$\begin{aligned}{} & {} I_2(f,g)+ I_3(f,g)= -\frac{1}{2^{\alpha +\beta +1}\alpha !\,(\alpha +2)!} \lim _{x\rightarrow -1+} K^{\alpha ,\beta }\big (R_{\lambda -\tau }^{\alpha ,\beta ,N},R_k^{\alpha ,\beta ,N}\big )(x).\nonumber \\ \end{aligned}$$
    (4.18)

Proof

(a) The three terms \(I_j(f,g),\,j=1,2,3\), follow by definition (4.7) of the operator \(\mathcal {L}_x^{\alpha ,\beta ,N}\) and the inner product in (1.10). The representation of \(I_1(f,g)\) is obtained as in the proof of La. 4.2. Concerning \(I_2(f,g)\), we get, by definition (4.8) of \(\mathcal {T}_x^{\alpha ,\beta },\)

$$\begin{aligned} \begin{aligned} I_2&(f,g)=\frac{(-1)^{\alpha +1}}{2^{\alpha +\beta +1}\alpha !\,(\alpha +2)!}\cdot \\&\cdot \int _{-1}^{1}\bigg [\begin{aligned}&D_x^{\alpha +2}\big \lbrace (1+x)^{\alpha +\beta +2}D_x^{\alpha +2}[(x-1)^{\alpha +1}f(x)]\big \rbrace (x-1)^{\alpha +1}g(x)-\\&-(x-1)^{\alpha +1}f(x)D_x^{\alpha +2}\big \lbrace (1+x)^{\alpha +\beta +2}D_x^{\alpha +2}[(x-1)^{\alpha +1}g(x)]\big \rbrace \end{aligned} \bigg ]dx. \end{aligned} \end{aligned}$$

Analogously to the procedure used in the Bessel- and Laguerre-type cases, we apply an \((\alpha +2)\)–fold integration by parts to the integral and arrive at the representation (4.16). Finally, \(I_3(f,g)\) is an immediate consequence of (4.14).

(b) Inserting the specific functions \(f,g\) into the formula for \(I_1(f,g)\), the boundary value at \(x=1\) clearly vanishes. Moreover, it follows by definition (4.11) of \(R_{\lambda -\tau }^{\alpha ,\beta ,N}(x)\) and (4.2–3) that \(\lim _{x\rightarrow -1+} \big \lbrace (1+x)^{\beta +1} R_{\lambda -\tau }^{\alpha ,\beta ,N}(x)\big \rbrace =0\), while the limit in (4.17) is finite.

Concerning (4.18), we first evaluate \(I_2(f,g)\) by taking the limits \(x\rightarrow \pm 1\) in (4.16). At the right endpoint \(x=1\), it is not hard to see that only the summand for \(j=\alpha +1\) does not vanish, so that

$$\begin{aligned} \frac{(-1)^{\alpha +1}\big [K^{\alpha ,\beta }(f,g)(1)-K^{\alpha ,\beta }(g,f)(1)\big ]}{2^{\alpha +\beta +1}\alpha !\,(\alpha +2)!} =\frac{2^{\alpha +\beta +2}(\alpha +2)!\,(\alpha +1)!}{2^{\alpha +\beta +1}\alpha !\,(\alpha +2)!} \lbrace f'(1)g(1)-f(1)g'(1)\rbrace . \end{aligned}$$

But this result coincides with \(-I_3(f,g)\) according to part (a). Hence it is left to see that \(K^{\alpha ,\beta }\big (R_k^{\alpha ,\beta ,N},R_{\lambda -\tau }^{\alpha ,\beta ,N}\big )(x)\vert _{x=-1+}\) does not contribute to \(I_2(f,g)\) either. This, however, follows in view of the estimates for \(0\le j\le \alpha +1\) and some constants \(C>0\),

$$\begin{aligned} \begin{aligned}&\big \vert D_x^{\alpha +1-j}\big \lbrace (1+x)^{\alpha +\beta +2}D_x^{\alpha +2}[(x-1)^{\alpha +1}R_k^{\alpha ,\beta ,N}(x)]\big \rbrace \big \vert \le C(1+x)^{j+\beta +1},\\&\big \vert D_x^j[(x-1)^{\alpha +1}R_{\lambda -\tau }^{\alpha ,\beta ,N}(x) ]\big \vert \le C{\left\{ \begin{array}{ll} &{}(1+x)^{-j-\beta },\,j+\beta \ne 0\\ &{}\log ((1+x)^{-1}),\,j+\beta =0 \end{array}\right. }. \end{aligned} \end{aligned}$$

While the first estimate is obvious, the second one is due to the behaviour near \(x=-1\) of the hypergeometric functions for \(s=0,1\),

$$\begin{aligned} \begin{aligned} \big ( R&_{\lambda -\tau -s}^{\alpha +2s,\beta }\big )^{(j)}(x)\\&=\frac{(\tau +s-\lambda )_j(\tau +s+\lambda )_j}{(-2)^j\,(\alpha +2s+1)_j}{}_2F_1\bigg ( \begin{aligned} j+\tau&+s-\lambda ,\,j+\tau +s+\lambda \\ {}&j+\alpha +2s+1 \end{aligned} ;\frac{1-x}{2}\bigg ). \end{aligned} \end{aligned}$$

\(\square \)

In view of Eqs.(4.7) and (4.10), and for the particular choice of \(f,g\) in Thm. 4.1(b), the left-hand side of (4.15) equals

$$\begin{aligned} \big [\lambda _k^{\alpha ,\beta ,N}-e_\lambda ^{\alpha ,\beta ,N}\big ]\, \int _{-1}^{1}R_{\lambda -\tau }^{\alpha ,\beta ,N}(x)R_k^{\alpha ,\beta ,N}(x)w_{\alpha ,\beta }(x)dx. \end{aligned}$$

So after dividing the result of Thm. 4.1(b) by the factor in front of the last integral and taking the limit \(\lambda \rightarrow k+\tau \), one obtains the value of the normalization constant \(\big (R_k^{\alpha ,\beta ,N},R_k^{\alpha ,\beta ,N}\big )_{w_{\alpha ,\beta ,N}}\).

These calculations remain to be carried out. In particular, it would be interesting to investigate the case \(\alpha =\beta =0\). Here, Eq.(4.7) becomes the fourth-order Legendre-type differential equation, which is satisfied by the Legendre-type polynomials and their continuous analogues. Such a theory would generalize and shed new light on the sampling theory associated with the second-order Legendre equation and the (finite) continuous Legendre functions; see, in particular, the achievements of Butzer in joint work with Stens and Wehrens [5] as well as with Hinsen and Zayed [38].