1 Introduction

In [2], Atkinson and Hansen studied the numerical solution of the nonlinear Poisson equation \(-\Delta u = f(\cdot ,u)\) with zero boundary conditions on the unit ball \(\textbf{B}^d\) in \(\mathbb {R}^d\), and posed the question of finding an explicit orthogonal basis for the Sobolev inner product

$$\begin{aligned} \left\langle f,g\right\rangle = \frac{1}{\pi }\, \int _{\textbf{B}^d}\Delta [(1-\Vert x\Vert ^2) f(x)] \, \Delta [(1-\Vert x\Vert ^2) g(x)]\,\textrm{d}x, \end{aligned}$$
(1.1)

where \(\Delta \) denotes the usual Laplace operator.

Y. Xu answered that question in [11], where he constructed such a basis in terms of spherical harmonics and classical Jacobi polynomials with varying parameters. In addition, he studied the orthogonal expansion of a function in that basis and proved that it can be computed without using the derivatives of the function.

Sobolev orthogonal polynomials, that is, families of polynomials that are orthogonal with respect to an inner product involving both function values and derivative operators, such as partial derivatives, gradients, normal derivatives, and Laplacians, have been studied intensively in recent years. A recent survey on this topic can be found in [7]. A natural field of application of orthogonal polynomials is the approximation of functions, with multiple technological applications in the multivariate case. Recently, there has been renewed interest in approximants based on multivariate Sobolev orthogonal polynomials, showing that it is not necessary to use the derivatives of the function. Examples of this kind of study can be found in [8, 9, 13], among others.

In this paper, we modify the inner product (1.1) by introducing a term on the spherical boundary of the ball, which takes into account the information that may be observed from some modification of the nonlinear Poisson equation:

$$\begin{aligned} \left\langle f,g\right\rangle _{\Delta }&=\frac{\lambda }{\sigma _{d-1}}\int _{\textbf{S}^{d-1}}f(\xi )\,g(\xi )\,\textrm{d}\sigma (\xi )\\&\quad + \frac{1}{8\,\sigma _{d-1}} \int _{\textbf{B}^d}{\Delta [(1-\Vert x\Vert ^2) f(x)] \, \Delta [(1-\Vert x\Vert ^2) g(x)]\,\textrm{d}x}. \end{aligned}$$

This boundary term is modulated by a positive constant \(\lambda >0\), and the normalization constants are chosen to simplify expressions in the sequel.

In addition, following the ideas in [12], we also modify the inner product (1.1) by introducing a mass point at the origin:

$$\begin{aligned} \left\langle f,g\right\rangle _{\Delta }^*&=\frac{\lambda }{\sigma _{d-1}}f(0)\,g(0)\\ {}&\quad + \frac{1}{8\,\sigma _{d-1}} \int _{\textbf{B}^d}{\Delta [(1-\Vert x\Vert ^2) f(x)] \, \Delta [(1-\Vert x\Vert ^2) g(x)]\,\textrm{d}x}. \end{aligned}$$

Our main objective is to study the influence of the additional term on the orthogonal basis, as well as its impact on the Fourier coefficients and the approximation errors for a given function.

The work is structured in the following way. Section 2 is devoted to establishing the main tools and basic results that we will need for the rest of the paper, including a brief introduction to spherical harmonics, classical orthogonal polynomials on the unit ball of \(\mathbb {R}^d\), and Fourier coefficients, among other facts. Section 3 describes the first Sobolev inner product under consideration, deduces a Sobolev orthogonal basis for it, and proves that its elements satisfy a fourth-order partial differential equation, extending the well-known property of classical ball polynomials, which satisfy a second-order PDE. Section 4 is devoted to analysing the Sobolev Fourier orthogonal expansions and approximation, giving explicit bounds for the errors. Finally, in Sect. 5, we study the Sobolev inner product with a mass point at the origin, proving that this second Sobolev inner product and the associated basis can be related to the first one.

2 Preliminaries

In this section, we collect the basic background on the classical orthogonal polynomials on the unit ball and orthogonal expansions that will be used throughout this paper.

For \(x\in \mathbb {R}^d\), we denote by \(\Vert x\Vert \) the usual Euclidean norm of x. The unit ball and the unit sphere in \(\mathbb {R}^d\) are denoted by \(\textbf{B}^d = \left\{ x \in \mathbb {R}^d: \Vert x\Vert \leqslant 1 \right\} \) and \(\textbf{S}^{d-1}= \left\{ \xi \in \mathbb {R}^d: \Vert \xi \Vert = 1\right\} \), respectively.

For \(\mu >-1\), let \(W_{\mu }\) be the weight function on the unit ball defined as

$$\begin{aligned} W_\mu (x)=\left( 1-\Vert x \Vert ^2\right) ^{\mu }, \quad x\in \textbf{B}^d. \end{aligned}$$

This weight function can be used to define the inner product

$$\begin{aligned} \left\langle f,g\right\rangle _{\mu } =\displaystyle b_\mu \int _{\textbf{B}^d}{f(x)g(x)\,W_\mu (x)\,\textrm{d}x}, \quad f,g\in L^2(W_{\mu };\textbf{B}^d), \end{aligned}$$

where \(b_{\mu }\) is the normalization constant such that \(\left\langle 1,1\right\rangle _{\mu }=1\), given by

$$\begin{aligned} b_\mu =\left( \displaystyle \int _{\textbf{B}^d}{W_{\mu }(x)\,\textrm{d}x}\right) ^{-1}=\dfrac{\Gamma \left( \mu +1+\frac{d}{2}\right) }{\Gamma (\mu +1)\pi ^{d/2}}. \end{aligned}$$
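As a numerical sanity check (a sketch assuming `scipy` is available; the helper names `b_mu` and `b_mu_numeric` are ours), the closed form for \(b_\mu \) can be compared with direct quadrature of \(W_\mu \) in spherical-polar coordinates:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def b_mu(mu, d):
    # closed form: Gamma(mu + 1 + d/2) / (Gamma(mu + 1) * pi^(d/2))
    return gamma(mu + 1 + d / 2) / (gamma(mu + 1) * np.pi**(d / 2))

def b_mu_numeric(mu, d):
    # integrate W_mu over the ball in spherical-polar coordinates:
    # int_B (1 - |x|^2)^mu dx = sigma_{d-1} * int_0^1 r^{d-1} (1 - r^2)^mu dr
    sigma = 2 * np.pi**(d / 2) / gamma(d / 2)
    radial, _ = quad(lambda r: r**(d - 1) * (1 - r**2)**mu, 0, 1)
    return 1.0 / (sigma * radial)
```

For instance, for \(\mu =2\), \(d=2\), both routes give \(b_2=3/\pi \).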

This inner product, in turn, induces the norm \(\Vert \cdot \Vert _{\mu }\) defined by

$$\begin{aligned} \Vert f\Vert _{\mu }=\left( b_{\mu }\int _{\textbf{B}^d}f(x)^2\,W_{\mu }(x)\textrm{d}x\right) ^{1/2}, \quad f\in L^2(W_{\mu };\textbf{B}^d). \end{aligned}$$

For \(n\geqslant 0\), let us denote by \(\Pi _n^d\) the linear space of real polynomials in d variables of total degree less than or equal to n, and by \(\Pi ^d=\bigcup _{n\geqslant 0}\Pi _n^d\) the linear space of all real polynomials in d variables.

A polynomial \(P\in \Pi _n^d\) is said to be an orthogonal polynomial of (total) degree n if \(\left\langle P,Q\right\rangle _{\mu }=0\) for all \(Q\in \Pi _{n-1}^d\). For \(n\geqslant 0\), let \(\mathcal {V}_n^d(W_{\mu })\) denote the space of orthogonal polynomials of total degree n. Then, \(\dim \mathcal {V}_n^d(W_{\mu })=\left( {\begin{array}{c}n+d-1\\ n\end{array}}\right) :=r_n^d\).

For \(n\geqslant 0\), let \(\{P_{\nu }^n(x):\, 1\leqslant \nu \leqslant r_n^d\}\) be a basis of \(\mathcal {V}_n^d(W_{\mu })\). Notice that every element of \(\mathcal {V}_n^d(W_{\mu })\) is orthogonal to all polynomials of lower degree. If the elements of the basis are also orthogonal to each other, that is, \(\langle P_{\nu }^n,P_{\eta }^n\rangle _{\mu }=0\) whenever \(\nu \ne \eta \), we call the basis mutually orthogonal. If, in addition, \(\langle P_{\nu }^n,P_{\nu }^n\rangle _{\mu }=1\), we say that the basis is orthonormal.

2.1 Spherical Harmonics

Let \(\mathcal {H}_{n}^d\) denote the space of harmonic polynomials in d variables of degree n, that is, homogeneous polynomials of degree n satisfying the Laplace equation \(\Delta Y=0\), where \(\Delta =\frac{\partial ^2}{\partial x^2_1}+\cdots +\frac{\partial ^2}{\partial x^2_d}\) is the usual Laplace operator. It is well known that

$$\begin{aligned} a_n^d:=\dim \ \mathcal {H}_{n}^d= \left( {\begin{array}{c}n+d-1\\ n\end{array}}\right) -\left( {\begin{array}{c}n+d-3\\ n-2\end{array}}\right) . \end{aligned}$$
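This dimension formula is easy to check against familiar cases, e.g. \(a_n^3=2n+1\) and \(a_n^2=2\) for \(n\geqslant 1\) (a minimal sketch; the helper name `harm_dim` is ours):

```python
from math import comb

def harm_dim(n, d):
    # a_n^d = C(n+d-1, n) - C(n+d-3, n-2); the second term vanishes for n < 2
    return comb(n + d - 1, n) - (comb(n + d - 3, n - 2) if n >= 2 else 0)
```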

Spherical harmonics are the restrictions of harmonic polynomials to the unit sphere. If \(Y\in \mathcal {H}_n^d\), then in spherical-polar coordinates \(x=r\xi \), where \(r> 0\) and \(\xi \in \textbf{S}^{d-1}\), we get

$$\begin{aligned} Y(x)=r^n\,Y(\xi ), \end{aligned}$$

so that Y is uniquely determined by its restriction to the sphere.

We define the linear operator

$$\begin{aligned} x\cdot \nabla = \sum _{i=1}^{d}x_i\,\frac{\partial }{\partial x_i}, \end{aligned}$$

and, by Euler’s equation for homogeneous polynomials, we deduce that

$$\begin{aligned} x\cdot \nabla Y(x)= n\,Y(x), \quad \forall Y\in \mathcal {H}_n^d. \end{aligned}$$

The differential operators \(\Delta \) and \(x\cdot \nabla \) can be expressed in spherical-polar coordinates as [5]

$$\begin{aligned} \Delta&= \frac{\partial ^2}{\partial r^2}+\frac{d-1}{r}\frac{\partial }{\partial r}+\frac{1}{r^2}\Delta _0,\nonumber \\ x\cdot \nabla&= r\frac{\partial }{\partial r}, \end{aligned}$$
(2.1)

where \(\Delta _0\) is the spherical part of the Laplacian, called the Laplace–Beltrami operator. The operator \(\Delta _0\) has spherical harmonics as eigenfunctions. More precisely, it holds that [5]

$$\begin{aligned} \Delta _0\,Y(\xi )= -n\,(n+d-2)\,Y(\xi ), \quad \forall Y\in \mathcal {H}_n^d, \quad \xi \in \textbf{S}^{d-1}. \end{aligned}$$
(2.2)

We will also need the following family of differential operators, \(D_{i,j}\), defined by:

$$\begin{aligned} D_{i,j}= x_i\,\partial _j - x_j\,\partial _i, \quad 1\leqslant i < j \leqslant d. \end{aligned}$$

These are angular derivatives, since \(D_{i,j}=\partial _{\theta _{i,j}}\) in the polar coordinates of the \(x_i,x_j\)–plane, \((x_i,x_j)=r_{i,j}\,(\cos \theta _{i,j}, \sin \theta _{i,j})\). Furthermore, the angular derivatives \(D_{i,j}\) and the Laplace–Beltrami operator \(\Delta _0\) are related by

$$\begin{aligned} \Delta _0=\sum _{1\leqslant i < j \leqslant d} D_{i,j}^2. \end{aligned}$$

Spherical harmonics are orthogonal polynomials on \(\textbf{S}^{d-1}\) with respect to the inner product

$$\begin{aligned} \langle f, g\rangle _{\textbf{S}^{d-1}} = \frac{1}{\sigma _{d-1}}\int _{\textbf{S}^{d-1}}f(\xi )\,g(\xi )\,\textrm{d}\sigma (\xi ), \end{aligned}$$

where \(\textrm{d}\sigma \) denotes the surface measure and \(\sigma _{d-1}\) denotes the surface area

$$\begin{aligned} \sigma _{d-1}=\int _{\textbf{S}^{d-1}}\textrm{d}\sigma (\xi )= \frac{2\,\pi ^{d/2}}{\Gamma (d/2)}. \end{aligned}$$

2.2 Mutually Orthogonal Polynomials on the Unit Ball

A mutually orthogonal basis of \(\mathcal {V}_n^d(W_{\mu })\) can be given in terms of Jacobi polynomials and spherical harmonics.

For \(\alpha ,\beta >-1\), the Jacobi polynomial \(P^{(\alpha ,\beta )}_n(t)\) of degree n is defined as [10]

$$\begin{aligned} P_n^{(\alpha ,\beta )}(t)=\frac{1}{n!}\sum _{k=0}^n\left( {\begin{array}{c}n\\ k\end{array}}\right) (k+\alpha +1)_{n-k}(n+\alpha +\beta +1)_k \left( \frac{t-1}{2}\right) ^k, \end{aligned}$$

where, for \(a\in \mathbb {R}\), \(n\geqslant 0\), \((a)_0 =1\), and \((a)_n=a\,(a+1)\cdots (a+n-1)\) denotes the Pochhammer symbol. They are orthogonal with respect to the Jacobi weight function \(w_{\alpha ,\beta }(t)=(1-t)^{\alpha }(1+t)^{\beta }\) on the interval \([-1,1]\).
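Both the explicit hypergeometric-type representation and the orthogonality can be checked numerically (a sketch assuming `scipy`; the helper names `jacobi_series` and `inner` are ours):

```python
import numpy as np
from math import comb, factorial
from scipy.integrate import quad
from scipy.special import eval_jacobi, poch

def jacobi_series(n, a, b, t):
    # explicit representation:
    # (1/n!) sum_k C(n,k) (k+a+1)_{n-k} (n+a+b+1)_k ((t-1)/2)^k
    return sum(comb(n, k) * poch(k + a + 1, n - k) * poch(n + a + b + 1, k)
               * ((t - 1) / 2)**k for k in range(n + 1)) / factorial(n)

a, b = 1.5, 0.5
ts = np.linspace(-1.0, 1.0, 9)
# the series agrees with scipy's eval_jacobi
series_ok = np.allclose(jacobi_series(3, a, b, ts), eval_jacobi(3, a, b, ts))

def inner(m, n):
    # L^2(w_{a,b}) inner product of two Jacobi polynomials on [-1, 1]
    return quad(lambda t: eval_jacobi(m, a, b, t) * eval_jacobi(n, a, b, t)
                * (1 - t)**a * (1 + t)**b, -1, 1)[0]
```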

The polynomials defined in the following proposition are known in the literature as classical ball polynomials.

Proposition 2.1

[6] For \(n\geqslant 0\) and \(0\leqslant j \leqslant \frac{n}{2}\), let \(\{Y_{\nu }^{n-2j}(x):\,1\leqslant \nu \leqslant a_{n-2j}^d\}\) denote an orthonormal basis of \(\mathcal {H}_{n-2j}^d\). For \(\mu >-1\), define the polynomials

$$\begin{aligned} P_{j,\nu }^{n,\mu }\left( x\right) :=P_j^{\left( \mu ,n-2j+\frac{d-2}{2}\right) }\left( 2\, \Vert x \Vert ^2-1\right) \, Y_{\nu }^{n-2j}(x). \end{aligned}$$
(2.3)

Then, the set \(\{P_{j,\nu }^{n,\mu }: 0 \leqslant j \leqslant \frac{n}{2}, \, 1\leqslant \nu \leqslant a_{n-2j}^d\}\) constitutes a mutually orthogonal basis of \(\mathcal {V}_n^d(W_{\mu })\).

Moreover

$$\begin{aligned} \langle P_{j,\nu }^{n,\mu }, P_{k,\eta }^{m,\mu }\rangle _{\mu }\,=\,H_{j,n}^{\mu }\,\delta _{n,m}\,\delta _{j,k}\,\delta _{\nu ,\eta }, \end{aligned}$$

where

$$\begin{aligned} H_{j,n}^{\mu }=\frac{(\mu +1)_j\,\left( d/2 \right) _{n-j}\,(n-j+\mu +d/2)}{j!\,\left( \mu +d/2+1\right) _{n-j}\,(n+\mu +d/2)}. \end{aligned}$$
(2.4)

The square of the \(L^2(w_{\alpha ,\beta },[-1,1])\) norm of the Jacobi polynomial \(P_j^{(\alpha ,\beta )}(t)\), given by

$$\begin{aligned} h_j^{(\alpha ,\beta )}:= \int _{-1}^1 \left( P_j^{(\alpha ,\beta )}(t)\right) ^2w_{\alpha ,\beta }(t)\,\textrm{d}t, \end{aligned}$$

is related with \(H_{j,n}^{\mu }\) as follows:

$$\begin{aligned} H_{j,n}^{\mu }=\dfrac{\gamma _{\mu ,d}}{2^{n-2j}}\, h_j^{(\mu ,n-2j+\frac{d-2}{2})}, \end{aligned}$$
(2.5)

where \(\gamma _{\mu ,d}=\dfrac{b_{\mu }\,\sigma _{d-1}}{2^{\mu +\frac{d}{2}+1}}\).
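Relation (2.5) can be sanity-checked numerically by evaluating (2.4) directly and comparing it with \(h_j^{(\alpha ,\beta )}\) computed by quadrature (a sketch assuming `scipy`; the helper names are ours):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import eval_jacobi, gamma, poch

def H_direct(j, n, mu, d):
    # formula (2.4)
    return (poch(mu + 1, j) * poch(d / 2, n - j) * (n - j + mu + d / 2)
            / (factorial(j) * poch(mu + d / 2 + 1, n - j) * (n + mu + d / 2)))

def H_via_norm(j, n, mu, d):
    # formula (2.5): gamma_{mu,d} / 2^{n-2j} * h_j^{(mu, n-2j+(d-2)/2)}
    a, b = mu, n - 2 * j + (d - 2) / 2
    h, _ = quad(lambda t: eval_jacobi(j, a, b, t)**2
                * (1 - t)**a * (1 + t)**b, -1, 1)
    b_mu = gamma(mu + 1 + d / 2) / (gamma(mu + 1) * np.pi**(d / 2))
    sigma = 2 * np.pi**(d / 2) / gamma(d / 2)
    return b_mu * sigma / 2**(mu + d / 2 + 1) / 2**(n - 2 * j) * h
```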

It is known [6] that the orthogonal polynomials with respect to \(W_{\mu }\) are eigenfunctions of a second-order differential operator \(\mathcal {D}_\mu \). More precisely, we have

$$\begin{aligned} \mathcal {D}_\mu \,P = -(n+d)\,(n+2\mu )\,P, \quad \forall P\in \mathcal {V}_n^d(W_{\mu }), \end{aligned}$$

where

$$\begin{aligned} \mathcal {D}_\mu : = \Delta - \sum _{j=1}^d \, \frac{\partial }{\partial \,x_j} \,x_j\,\left( 2\,\mu + \sum _{i=1}^d \, x_i\,\frac{\partial }{\partial \,x_i}\right) . \end{aligned}$$
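This eigenvalue relation can be spot-checked symbolically in \(d=2\) (a `sympy` sketch; the helper name `D_mu` is ours). With the normalization \(W_\mu =(1-\Vert x\Vert ^2)^{\mu }\), both a radial and a harmonic element of \(\mathcal {V}_2^2(W_\mu )\) have eigenvalue \(-(n+d)(n+2\mu )\):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def D_mu(u, mu):
    # D_mu u = Laplacian(u) - sum_j d/dx_j [ x_j (2 mu u + (x . grad) u) ]
    xdot = x * sp.diff(u, x) + y * sp.diff(u, y)
    inner = 2 * mu * u + xdot
    return (sp.diff(u, x, 2) + sp.diff(u, y, 2)
            - sp.diff(x * inner, x) - sp.diff(y * inner, y))

mu = sp.Rational(3, 2)
r2 = x**2 + y**2
# degree-2 elements of V_2^2(W_mu): the radial one, P_1^{(mu,0)}(2 r^2 - 1),
# and the harmonic one x^2 - y^2; n = 2, d = 2
lam = -(2 + 2) * (2 + 2 * mu)
u_rad = sp.jacobi(1, mu, 0, 2 * r2 - 1)
radial_ok = sp.expand(D_mu(u_rad, mu) - lam * u_rad) == 0
harmonic_ok = sp.expand(D_mu(x**2 - y**2, mu) - lam * (x**2 - y**2)) == 0
```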

The following lemma will be useful in the sequel. For convenience, we define \(P_{j,\nu }^{n,\mu }(x)=0\) if \(j<0\).

Lemma 2.2

[9] Let \(\mu >-1\). Then

$$\begin{aligned} \Delta \,P_{j,\nu }^{n,\mu }(x)=\kappa _{n-j}^{\mu }\,P_{j-1,\nu }^{n-2,\mu +2}(x) \quad \text {and}\quad \Delta _0\,P_{j,\nu }^{n,\mu }(x)= \varrho _{n-2j}\,P_{j,\nu }^{n,\mu }(x), \end{aligned}$$

where

$$\begin{aligned} \kappa _{n}^{\mu }=4\,\left( n+\mu +\frac{d}{2}\right) \,\left( n+\frac{d-2}{2}\right) \quad \text {and} \quad \varrho _{n}= -n\,(n+d-2). \end{aligned}$$
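For \(d=2\), where the spherical harmonics are \(r^m\cos (m\theta )\) and \(r^m\sin (m\theta )\), the first identity of Lemma 2.2 reduces to a statement about polynomials in \(r\) that can be checked exactly (a sketch assuming `scipy`; the helper name `check_lemma22` is ours):

```python
import numpy as np
from scipy.special import jacobi

def check_lemma22(j, m, mu):
    # d = 2, Y(x) = r^m cos(m theta), n = 2j + m; the radial part of
    # P_{j,nu}^{n,mu} is g(r) = P_j^{(mu,m)}(2 r^2 - 1) r^m, and
    # Delta[g(r) cos(m theta)] = (g'' + g'/r - m^2 g / r^2) cos(m theta)
    t = np.poly1d([2.0, 0.0, -1.0])                 # t(r) = 2 r^2 - 1
    rm = np.poly1d([1.0] + [0.0] * m)               # r^m
    g = np.polyval(np.poly1d(jacobi(j, mu, m).coeffs), t) * rm
    n = 2 * j + m
    kappa = 4 * (n - j + mu + 1) * (n - j)          # kappa_{n-j}^{mu} for d = 2
    target = np.polyval(np.poly1d(jacobi(j - 1, mu + 2, m).coeffs), t) * rm
    r = np.linspace(0.2, 0.9, 8)
    lhs = g.deriv(2)(r) + g.deriv()(r) / r - m**2 * g(r) / r**2
    return np.allclose(lhs, kappa * target(r))
```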

2.3 Fourier Orthogonal Expansion and Approximation

With respect to the basis (2.3), the Fourier orthogonal expansion of \(f\in L^2(W_{\mu };\textbf{B}^d)\) is defined by

$$\begin{aligned} f(x)=\sum _{n=0}^{\infty }\sum _{j=0}^{\lfloor {\frac{n}{2}}\rfloor } \sum _{\nu =1}^{a_{n-2j}^d}\widehat{f}_{j,\nu }^{\;n,\mu }P_{j,\nu }^{n,\mu }(x)\quad \text {with} \quad \widehat{f }_{j,\nu }^{\;n,\mu }:=\dfrac{1}{H_{j,n}^{\mu }}\left\langle f,P_{j,\nu }^{n,\mu }\right\rangle _{\mu }. \end{aligned}$$

Since \(\Vert f\Vert _{\mu }\) is finite, the Parseval identity holds: for \(\mu >-1\)

$$\begin{aligned} \Vert f\Vert _{\mu }^2=\sum _{n=0}^{\infty }\sum _{j=0}^{\lfloor {\frac{n}{2}}\rfloor }\sum _{\nu =1}^{a_{n-2j}^d}\left| \widehat{ f}_{j,\nu }^{\;n,\mu } \right| ^2\,H_{j,n}^{\mu }. \end{aligned}$$

Let \(\text {proj}_n^{\mu }:L^2(W_{\mu };\textbf{B}^d)\rightarrow \mathcal {V}_n^d(W_{\mu })\) and \(S_n^{\mu }:L^2(W_{\mu };\textbf{B}^d)\rightarrow \Pi _n^d\) denote the projection operator and the partial sum operator, respectively. Then

$$\begin{aligned} \text {proj}_m^{\mu }f(x)=\sum _{j=0}^{\lfloor {\frac{m}{2}}\rfloor }\sum _{\nu =1}^{a_{m-2j}^d}\widehat{ f}_{j,\nu }^{\;m,\mu }P_{j,\nu }^{m,\mu }(x) \quad \text {and} \quad S_n^{\mu }f(x)=\sum _{m=0}^n\text {proj}_m^{\mu }f(x). \end{aligned}$$

By definition, \(S_n^{\mu }f=f\) if \(f\in \Pi _n^d\). Moreover, for \(f\in L^2(W_{\mu };\textbf{B}^d)\), we have \(\langle f-S_n^{\mu }f,Q\rangle _{\mu }=0\) for all \(Q\in \Pi _n^d\).

We consider the error, \(\mathcal {E}_n(f)_{\mu }\), of best approximation by polynomials in \(\Pi _n^d\) in the space \(L^2(W_{\mu };\textbf{B}^d)\), defined by

$$\begin{aligned} \mathcal {E}_n(f)_{\mu }=\inf _{p_n\in \Pi _n^d}\Vert f-p_n\Vert _{\mu }, \end{aligned}$$

and notice that the infimum is achieved by \(S^{\mu }_nf\).

We define the non-uniform Sobolev space. For \(\textbf{m}\in \mathbb {N}_0^d\), let \(|\textbf{m}|=m_1+\cdots +m_d\) and \(\partial ^{\textbf{m}}=\partial _1^{m_1}\cdots \partial _d^{m_d}\). For \(\mu >-1\) and \(s\geqslant 1\), we denote by \(\mathcal {W}^s_2(W_{\mu };\textbf{B}^d)\) the Sobolev space

$$\begin{aligned} \mathcal {W}^s_2(W_{\mu };\textbf{B}^d)=\left\{ f\in L^2(W_{\mu };\textbf{B}^d);\, \partial ^{\textbf{m}}f\in L^2(W_{\mu +|\textbf{m}|};\textbf{B}^d), \, |\textbf{m}|\leqslant s,\,\textbf{m}\in \mathbb {N}_0^d \right\} . \end{aligned}$$

We say that this Sobolev space is non-uniform, since each derivative of the function belongs to a different \(L^2\) space.

The following estimate was proved in [9]: for \(n\geqslant 2\,s\) and \(f\in \mathcal {W}_2^{2\,s}(W_{\mu };\textbf{B}^d)\),

$$\begin{aligned} \mathcal {E}_n(f)_\mu \leqslant \frac{c}{n^{2s}}\bigg [\mathcal {E}_{n-2s}(\Delta ^s f)_{\mu +2s}+\mathcal {E}_n(\Delta _0^s f)_{\mu }\bigg ], \end{aligned}$$
(2.6)

and for \(n\geqslant 2\,s+1\) and \(f\in \mathcal {W}_2^{2\,s+1}(W_{\mu };\textbf{B}^d)\)

$$\begin{aligned} \mathcal {E}_n(f)_\mu \leqslant \frac{c}{n^{2s+1}}\bigg [\sum _{i=1}^d\mathcal {E}_{n-2s-1}(\partial _i\,\Delta ^s f)_{\mu +2s+1}+\sum _{1\leqslant i < j \leqslant d}\mathcal {E}_n(D_{i,j}\,\Delta _0^s f)_{\mu }\bigg ]. \end{aligned}$$

Here and in the sequel, c is a generic constant independent of n and f but may depend on \(\mu \) and d, and its value may be different from one instance to the next. As pointed out in [9], each term involving \(\Delta \) and \(\Delta _0\) on the right-hand side of the above inequalities is necessary, since the first term deals with the radial component of f and the second one deals with the harmonic component of f defined on the ball.

3 Sobolev Orthogonal Polynomials

This section is devoted to the study of the orthogonal structure on the unit ball with respect to the Sobolev inner product

$$\begin{aligned} \begin{aligned} \left\langle f,g\right\rangle _{\Delta }&=\frac{\lambda }{\sigma _{d-1}}\int _{\textbf{S}^{d-1}}f(\xi )\,g(\xi )\,\textrm{d}\sigma (\xi )\\&\quad + \frac{1}{8\,\sigma _{d-1}} \int _{\textbf{B}^d}{\Delta [(1-\Vert x\Vert ^2) f(x)] \, \Delta [(1-\Vert x\Vert ^2) g(x)]\,\textrm{d}x}, \quad \lambda >0. \end{aligned} \end{aligned}$$
(3.1)

The normalization constants are chosen to simplify expressions in the sequel. Orthogonal polynomials with respect to inner products involving derivatives are called Sobolev orthogonal polynomials. Let us denote by \(\mathcal {V}_n^d(\Delta )\) the space of Sobolev orthogonal polynomials of degree n with respect to (3.1). We point out that when \(\lambda =0\), we recover the inner product studied in [11] up to a normalization constant.

We need the following lemma.

Lemma 3.1

Let \(\beta =n-2j+\frac{d-2}{2}\) and \(Y_{\nu }^{n-2j}\in \mathcal {H}_{n-2j}^d\). Then, for any polynomial q(s),

$$\begin{aligned} \Delta [(1-\Vert x\Vert ^2)\,q(\Vert x\Vert ^2)\,Y^{n-2j}_{\nu }(x)]= 4\,(\mathcal {J}_{\beta }q)(\Vert x\Vert ^2)\,Y_{\nu }^{n-2j}(x), \end{aligned}$$
(3.2)

where

$$\begin{aligned} (\mathcal {J}_{\beta }q)(s)=s\,(1-s)\,q''(s)+(\beta +1-(\beta +3)\,s)q'(s)-(\beta +1)\,q(s). \end{aligned}$$

Proof

Using spherical-polar coordinates, we can use (2.1) and (2.2) for the radial and spherical part of \(\Delta \), respectively. After a tedious calculation, we get that

$$\begin{aligned}&\Delta \bigg [(1-\Vert x\Vert ^2)\,q(\Vert x\Vert ^2)Y_{\nu }^{n-2j}(x)\bigg ] =\Delta \bigg [(1-r^2)\,q(r^2)\,r^{n-2j}\,Y_{\nu }^{n-2j}(\xi ) \bigg ]\\&\quad =4\,\bigg [r^2(1-r^2)q''(r^2) +(\beta +1-(\beta +3)\,r^2)q'(r^2) -(\beta +1)\,q(r^2) \bigg ]Y^{n-2j}_{\nu }(x). \end{aligned}$$

Substituting \(s=r^2\) gives the desired result. \(\square \)

Inspired by the explicit expression (2.3) for the basis of classical ball polynomials and Theorem 2.4 in [11], we use the univariate Jacobi polynomials and the spherical harmonics to construct the following multivariate polynomials defined on \(\textbf{B}^d\).

Definition 3.2

For \(n\geqslant 0\) and \(0\leqslant j \leqslant \frac{n}{2}\), let \(\{Y_{\nu }^{n-2j}(x):\,1\leqslant \nu \leqslant a_{n-2j}^d\}\) denote an orthonormal basis of \(\mathcal {H}_{n-2j}^d\). We define the polynomials

$$\begin{aligned} \begin{aligned}&Q_{0,\nu }^n(x):=Y_{\nu }^n(x), \\&Q_{j,\nu }^{n}(x):=(1-\Vert x\Vert ^2)\,P_{j-1}^{(2,n-2j+\frac{d-2}{2})}(2\Vert x\Vert ^2-1)\,Y_{\nu }^{n-2j}(x), \quad 1\leqslant j \leqslant \frac{n}{2}. \end{aligned} \end{aligned}$$

It turns out that these polynomials are eigenfunctions of a fourth-order linear partial differential operator.

Proposition 3.3

The polynomials \(Q_{j,\nu }^n\) satisfy

$$\begin{aligned} \Delta \big [(1-\Vert x\Vert ^2)\,Q_{j,\nu }^n(x) \big ]\,=\,c_{j,n}\,P_{j,\nu }^{n,0}, \quad n\geqslant 0, \end{aligned}$$
(3.3)

with

$$\begin{aligned} c_{j,n}=\left\{ \begin{array}{ll} -4\,\left( n+\frac{d}{2}\right) , &{}\quad j = 0,\\ 4\,j\,(j+1), &{}\quad 1\leqslant j \leqslant \frac{n}{2}, \end{array}\right. \end{aligned}$$

and

$$\begin{aligned} (1-\Vert x\Vert ^2)\,\Delta \,P_{j,\nu }^{n,0}(x)\,=\,d_{j,n}\,Q_{j,\nu }^{n}(x), \quad n\geqslant 0, \end{aligned}$$
(3.4)

with

$$\begin{aligned} d_{j,n}=\left\{ \begin{array}{ll} 0, &{}\quad j = 0,\\ 4\left( n-j+\frac{d}{2}\right) \left( n-j+\frac{d-2}{2}\right) , &{}\quad 1\leqslant j \leqslant \frac{n}{2}. \end{array}\right. \end{aligned}$$

Proof

For \(j=0\), by (3.2), we have

$$\begin{aligned} \Delta \big [(1-\Vert x\Vert ^2)\,Q_{0,\nu }^n(x) \big ]=\Delta \big [(1-\Vert x\Vert ^2)\,Y_{\nu }^n(x) \big ]=-4\,\left( n+\frac{d}{2}\right) \,Y^n_{\nu }(x). \end{aligned}$$

Now, we deal with \(1\leqslant j \leqslant \frac{n}{2}\). The Jacobi polynomials satisfy the following property ([10, p. 71]):

$$\begin{aligned} (1-s)\,P_{j-1}^{(2,\beta )}(2s-1)=\frac{1}{2j+\beta +1}\left[ (j+1)\,P_{j-1}^{(1,\beta )}(2s-1) -j\,P_j^{(1,\beta )}(2s-1) \right] . \end{aligned}$$
(3.5)

Furthermore, the Jacobi polynomials \(P_{j-1}^{(1,\beta )}(2s-1)\) satisfy the differential equation

$$\begin{aligned} s\,(1-s)\,y''+(\beta +1-(\beta +3)\,s)y'=-(j-1)\,(j+\beta +1)\,y. \end{aligned}$$

Using these two facts, we can easily deduce that

$$\begin{aligned}&(2j+\beta +1)\,\mathcal {J_{\beta }}\bigg [(1-s)\,P_{j-1}^{(2,\beta )}(2s-1) \bigg ]\\&\quad =(j+1)\,\mathcal {J}_{\beta }P_{j-1}^{(1,\beta )}(2s-1)-j\,\mathcal {J}_{\beta }P_j^{(1,\beta )}(2s-1)\\&\quad =(j+1)\bigg [-(j-1)\,(j+\beta +1)-(\beta +1) \bigg ]\,P_{j-1}^{(1,\beta )}(2s-1)\\&\qquad -j\bigg [-j(j+\beta +2)-(\beta +1) \bigg ]P_j^{(1,\beta )}(2s-1)\\&\quad =-j(j+1)\bigg [(j+\beta )\,P_{j-1}^{(1,\beta )}(2s-1)-(j+\beta +1)\,P_j^{(1,\beta )}(2s-1) \bigg ]. \end{aligned}$$

We need yet another formula for Jacobi polynomials ([1, p. 782, (22.7.18)])

$$\begin{aligned} (2j+\beta +1)\,P_{j}^{(0,\beta )}(2s-1)=(j+\beta +1)\,P_j^{(1,\beta )}(2s-1)-(j+\beta )\,P_{j-1}^{(1,\beta )}(2s-1), \end{aligned}$$

which implies immediately that

$$\begin{aligned} \mathcal {J_{\beta }}\bigg [(1-s)\,P_{j-1}^{(2,\beta )}(2s-1) \bigg ]=j\,(j+1)\,P_j^{(0,\beta )}(2s-1). \end{aligned}$$
(3.6)

Then, by Eqs. (3.2) and (3.6) with \(\beta =n-2j+\frac{d-2}{2}\),

$$\begin{aligned} \Delta \bigg [(1-\Vert x\Vert ^2)\,Q_{j,\nu }^n(x) \bigg ]=4\,j\,(j+1)\,P_{j,\nu }^{n,0}(x). \end{aligned}$$

This proves (3.3).

Using \(\Delta \,Y^n_{\nu }(x)=0\), we get

$$\begin{aligned} (1-\Vert x\Vert ^2)\,\Delta \,P_{0,\nu }^{n,0}\,=\,0\,=\,d_{0,n}\,Q_{0,\nu }^n(x). \end{aligned}$$

From Lemma 2.2 and Definition 3.2, we obtain

$$\begin{aligned} (1-\Vert x\Vert ^2)\,\Delta \,P_{j,\nu }^{n,0}=d_{j,n}\,(1-\Vert x\Vert ^2)\,P_{j-1,\nu }^{n-2,2}(x)=d_{j,n}\,Q_{j,\nu }^n(x), \end{aligned}$$

proving (3.4). \(\square \)
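The two univariate Jacobi identities at the heart of this proof, (3.5) and (3.6), can be verified with numpy's polynomial arithmetic (a sketch assuming `numpy`/`scipy`; the helper names `J` and `P` are ours):

```python
import numpy as np
from scipy.special import eval_jacobi, jacobi

def J(q, beta):
    # the operator J_beta from Lemma 3.1 acting on a numpy polynomial in s
    sp = np.poly1d([1.0, 0.0])
    return (sp * (1 - sp) * q.deriv(2)
            + np.poly1d([-(beta + 3.0), beta + 1.0]) * q.deriv()
            - (beta + 1.0) * q)

def P(j, a, b):
    # P_j^{(a,b)}(2s - 1) as a numpy polynomial in s
    return np.polyval(np.poly1d(jacobi(j, a, b).coeffs), np.poly1d([2.0, -1.0]))

j, beta = 3, 1.5
ss = np.linspace(0.0, 1.0, 9)
# contiguous relation (3.5)
lhs35 = (1 - ss) * eval_jacobi(j - 1, 2, beta, 2 * ss - 1)
rhs35 = ((j + 1) * eval_jacobi(j - 1, 1, beta, 2 * ss - 1)
         - j * eval_jacobi(j, 1, beta, 2 * ss - 1)) / (2 * j + beta + 1)
# key identity (3.6)
lhs36 = J(np.poly1d([-1.0, 1.0]) * P(j - 1, 2, beta), beta)(ss)
rhs36 = j * (j + 1) * P(j, 0, beta)(ss)
```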

Combining Eqs. (3.3) and (3.4), we get a partial differential equation for the Sobolev orthogonal polynomials.

Corollary 3.4

The polynomials \(Q_{j,\nu }^n\) satisfy

$$\begin{aligned} (1-\Vert x\Vert ^2)\,\Delta ^2\big [(1-\Vert x\Vert ^2)\,Q_{j,\nu }^n(x) \big ]=\varpi _{n,j}\,Q_{j,\nu }^n(x), \quad 0\leqslant j \leqslant \frac{n}{2}, \end{aligned}$$
(3.7)

where

$$\begin{aligned} \varpi _{n,j}=16\,j\,(j+1)\,\left( n-j+\frac{d}{2}\right) \,\left( n-j+\frac{d-2}{2}\right) . \end{aligned}$$
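Indeed, since (3.7) follows by applying (3.4) after (3.3), the eigenvalue \(\varpi _{n,j}\) is just the product \(c_{j,n}\,d_{j,n}\) of the constants of Proposition 3.3; a minimal script (helper names ours) confirms the bookkeeping:

```python
def c_const(j, n, d):
    # constant c_{j,n} from (3.3)
    return -4 * (n + d / 2) if j == 0 else 4 * j * (j + 1)

def d_const(j, n, d):
    # constant d_{j,n} from (3.4)
    return 0.0 if j == 0 else 4 * (n - j + d / 2) * (n - j + (d - 2) / 2)

def varpi(j, n, d):
    # eigenvalue in (3.7); equals c_const * d_const for all 0 <= j <= n/2
    return 16 * j * (j + 1) * (n - j + d / 2) * (n - j + (d - 2) / 2)
```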

The following relation follows readily from (2.3) and (3.5).

Proposition 3.5

The polynomials \(Q_{j,\nu }^n\) satisfy

$$\begin{aligned} \begin{aligned}&Q_{0,\nu }^n(x)=P_{0,\nu }^{n,1}(x), \\&Q_{j,\nu }^{n}(x)=\frac{1}{n+\frac{d}{2}}\left[ (j+1)\,P_{j-1,\nu }^{n-2,1}(x)-j\, P_{j,\nu }^{n,1}(x) \right] , \quad 1\leqslant j \leqslant \frac{n}{2}. \end{aligned} \end{aligned}$$
(3.8)

In the following proposition, we show that the polynomials in Definition 3.2 constitute a mutually orthogonal basis with respect to the inner product (3.1).

Proposition 3.6

For \(n\geqslant 0\), \(\{Q_{j,\nu }^{n}:\ 0 \leqslant j \leqslant \frac{n}{2},\ 1\leqslant \nu \leqslant a_{n-2j}^d\}\) constitutes a mutually orthogonal basis of \(\mathcal {V}_n^d(\Delta )\). Moreover

$$\begin{aligned} \left\langle Q_{j,\nu }^{n}, Q_{k,\eta }^{m} \right\rangle _{\Delta }\,=\, \widetilde{H}_{j,n}^{\Delta }\,\delta _{n,m}\,\delta _{j,k}\,\delta _{\nu ,\eta }, \end{aligned}$$

where

$$\begin{aligned} \widetilde{H}_{j,n}^{\Delta }\,=\,\left\{ \begin{array}{ll} \lambda +n+\frac{d}{2},&{}\quad j=0,\\ \dfrac{j^2\,(j+1)^2}{2^{n-2j+\frac{d}{2}}}\,h_{j}^{(0,n-2j+\frac{d-2}{2})},&{}\quad 1\leqslant j \leqslant \frac{n}{2}. \end{array}\right. \end{aligned}$$
(3.9)

Proof

If \(j=k=0\), then using the formula

$$\begin{aligned} \int _{\textbf{B}^d}f(x)\,\textrm{d}x=\int _0^1r^{d-1}\int _{\textbf{S}^{d-1}}f(r\,\xi )\,\textrm{d}\sigma (\xi )\,\textrm{d}r, \end{aligned}$$

and (3.2), we get

$$\begin{aligned} \langle Q_{0,\nu }^n,Q_{0,\eta }^m\rangle _{\Delta }=\,&\delta _{n,m}\,\delta _{\nu ,\eta } \left[ \lambda +(\beta +1)^2\int _0^1s^{\beta }ds\right] =\delta _{n,m}\,\delta _{\nu ,\eta }\left[ \lambda +\beta +1\right] , \end{aligned}$$

where \(\beta =n-2j+\frac{d-2}{2}\). If \(j=0\) and \(k\geqslant 1\), the boundary term vanishes, since the factor \((1-\Vert x\Vert ^2)\) in \(Q_{k,\eta }^m\) vanishes on \(\textbf{S}^{d-1}\), and

$$\begin{aligned} \langle Q_{0,\nu }^n,Q_{k,\eta }^m\rangle _{\Delta }=\,&-(\beta +1)\,k\,(k+1)\,\delta _{n,m-2k}\, \delta _{\nu ,\eta }\int _0^1P_k^{(0,\beta )}(2s-1)\,s^{\beta }\,\textrm{d}s=0, \end{aligned}$$

since \(P_k^{(0,\beta )}(2s-1)\) is orthogonal to constants with respect to the weight \(s^{\beta }\) on \([0,1]\).

For \(1\leqslant j,k \leqslant \frac{n}{2}\), applying Green’s identity

$$\begin{aligned} \int _{\textbf{B}^d}(u\Delta v-v\Delta u)\,\textrm{d}x=\int _{\textbf{S}^{d-1}}\left( \frac{\partial v}{\partial n}u-\frac{\partial u}{\partial n}\,v \right) \,\textrm{d}\sigma (\xi ) \end{aligned}$$
(3.10)

with \(u(x)=\Delta \big [(1-\Vert x\Vert ^2)\,Q_{j,\nu }^{n}(x)\big ]\) and \(v(x)=(1-\Vert x\Vert ^2)\,Q_{k,\eta }^{m}(x)\), whose boundary contributions vanish, since both \(v\) and \(\partial v/\partial n\) vanish on \(\textbf{S}^{d-1}\), we get

$$\begin{aligned} \langle Q_{j,\nu }^n,Q_{k,\eta }^m\rangle _{\Delta }=\,&\frac{\lambda }{\sigma _{d-1}} \int _{\textbf{S}^{d-1}}Q_{j,\nu }^n(\xi )\,Q_{k,\eta }^m(\xi )\,\textrm{d}\sigma (\xi )\\&+ \frac{1}{8\,\sigma _{d-1}} \int _{\textbf{B}^d}{(1-\Vert x\Vert ^2)\, Q_{k,\eta }^m(x) \, \Delta ^2\big [(1-\Vert x\Vert ^2) Q_{j,\nu }^n(x)\big ]\,\textrm{d}x}. \end{aligned}$$

Then, by (3.7) and Definition 3.2, we can write

$$\begin{aligned} \langle Q_{j,\nu }^n,Q_{k,\eta }^m\rangle _{\Delta }&=\frac{\varpi _{n,j}}{8\,\sigma _{d-1}} \int _{\textbf{B}^d}{P_{j-1,\nu }^{n-2,2}(x)\,P_{k-1,\eta }^{m-2,2}(x)\,W_2(x)\,\textrm{d}x}\\&=\frac{\varpi _{n,j}}{8\,\sigma _{d-1}\,b_2}H^2_{j-1,n-2}\,\delta _{n,m}\,\delta _{j,k}\,\delta _{\nu ,\eta }. \end{aligned}$$

Using (2.4), we get

$$\begin{aligned} \frac{H^2_{j-1,n-2}}{H^0_{j,n}}=\frac{1}{2}j\,(j+1)\frac{(\frac{d}{2}+1)\,(\frac{d}{2}+2)}{(n-j+\frac{d}{2}) \,(n-j+\frac{d-2}{2})}. \end{aligned}$$

Moreover, using (2.5) and the fact that \(b_2=\frac{1}{2}\left( \frac{d}{2}+1\right) \left( \frac{d}{2}+2\right) \,b_0\), we obtain

$$\begin{aligned} \langle Q_{j,\nu }^n,&Q_{k,\eta }^m\rangle _{\Delta }=\frac{j^2\,(j+1)^2}{2^{\beta +1}}h_j^{(0,\beta )}\, \delta _{n,m}\,\delta _{j,k}\,\delta _{\nu ,\eta }. \end{aligned}$$

\(\square \)
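The norm (3.9) can be cross-checked numerically against the intermediate expression \(\varpi _{n,j}\,H^2_{j-1,n-2}/(8\,\sigma _{d-1}\,b_2)\) appearing in the proof, with \(h_j\) computed by quadrature (a sketch assuming `scipy`; the helper names are ours):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import eval_jacobi, gamma, poch

def norm_39(j, n, d):
    # right-hand side of (3.9) for 1 <= j <= n/2, with h_j by quadrature
    b = n - 2 * j + (d - 2) / 2
    h, _ = quad(lambda t: eval_jacobi(j, 0, b, t)**2 * (1 + t)**b, -1, 1)
    return j**2 * (j + 1)**2 / 2**(n - 2 * j + d / 2) * h

def norm_proof(j, n, d):
    # varpi_{n,j} * H^{2}_{j-1,n-2} / (8 sigma_{d-1} b_2), as in the proof,
    # with H^{2}_{j-1,n-2} from (2.4) and b_2 from its closed form
    mu = 2
    H = (poch(mu + 1, j - 1) * poch(d / 2, n - j - 1) * (n - j - 1 + mu + d / 2)
         / (factorial(j - 1) * poch(mu + d / 2 + 1, n - j - 1)
            * (n - 2 + mu + d / 2)))
    varpi = 16 * j * (j + 1) * (n - j + d / 2) * (n - j + (d - 2) / 2)
    sigma = 2 * np.pi**(d / 2) / gamma(d / 2)
    b2 = gamma(mu + 1 + d / 2) / (gamma(mu + 1) * np.pi**(d / 2))
    return varpi * H / (8 * sigma * b2)
```

For instance, for \(d=2\), \(n=2\), \(j=1\), both routes give \(\widetilde{H}_{1,2}^{\Delta }=4/3\).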

Corollary 3.7

For \(n\geqslant 2\)

$$\begin{aligned} \mathcal {V}^d_n(\Delta )=\mathcal {H}^d_n\oplus (1-\Vert x\Vert ^2)\,\mathcal {V}^d_{n-2}(W_2). \end{aligned}$$

Proof

Using the basis (2.3) for \(\mathcal {V}^d_{n-2}(W_2)\), it follows that we actually have

$$\begin{aligned} Q_{j,\nu }^n(x)=(1-\Vert x\Vert ^2)\,P_{j-1,\nu }^{n-2,2}(x), \end{aligned}$$

for \(j\geqslant 1\), from which the stated result follows. \(\square \)

4 Sobolev Fourier Orthogonal Expansions and Approximation

Consider the Sobolev space

$$\begin{aligned} \texttt{H}^s(\textbf{B}^d)=\left\{ f\in C(\textbf{B}^d);\, \partial ^{\textbf{m}}f\in L^2(\textbf{B}^d), \, |\textbf{m}|\leqslant s,\,\textbf{m}\in \mathbb {N}_0^d \right\} , \end{aligned}$$

where \(L^2(\textbf{B}^d) = L^2(W_{0};\textbf{B}^d)\).

For \(f\in \texttt{H}^2(\textbf{B}^d)\), let us denote by \(\widehat{f}_{j,\nu }^{\;n,\Delta }\) the Fourier coefficients with respect to the basis of \(\mathcal {V}_n^d(\Delta )\) given in Definition 3.2, that is

$$\begin{aligned} \widehat{f}_{j,\nu }^{\;n,\Delta }=\frac{1}{\widetilde{H}_{j,n}^{\Delta }}\left\langle f, Q_{j,\nu }^{n} \right\rangle _{\Delta }, \end{aligned}$$

where \(\widetilde{H}_{j,n}^{\Delta }\) is given in (3.9).

Let \(\text {proj}_m^{\Delta }:\texttt{H}^2(\textbf{B}^d)\rightarrow \mathcal {V}_m^d(\Delta )\) and \(S_n^{\Delta }:\texttt{H}^2(\textbf{B}^d)\rightarrow \Pi _n^d\) denote the projection operator and partial sum operators

$$\begin{aligned} \text {proj}_m^{\Delta }f(x)=\sum _{j=0}^{\lfloor {\frac{m}{2}}\rfloor } \sum _{\nu =1}^{a_{m-2j}^d}\widehat{f}_{j,\nu }^{\;m,\Delta }\,Q_{j,\nu }^{m}(x) \quad \text {and} \quad S_n^{\Delta }f(x)=\sum _{m=0}^n\text {proj}_m^{\Delta }f(x). \end{aligned}$$

We denote by \(\Vert \cdot \Vert _{\Delta }\) the norm induced by the inner product (3.1), and by \(\mathcal {E}_n(f)_{\Delta }\) the error of best approximation in \(\texttt{H}^2(\textbf{B}^d)\) given by

$$\begin{aligned} \mathcal {E}_n(f)_{\Delta }=\Vert f-S_n^{\Delta }f\Vert _{\Delta }. \end{aligned}$$

It turns out that the orthogonal expansion can be computed without involving the derivatives of f.

Proposition 4.1

For \(j\geqslant 1\), let \(\beta _j=n-2j+\frac{d-2}{2}\). Then

$$\begin{aligned} \widehat{f}_{j,\nu }^{n,\Delta }&=\frac{2\,j\,(j+1)}{\sigma _{d-1}\widetilde{H}_{j,n}^{\Delta }} \Bigg [(\beta _j+j)\,(\beta _j+j+1)\int _{\textbf{B}^d}f(x)\, Q_{j,\nu }^{n}(x)\,\textrm{d}x\\&\quad -\frac{1}{2}\int _{\textbf{S}^{d-1}}f(\xi )\,Y_{\nu }^{n-2j}(\xi )\,\textrm{d}\sigma (\xi )\Bigg ]; \end{aligned}$$

furthermore, for \(j=0\)

$$\begin{aligned} \widehat{f}^{n,\Delta }_{0,\nu }=\frac{1}{\sigma _{d-1}}\int _{\textbf{S}^{d-1}}f(\xi )\,Y^n_{\nu }(\xi )\,\textrm{d}\sigma (\xi ). \end{aligned}$$

Proof

Applying Green’s identity (3.10) with \(v(x)=(1-\Vert x\Vert ^2)\,f(x)\) and \(u=\Delta \big [(1-\Vert x\Vert ^2)\,Q_{j,\nu }^n(x) \big ]=4\,j\,(j+1)\,P_{j,\nu }^{n,0}(x)\), \(j\geqslant 1\), shows

$$\begin{aligned} \widehat{f}_{j,\nu }^{n,\Delta }&=\frac{1}{\sigma _{d-1}\widetilde{H}_{j,n}^{\Delta }}\Bigg [ \lambda \,\int _{\textbf{S}^{d-1}}f(\xi )\,Q_{j,\nu }^n(\xi )\,\textrm{d}\sigma (\xi )\\&\quad +\frac{1}{8}\int _{\textbf{B}^d}\Delta \bigg [(1-\Vert x\Vert ^2)\,f(x) \bigg ]\,\Delta \bigg [(1-\Vert x\Vert ^2)\,Q_{j,\nu }^n(x) \bigg ]\,\textrm{d}x\Bigg ]\\&=\frac{1}{8\,\sigma _{d-1}\widetilde{H}_{j,n}^{\Delta }}\Bigg [\int _{\textbf{B}^d} (1-\Vert x\Vert ^2)\,f(x)\,\Delta ^2\bigg [(1-\Vert x\Vert ^2)\,Q_{j,\nu }^n(x) \bigg ]\,\textrm{d}x\\&\quad -8\,j\,(j+1)\int _{\textbf{S}^{d-1}}f(\xi )\,Y_{\nu }^{n-2j}(\xi )\,\textrm{d}\sigma (\xi )\Bigg ], \end{aligned}$$

where we have used (3.2) and \(P_j^{(0,\beta )}(1)=1\). The stated result for \(j\geqslant 1\) follows from (3.7). The proof of \(j=0\) is similar but easier, in which we need to use \(\Delta \big [(1-\Vert x\Vert ^2)\,Y^n_{\nu }(x) \big ]=-4\,(n+\frac{d}{2})\,Y^n_{\nu }(x)\) and \(\widetilde{H}^{\Delta }_{0,n}=\lambda +n+\frac{d}{2}\). \(\square \)

We point out that the linear operator \(\mathcal {D}\) defined by

$$\begin{aligned} \mathcal {D}[P(x)]= \Delta \big [(1-\Vert x\Vert ^2)\,P(x) \big ], \quad P\in \Pi ^d, \end{aligned}$$

is a bijection on \(\Pi ^d\). Indeed, by Proposition 3.3, \(\mathcal {D}[\,Q_{j,\nu }^n(x) ]\,=\,c_{j,n}\,P_{j,\nu }^{n,0}\) with \(c_{j,n}\ne 0\). Therefore, for each \(n\geqslant 0\), \(\mathcal {D}\) is a one-to-one correspondence between the classical basis for \(\mathcal {V}^d_n(W_0)\) and the Sobolev basis for \(\mathcal {V}^d_n(\Delta )\) defined in Definition 3.2.

The following results will be used to estimate the error of approximation of the Sobolev orthogonal expansion with respect to the basis in Definition 3.2.

Proposition 4.2

For \(f\in \texttt{H}^2(\textbf{B}^d)\) and \(m\geqslant 0\), we have

$$\begin{aligned} \mathcal {D}\,\text {proj}_m^{\Delta }f(x)\in \mathcal {V}_{m}^d(W_0). \end{aligned}$$

Furthermore, for \(m\geqslant 0\), we have

$$\begin{aligned} \Delta _0\,\text {proj}_m^{\Delta }f(x)\in \mathcal {V}_m^d(\Delta ). \end{aligned}$$

Proof

By the definition of \(\text {proj}_m^{\Delta }f(x)\) and (3.3), we have

$$\begin{aligned} \mathcal {D}\,\text {proj}_m^{\Delta }f(x)&=\sum _{j=0}^{\frac{m}{2}}\sum _{\nu =1}^{a_{m-2j}^d}\widehat{f}^{\ m,\Delta }_{j,\nu }\,\Delta \,\bigg [(1-\Vert x\Vert ^2)\,Q_{j,\nu }^{m}(x)\bigg ]\\&=\sum _{j=0}^{\frac{m}{2}}\sum _{\nu =1}^{a_{m-2j}^d}c_{j,m}\,\widehat{f}^{\ m,\Delta }_{j,\nu }\,P_{j,\nu }^{m,0}(x). \end{aligned}$$

Therefore, \(\mathcal {D}\,\text {proj}_m^{\Delta }f(x)\in \mathcal {V}_{m}^d(W_0)\).

The second part of the proposition is a direct consequence of identity (2.2). \(\square \)

We use the previous result to show that \(\mathcal {D}\) intertwines the Sobolev partial sum \(S_n^{\Delta }\) with the classical partial sum \(S_n^{0}\).

Proposition 4.3

For \(f\in \texttt{H}^2(\textbf{B}^d)\)

$$\begin{aligned} \mathcal {D}\,S_n^{\Delta } f\,=\,S_n^{0}(\mathcal {D} f) \quad \text {and} \quad \Delta _0S^{\Delta }_n f\,=\,S^{\Delta }_n(\Delta _0 f). \end{aligned}$$

Proof

By definition, \(f-S_n^{\Delta } f=\sum _{m=n+1}^{+\infty }\text {proj}_m^{\Delta }f\). From Proposition 4.2, we get that \(\langle \mathcal {D}\,[f-S_n^{\Delta } f], P\rangle _0=0\) for all \(P\in \Pi _{n}^d\). Consequently, \(S_{n}^0(\mathcal {D} f-\mathcal {D}\,S_n^{\Delta } f)=0\). Since \(S_{n}^0\) reproduces polynomials of degree at most \(n\), we have \(S_{n}^0(\mathcal {D}\,S_n^{\Delta } f)=\mathcal {D}\,S_n^{\Delta } f\), which implies that

$$\begin{aligned} 0=S_{n}^0(\mathcal {D} f-\mathcal {D}\,S_n^{\Delta } f)=S_{n}^0(\mathcal {D} f)-\mathcal {D}\,S_n^{\Delta } f, \end{aligned}$$

and the commutation relation is proved.

The second part can be established in a similar way taking into account that \(\Delta _0\) maps \(\mathcal {H}_n^d\) to itself. \(\square \)

The relation in the proposition above passes down to the Fourier coefficients.

Proposition 4.4

For \(f\in \texttt{H}^2(\textbf{B}^d)\)

$$\begin{aligned} \widehat{\mathcal {D} f\,}^{n,0}_{0,\nu }&=\,-4\,\left( n+\frac{d}{2} \right) \,\widehat{f\,}^{n,\Delta }_{0,\nu },\\ \widehat{\mathcal {D} f\,}^{n,0}_{j,\nu }&=\,4\,j\,(j+1)\,\widehat{f\,}^{n,\Delta }_{j,\nu }, \quad 1\leqslant j \leqslant \frac{n}{2}, \end{aligned}$$

and

$$\begin{aligned} \widehat{\Delta _0 f\,}^{n,\Delta }_{j,\nu }\,=\,-(n-2j)\,(n-2j+d-2)\,\widehat{f\,}^{n,\Delta }_{j,\nu }, \quad 0\leqslant j \leqslant \frac{n}{2}. \end{aligned}$$

Proof

Using the identity \(\text {proj}^{\Delta }_{n}f=S_n^{\Delta } f-S_{n-1}^{\Delta } f\) and Proposition 4.3, we obtain \(\mathcal {D}\,\text {proj}^{\Delta }_nf=\text {proj}^0_{n}(\mathcal {D}\,f)\). Then, the first two identities follow from (3.3). The last one is a direct consequence of (2.2). \(\square \)

Theorem 4.5

For \(f\in \texttt{H}^2(\textbf{B}^d)\)

$$\begin{aligned} \mathcal {E}_n(f)_{\Delta } =c\,\mathcal {E}_{n}(\mathcal {D} f)_{0}, \quad n\geqslant 0. \end{aligned}$$

Proof

The Parseval identity reads

$$\begin{aligned} \mathcal {E}_n(f)_{\Delta }^2=\Vert f-S^{\Delta }_nf\Vert ^2_{\Delta }=\sum _{m=n+1}^{\infty } \sum _{j=0}^{\lfloor {\frac{m}{2}}\rfloor }\sum _{\nu }\left| \widehat{f\,}^{m,\Delta }_{j,\nu }\right| ^2\widetilde{H}_{j,m}^{\Delta }=\Sigma _1+\Sigma _2, \end{aligned}$$

where we split the sum as

$$\begin{aligned} \Sigma _1&=\sum _{m=n+1}^{\infty }\sum _{j=1}^{\lfloor {\frac{m}{2}}\rfloor }\sum _{\nu }\left| \widehat{f\,}^{m,\Delta }_{j,\nu }\right| ^2\widetilde{H}_{j,m}^{\Delta },\\ \Sigma _2&=\sum _{m=n+1}^{\infty }\sum _{\nu }\left| \widehat{f\,}^{m,\Delta }_{0,\nu }\right| ^2\widetilde{H}_{0,m}^{\Delta }. \end{aligned}$$

We estimate \(\Sigma _1\) first. Using Proposition 4.4, we get

$$\begin{aligned} \left| \widehat{f\,}^{m,\Delta }_{j,\nu }\right| ^2=\frac{1}{16\,j^2\,(j+1)^2}\,\left| \widehat{\mathcal {D}\,f\,}^{m,0}_{j,\nu }\right| ^2. \end{aligned}$$

Furthermore

$$\begin{aligned} \frac{\widetilde{H}^{\Delta }_{j,m}}{H^{0}_{j,m}}=\frac{\dfrac{j^2\,(j+1)^2}{2^{m-2j+\frac{d}{2}}}\,h_{j}^{(0,m-2j+\frac{d-2}{2})}}{\dfrac{b_{0}\,\sigma _{d-1}}{2^{m-2j+\frac{d}{2}+1}}\, h_j^{(0,m-2j+\frac{d-2}{2})}}=\frac{2\,j^2\,(j+1)^2}{b_{0}\,\sigma _{d-1}}. \end{aligned}$$

It follows that:

$$\begin{aligned} \Sigma _1 = \frac{1}{8\,d}\sum _{m=n+1}^{\infty }\sum _{j=1}^{\lfloor {\frac{m}{2}}\rfloor }\sum _{\nu } \left| \widehat{\mathcal {D}\,f\,}^{m,0}_{j,\nu } \right| ^2\,H^{0}_{j,m}. \end{aligned}$$
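The cancellation of the \(j\)-dependent factors leading to the constant \(\frac{1}{8\,d}\) can be checked mechanically; a small sketch (ours, not the paper's), using \(b_0\,\sigma _{d-1}=d\):

```python
from fractions import Fraction
# Sketch (not from the paper): combining the two previous displays,
#   (1 / (16 j^2 (j+1)^2)) * (2 j^2 (j+1)^2 / (b0 sigma_{d-1})) = 1 / (8 d),
# with b0 * sigma_{d-1} = d, independently of j.
for j in range(1, 10):
    for d in range(2, 7):
        factor = (Fraction(1, 16 * j**2 * (j + 1)**2)
                  * Fraction(2 * j**2 * (j + 1)**2, d))
        assert factor == Fraction(1, 8 * d)
print("Sigma_1 constant is 1/(8 d)")
```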

Next, we estimate \(\Sigma _2\). Using Proposition 4.4 again, we obtain

$$\begin{aligned} \left| \widehat{f\,}^{m,\Delta }_{0,\nu }\right| ^2=\frac{1}{16\,(m+\frac{d}{2})^{2}}\left| \widehat{\mathcal {D}\,f\,}^{m,0}_{0,\nu }\right| ^2. \end{aligned}$$

Moreover

$$\begin{aligned} \frac{\widetilde{H}^{\Delta }_{0,m}}{H^{0}_{0,m}}=\frac{(m+\frac{d}{2})\,(m+\frac{d}{2}+\lambda )}{\frac{d}{2}}. \end{aligned}$$

It follows that:

$$\begin{aligned} \Sigma _2=c\,\sum _{m=n+1}^{\infty }\sum _{\nu } \left| \widehat{\mathcal {D}\,f\,}^{m,0}_{0,\nu } \right| ^2\,H^{0}_{0,m}. \end{aligned}$$

Putting these estimates together completes the proof of the theorem. \(\square \)

The main result of this section is stated in the following theorem.

Theorem 4.6

For \(f\in \texttt{H}^2(\textbf{B}^d)\) and \(n\geqslant 1\)

$$\begin{aligned} \bigg \Vert \mathcal {D}\,f-\mathcal {D}\,S_n^{\Delta }f\bigg \Vert _0=\mathcal {E}_{n}(\mathcal {D} f)_{0}, \end{aligned}$$
(4.1)
$$\begin{aligned} \bigg \Vert \partial _i\bigg [(1-\Vert x\Vert ^2)\left( f-S_n^{\Delta }f\right) \bigg ]\bigg \Vert _0&\leqslant \frac{c}{n}\, \mathcal {E}_n(\mathcal {D}f)_0, \quad 1\leqslant i \leqslant d,\\ \bigg \Vert (1-\Vert x\Vert ^2)\left( f-S_n^{\Delta }f\right) \bigg \Vert _0&\leqslant \frac{c}{n^2}\, \mathcal {E}_n(\mathcal {D}f)_0. \end{aligned}$$
(4.2)

Proof

By Proposition 4.3

$$\begin{aligned} \Vert \mathcal {D}\,f-\mathcal {D}\,S_n^{\Delta }f\Vert _0=\Vert \mathcal {D}\,f-S_n^0(\mathcal {D}\,f)\Vert _0 =\mathcal {E}_{n}(\mathcal {D}\,f)_0, \end{aligned}$$

which proves (4.1).

Now, we deal with (4.2). We use the well-known duality argument, the so-called Aubin–Nitsche technique [4]. We use the characterization

$$\begin{aligned} \bigg \Vert (1-\Vert x\Vert ^2)\left( f-S_n^{\Delta }f\right) \bigg \Vert _0 = \sup _{\Vert g\Vert _0\ne 0} \frac{|\langle g, (1-\Vert x\Vert ^2)\left( f-S_n^{\Delta }f\right) \rangle _0 |}{\Vert g\Vert _0}. \end{aligned}$$
(4.3)

We introduce the following auxiliary boundary-value problem:

$$\begin{aligned} \left\{ \begin{array}{ll} \Delta ^2\,\varphi _g\,=\, g, &{}\quad \text {in } \textbf{B}^d,\\ \Delta \,\varphi _g \,=\,0, &{}\quad \text {on } \textbf{S}^{d-1},\\ \varphi _g\,=\,0, &{}\quad \text {on } \textbf{S}^{d-1}. \end{array} \right. \end{aligned}$$
(4.4)

Observe that, by Green’s identity, for \(h\in \texttt{H}^2(\textbf{B}^d)\), we have

$$\begin{aligned} \begin{aligned} \frac{1}{8\,\sigma _{d-1}}\int _{\textbf{B}^d}\Delta \bigg [(1-\Vert x\Vert ^2)\,h \bigg ]\,\Delta \,\varphi _g\,\textrm{d}x&=\frac{1}{8\,\sigma _{d-1}}\int _{\textbf{B}^d}(1-\Vert x\Vert ^2)\,h \,\Delta ^2\,\varphi _g\,\textrm{d}x\\&=\frac{1}{8\,\sigma _{d-1}}\int _{\textbf{B}^d} (1-\Vert x\Vert ^2)\,h \,g(x)\,\textrm{d}x\\ {}&=\frac{1}{8\,d}\langle g, (1-\Vert x\Vert ^2)\,h \, \rangle _0, \end{aligned} \end{aligned}$$
(4.5)

where we have used \(b_0=d/\sigma _{d-1}\). Moreover, since \(\varphi _g=0\) on \(\textbf{S}^{d-1}\), there is a function \(\widetilde{\varphi }_g\) such that \(\varphi _g = (1-\Vert x\Vert ^2)\,\widetilde{\varphi }_g\). If \(g=0\), then \(\Vert \widetilde{\varphi }_g\Vert ^2_{\Delta }=\langle \widetilde{\varphi }_g, \widetilde{\varphi }_g\rangle _{\Delta }=0\), which implies that \(\widetilde{\varphi }_g\equiv 0\). This shows that the homogeneous problem (4.4) has only the trivial solution \(\varphi _g=0\) and, by the linearity of (4.4), the solution of the non-homogeneous problem with \(g\in L^2(\textbf{B}^d)\) is unique.
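The double integration by parts behind (4.5) has an elementary one-dimensional analogue that can be checked numerically. A sketch (ours, not from the paper): with \(u=(1-x^2)h\) vanishing at \(x=\pm 1\) and a test function \(\varphi \) with \(\varphi (\pm 1)=\varphi ''(\pm 1)=0\), two integrations by parts give \(\int _{-1}^{1}u''\varphi ''\,\mathrm{d}x=\int _{-1}^{1}u\,\varphi ''''\,\mathrm{d}x\). Here we take \(h(x)=x\) and \(\varphi (x)=\sin (\pi x)\):

```python
import math
# Sketch (not from the paper): 1D analogue of the integration by parts in (4.5).
# u = x - x^3 vanishes at x = ±1, so u'' = -6x; phi = sin(pi x) satisfies
# phi(±1) = phi''(±1) = 0, with phi'' = -pi^2 phi and phi'''' = pi^4 phi.
# Both integrals below equal 12*pi analytically.

def simpson(f, a, b, n=2000):          # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

lhs = simpson(lambda x: (-6 * x) * (-math.pi**2 * math.sin(math.pi * x)), -1, 1)
rhs = simpson(lambda x: (x - x**3) * (math.pi**4 * math.sin(math.pi * x)), -1, 1)
print(abs(lhs - rhs) < 1e-6, abs(lhs - 12 * math.pi) < 1e-6)  # True True
```

On the ball, the boundary terms are killed in exactly the same way by \(\varphi _g=0\) and \(\Delta \varphi _g=0\) on \(\textbf{S}^{d-1}\).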

Using (4.5), we get

$$\begin{aligned} \left\langle f-S^{\Delta }_nf, \widetilde{\varphi }_g\right\rangle _{\Delta }=\frac{1}{8\,d}\left\langle g, (1-\Vert x\Vert ^2)\,(f-S^{\Delta }_nf) \,\right\rangle _0. \end{aligned}$$

Since \(S^{\Delta }_n\) reproduces polynomials of degree at most \(n\), it follows that:

$$\begin{aligned} \langle f-S_n^{\Delta }f,\, S^{\Delta }_{n}\widetilde{\varphi }_g\rangle _{\Delta }=0. \end{aligned}$$

Consequently

$$\begin{aligned} |\langle g, (1-\Vert x\Vert ^2)\left( f-S_n^{\Delta }f\right) \rangle _0 |&=8\,d\,\left| \langle f-S_n^{\Delta }f,\, \widetilde{\varphi }_g- S^{\Delta }_{n}\widetilde{\varphi }_g\rangle _{\Delta }\right| \\ &\leqslant 8\,d\,\Vert f- S^{\Delta }_{n}f\Vert _{\Delta }\Vert \widetilde{\varphi }_g- S^{\Delta }_{n}\widetilde{\varphi }_g\Vert _{\Delta }. \end{aligned}$$

Therefore, by (4.3), we have

$$\begin{aligned} \bigg \Vert (1-\Vert x\Vert ^2)\left( f-S_n^{\Delta }f\right) \bigg \Vert _0 \leqslant 8\,d\,\Vert f- S^{\Delta }_{n}f\Vert _{\Delta }\left( \sup _{\Vert g\Vert _0\ne 0} \frac{\Vert \widetilde{\varphi }_g- S^{\Delta }_{n}\widetilde{\varphi }_g\Vert _{\Delta }}{\Vert g\Vert _0}\right) . \end{aligned}$$

Moreover, by Theorem 4.5 and (2.6)

$$\begin{aligned} \begin{aligned} \Vert \widetilde{\varphi }_g- S^{\Delta }_{n}\widetilde{\varphi }_g\Vert _{\Delta }&=c\,\mathcal {E}_n(\mathcal {D}\widetilde{\varphi }_g)_0 = c\,\mathcal {E}_n(\Delta \,\varphi _g)_0\\ {}&\leqslant \frac{c}{n^2}\bigg [\mathcal {E}_{n-2}(\Delta ^2\,\varphi _g)_2+\mathcal {E}_n(\Delta _0\,\Delta \,\varphi _g)_0 \bigg ]. \end{aligned} \end{aligned}$$
(4.6)

Let us bound the term \(\mathcal {E}_{n}(\Delta _0\Delta \,\varphi _g)_0\). Since \(P_{0,\nu }^{n,0}\) is harmonic and \(P_{\frac{n}{2},1}^{n,0}\) is radial, we have \(\Delta _0\,\Delta P_{0,\nu }^{n,0}=0\) and \(\Delta _0\,\Delta P_{\frac{n}{2},1}^{n,0}=0\). This means that

$$\begin{aligned} \mathcal {E}_{n}(\Delta _0\Delta \,\varphi _g)_0^2=\sum _{m=n+1}^{\infty } \sum _{j=1}^{\lfloor {\frac{m-2}{2}}\rfloor } \sum _{\nu =1}^{a^d_{m-2j}}\left| \widehat{\Delta _0\Delta \,\varphi _g}^{m,0}_{j,\nu } \right| ^2 H^0_{j,m}. \end{aligned}$$

We need the following identities (see (3.5) in [9]):

$$\begin{aligned} \widehat{\Delta _0\Delta \,\varphi _g}^{m,0}_{j,\nu }=\lambda _{m-2j}\,\widehat{\Delta \,\varphi _g}^{m,0}_{j,\nu } \quad \text {and} \quad \widehat{\Delta ^2\,\varphi _g}^{m-2,2}_{j-1,\nu }=\kappa ^0_{m-j}\,\widehat{\Delta \,\varphi _g}^{m,0}_{j,\nu }, \end{aligned}$$

for \(0\leqslant j \leqslant \frac{m}{2}\) in the first identity and \(1\leqslant j \leqslant \frac{m}{2}\) in the second identity, where \(\lambda _{m-2j}\) and \(\kappa ^0_{m-j}\) are defined in Lemma 2.2. Using these identities, we get

$$\begin{aligned} \mathcal {E}_{n}(\Delta _0\Delta \,\varphi _g)_0^2=\sum _{m=n+1}^{\infty } \sum _{j=1}^{\lfloor {\frac{m-2}{2}}\rfloor } \sum _{\nu =1}^{a^d_{m-2j}} \frac{|\lambda _{m-2j}|^2}{|\kappa ^0_{m-j}|^2}\left| \widehat{\Delta ^2\,\varphi _g}^{m-2,2}_{j-1,\nu } \right| ^2 H^0_{j,m}. \end{aligned}$$

Moreover

$$\begin{aligned} \frac{H^0_{j,m}}{H^2_{j-1,m-2}}=\frac{2\,(m-j+\frac{d-2}{2})\,(m-j+\frac{d+2}{2})}{(\frac{d+2}{2})\,j\,(j+1)}. \end{aligned}$$

Therefore, for \(1\leqslant j \leqslant \frac{m-2}{2}\)

$$\begin{aligned} \frac{|\lambda _{m-2j}|^2}{|\kappa ^0_{m-j}|^2}\frac{H^0_{j,m}}{H^2_{j-1,m-2}}\leqslant c\,\frac{m+d}{m+\frac{d-2}{2}}. \end{aligned}$$

Consequently

$$\begin{aligned} \begin{aligned} \mathcal {E}_{n}(\Delta _0\Delta \,\varphi _g)_0^2&\leqslant c \sum _{m=n+1}^{\infty }\frac{m+d}{m+\frac{d-2}{2}}\sum _{j=1}^{\lfloor {\frac{m-2}{2}}\rfloor } \sum _{\nu =1}^{a^d_{m-2j}}\left| \widehat{\Delta ^2\varphi _g}^{m-2,2}_{j-1,\nu } \right| ^2H^2_{j-1,m-2}\\ {}&\leqslant c\,\mathcal {E}_{n-2}(\Delta ^2\varphi _g)_2^2. \end{aligned} \end{aligned}$$
(4.7)

Putting together (4.6) and (4.7), we get

$$\begin{aligned} \Vert \widetilde{\varphi }_g- S^{\Delta }_{n}\widetilde{\varphi }_g\Vert _{\Delta }\leqslant \frac{c}{n^2} \mathcal {E}_{n-2}(\Delta ^2\varphi _g)_2\leqslant \frac{c}{n^2} \Vert \Delta ^2\varphi _g\Vert _2\leqslant \frac{c}{n^2}\Vert \Delta ^2 \varphi _g\Vert _0 =\frac{c}{n^2} \Vert g\Vert _0, \end{aligned}$$

where we have used (4.4) and the fact that \((1-\Vert x\Vert ^2)^2\leqslant 1\) on \(\textbf{B}^d\). This implies that

$$\begin{aligned} \sup _{\Vert g\Vert _0\ne 0} \frac{\Vert \widetilde{\varphi }_g- S^{\Delta }_{n}\widetilde{\varphi }_g\Vert _{\Delta }}{\Vert g\Vert _0}\leqslant \frac{c}{n^2}, \end{aligned}$$

and, therefore, by Theorem 4.5 again, we have

$$\begin{aligned} \bigg \Vert (1-\Vert x\Vert ^2)\left( f-S_n^{\Delta }f\right) \bigg \Vert _0 \leqslant \frac{c}{n^2} \mathcal {E}_n(\mathcal {D}f)_0, \end{aligned}$$

which proves (4.2).

The intermediate case follows from the multivariate Landau–Kolmogorov inequality [3]: for \(i=1,2,\ldots ,d\)

$$\begin{aligned} \bigg \Vert \partial _i\bigg [(1-\Vert x\Vert ^2)\left( f-S_n^{\Delta }f\right) \bigg ]\bigg \Vert _0&\leqslant c\,\bigg \Vert (1-\Vert x\Vert ^2)\left( f-S_n^{\Delta }f\right) \bigg \Vert _0 ^{1/2}\,\bigg \Vert \mathcal {D}\,f-\mathcal {D}\,S_n^{\Delta }f\bigg \Vert _0^{1/2}. \end{aligned}$$

\(\square \)
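The one-dimensional model of the Landau–Kolmogorov inequality used above, \(\Vert u'\Vert \leqslant \Vert u\Vert ^{1/2}\Vert u''\Vert ^{1/2}\) in \(L^2(\mathbb {R})\), follows from \(\int u'^2=-\int u\,u''\) and the Cauchy–Schwarz inequality. A numerical sketch (ours, not from [3]), for a Gaussian with the line truncated to \([-8,8]\) (the tails are negligible):

```python
import math
# Sketch (not from [3]): check ||u'|| <= ||u||^{1/2} ||u''||^{1/2} in L^2(R)
# for u(x) = exp(-x^2), with hand-computed derivatives.

def simpson(f, a, b, n=4000):          # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

u   = lambda x: math.exp(-x * x)
up  = lambda x: -2 * x * math.exp(-x * x)            # u'
upp = lambda x: (4 * x * x - 2) * math.exp(-x * x)   # u''

norm = lambda f: math.sqrt(simpson(lambda x: f(x)**2, -8, 8))
lhs, bound = norm(up), math.sqrt(norm(u) * norm(upp))
print(lhs <= bound)  # True; here lhs/bound = 3^{-1/4}
```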

Analogous results have not yet been proved for the inner product with a mass point at the origin studied in the next section; they will be analysed in future work.

5 A Sobolev Inner Product with a Mass Point at the Origin

In this section, we consider a Sobolev inner product obtained from \(\langle \cdot , \cdot \rangle _{\Delta }\) by replacing the integral over \(\textbf{S}^{d-1}\) with a mass point at the origin. That is, this new inner product is given by

$$\begin{aligned} \begin{aligned} \left\langle f,g\right\rangle _{\Delta }^*&=\frac{\lambda ^*}{\sigma _{d-1}}f(0)\,g(0)\\ {}&\quad + \frac{1}{8\,\sigma _{d-1}} \int _{\textbf{B}^d}{\Delta [(1-\Vert x\Vert ^2) f(x)] \, \Delta [(1-\Vert x\Vert ^2) g(x)]\,\textrm{d}x}, \quad \lambda ^* >0. \end{aligned} \end{aligned}$$
(5.1)

As in Definition 3.2, orthogonal polynomials with respect to this inner product can be constructed using the classical ball polynomials (2.3) and Theorem 2.4 in [11], but in this case a small modification is needed.

Definition 5.1

For \(n\geqslant 0\) and \(0\leqslant j \leqslant \frac{n}{2}\), let \(\{Y_{\nu }^{n-2j}(x):\,1\leqslant \nu \leqslant a_{n-2j}^d\}\) denote an orthonormal basis of \(\mathcal {H}_{n-2j}^d\). We define the polynomials

$$\begin{aligned} \begin{aligned}&R_{0,\nu }^n(x):=Y_{\nu }^n(x), \\ {}&R_{j,\nu }^{n}(x):= (-1)^j\,\Vert x\Vert ^2\,P_{j-1}^{(n-2j+\frac{d-2}{2},2)}(1-2\Vert x\Vert ^2)\,Y_{\nu }^{n-2j}(x), \quad 1\leqslant j \leqslant \frac{n}{2}. \end{aligned} \end{aligned}$$

The following proposition is useful in studying the orthogonality of these polynomials with respect to the inner product (5.1).

Proposition 5.2

The polynomials \(R_{j,\nu }^n\) satisfy

$$\begin{aligned} \Delta \big [(1-\Vert x\Vert ^2)\,R_{j,\nu }^n(x) \big ]\,=\,c_{j,n}\,P_{j,\nu }^{n,0}, \quad n\geqslant 0, \end{aligned}$$
(5.2)

where the constants \(c_{j,n}\) are defined in (3.3).

Proof

For \(j=0\), by (3.2), we have

$$\begin{aligned} \Delta \big [(1-\Vert x\Vert ^2)\,R_{0,\nu }^n(x) \big ]=\Delta \big [(1-\Vert x\Vert ^2)\,Y_{\nu }^n(x) \big ]=-4\,\left( n+\frac{d}{2}\right) \,Y^n_{\nu }(x). \end{aligned}$$

Now, we deal with \(1\leqslant j \leqslant \frac{n}{2}\). The Jacobi polynomials satisfy the following properties ([10, p.71]):

$$\begin{aligned} s\,P_{j-1}^{(\beta ,2)}(1-2s)=\frac{1}{2j+\beta +1}\left[ (j+1) \,P_{j-1}^{(\beta ,1)}(1-2s)+j\,P_j^{(\beta ,1)}(1-2s) \right] , \end{aligned}$$

and ([10, p.59]):

$$\begin{aligned} P_j^{(\beta ,1)}(1-2s)\,=\,(-1)^j\,P_j^{(1,\beta )}(2s-1). \end{aligned}$$

Therefore

$$\begin{aligned} s\,P_{j-1}^{(\beta ,2)}(1-2s)=\frac{(-1)^{j-1}}{2j+\beta +1}\left[ (j+1)\, P_{j-1}^{(1,\beta )}(2s-1)-j\,P_j^{(1,\beta )}(2s-1) \right] . \end{aligned}$$
(5.3)

As in the proof of Proposition 3.3, this implies that

$$\begin{aligned} \mathcal {J_{\beta }}\bigg [s\,P_{j-1}^{(\beta ,2)}(1-2s) \bigg ]=(-1)^j\,j\,(j+1)\,P_j^{(0,\beta )}(2s-1). \end{aligned}$$
(5.4)

Then, by (3.2) and (5.4) with \(\beta =n-2j+\frac{d-2}{2}\)

$$\begin{aligned} \Delta \bigg [(1-\Vert x\Vert ^2)\,R_{j,\nu }^n(x) \bigg ]=4\,j\,(j+1)\,P_{j,\nu }^{n,0}(x). \end{aligned}$$

This proves (5.2). \(\square \)
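The reflection formula \(P_j^{(\beta ,1)}(1-2s)=(-1)^j\,P_j^{(1,\beta )}(2s-1)\) used in the proof is the specialization \(x\mapsto 2s-1\) of the symmetry \(P_n^{(\alpha ,\beta )}(-x)=(-1)^n\,P_n^{(\beta ,\alpha )}(x)\). A quick exact check (ours, not the paper's code) via the standard three-term recurrence for Jacobi polynomials:

```python
from fractions import Fraction
# Sketch (not from the paper): exact check of
#   P_j^{(beta,1)}(1-2s) = (-1)^j P_j^{(1,beta)}(2s-1)
# using the standard Jacobi three-term recurrence with rational arithmetic.

def jacobi(n, a, b, x):
    # P_n^{(a,b)}(x) for integer a, b >= 0 and rational x
    if n == 0:
        return Fraction(1)
    p0 = Fraction(1)
    p1 = Fraction(a + 1) + Fraction(a + b + 2) * (x - 1) / 2
    for k in range(2, n + 1):
        c = 2 * k + a + b
        a1 = 2 * k * (k + a + b) * (c - 2)
        a2 = (c - 1) * (a * a - b * b)
        a3 = (c - 1) * c * (c - 2)
        a4 = 2 * (k + a - 1) * (k + b - 1) * c
        p0, p1 = p1, ((Fraction(a2) + Fraction(a3) * x) * p1
                      - Fraction(a4) * p0) / a1
    return p1

ok = all(
    jacobi(j, beta, 1, 1 - 2 * s) == (-1)**j * jacobi(j, 1, beta, 2 * s - 1)
    for j in range(6)
    for beta in range(4)
    for s in [Fraction(k, 7) for k in range(8)]
)
print(ok)  # True
```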

The following relation follows readily from (2.3), (5.3), and (3.8).

Proposition 5.3

The polynomials \(R_{j,\nu }^n\) satisfy

$$\begin{aligned} \begin{aligned}&R_{0,\nu }^n(x)=P_{0,\nu }^{n,1}(x), \\&R_{j,\nu }^{n}(x)=\frac{1}{n+\frac{d}{2}}\left[ j\,P_{j,\nu }^{n,1} (x)-(j+1)\,P_{j-1,\nu }^{n-2,1}(x) \right] , \quad 1\leqslant j \leqslant \frac{n}{2}. \end{aligned} \end{aligned}$$

In particular, \(R_{0,\nu }^n=Q_{0,\nu }^n\) and \(R_{j,\nu }^{n}=-Q_{j,\nu }^{n}\) for \(1\leqslant j \leqslant \frac{n}{2}\).

Let us denote by \(\mathcal {V}_n^{d,*}(\Delta )\) the space of Sobolev orthogonal polynomials of degree n with respect to (5.1).

Proposition 5.4

For \(n\geqslant 0\), \(\{Q_{j,\nu }^{n}:\ 0 \leqslant j \leqslant \frac{n}{2},\ 1\leqslant \nu \leqslant a_{n-2j}^d\}\) constitutes a mutually orthogonal basis of \(\mathcal {V}_n^{d,*}(\Delta )\). Moreover

$$\begin{aligned} \left\langle Q_{j,\nu }^{n}, Q_{k,\eta }^{m} \right\rangle _{\Delta }^*\,=\, \widetilde{H}_{j,n}^{*}\,\delta _{n,m}\,\delta _{j,k}\,\delta _{\nu ,\eta }, \end{aligned}$$

where

$$\begin{aligned} \widetilde{H}_{j,n}^{*}\,=\,\left\{ \begin{array}{ll} \lambda ^*\,\delta _{n,0}+n+\frac{d}{2},&{}\quad j=0,\\ \dfrac{j^2\,(j+1)^2}{2^{n-2j+\frac{d}{2}}}\,h_{j}^{(0,n-2j+\frac{d-2}{2})},&{}\quad 1\leqslant j \leqslant \frac{n}{2}. \end{array}\right. \end{aligned}$$

Proof

The proof is similar to that of Proposition 3.6, but now the fact that \(Q_{j,\nu }^n(0)=-R_{j,\nu }^n(0)=0\) for \(n\geqslant 1\) must be taken into account. \(\square \)

Following the proof of Proposition 4.1 and using Proposition 5.2, we can obtain the Fourier coefficients of a function relative to (5.1), given by

$$\begin{aligned} \widehat{f}_{j,\nu }^{\;n,*}=\frac{1}{\widetilde{H}_{j,n}^{*}}\left\langle f, R_{j,\nu }^{n} \right\rangle _{\Delta }^*. \end{aligned}$$

Proposition 5.5

For \(n\geqslant 1\)

$$\begin{aligned} \widehat{f}_{j,\nu }^{\;n,*} = \widehat{f}_{j,\nu }^{\;n,\Delta }, \quad 0\leqslant j \leqslant \lfloor {n/2}\rfloor , \end{aligned}$$

and

$$\begin{aligned} \widehat{f}_{0,1}^{\;0,*}=\frac{1}{\sigma _{d-1}\,(\lambda ^*+\frac{d}{2})}\left[ \lambda ^*\,f(0)+\frac{d}{2} \int _{\textbf{S}^{d-1}}f(\xi )\,\textrm{d}\sigma (\xi )\right] . \end{aligned}$$

Proposition 3.6 together with Proposition 5.4 yields the following result.

Proposition 5.6

Define the inner product

$$\begin{aligned} \begin{aligned} \left\langle f,g\right\rangle _{\Delta }^{**}&=\frac{\lambda ^*}{\sigma _{d-1}}f(0)\,g(0)+\frac{\lambda }{\sigma _{d-1}} \int _{\textbf{S}^{d-1}}f(\xi )\,g(\xi )\,\textrm{d}\sigma (\xi )\\&\quad + \frac{1}{4\,\sigma _{d-1}} \int _{\textbf{B}^d}{\Delta [(1-\Vert x\Vert ^2) f(x)] \, \Delta [(1-\Vert x\Vert ^2) g(x)]\,\textrm{d}x}, \quad \lambda ^*,\lambda >0, \end{aligned} \end{aligned}$$
(5.5)

and let \(\mathcal {V}_n^{d,**}(\Delta )\) be the space of Sobolev orthogonal polynomials of degree n with respect to (5.5). For \(n\geqslant 0\), \(\{Q_{j,\nu }^{n}:\ 0 \leqslant j \leqslant \frac{n}{2},\ 1\leqslant \nu \leqslant a_{n-2j}^d\}\) constitutes a mutually orthogonal basis of \(\mathcal {V}_n^{d,**}(\Delta )\). Moreover

$$\begin{aligned} \left\langle Q_{j,\nu }^{n}, Q_{k,\eta }^{m} \right\rangle _{\Delta }^{**}\,=\, \left( \widetilde{H}_{j,n}^{\Delta }+\widetilde{H}_{j,n}^{*}\right) \,\delta _{n,m}\,\delta _{j,k}\,\delta _{\nu ,\eta }. \end{aligned}$$

6 Conclusions

In this work, we analyse the impact of additional terms in the Sobolev inner product introduced by Y. Xu in [11], which was used to find the numerical solution of the nonlinear Poisson equation \(-\Delta u = f(\cdot , u)\) on the unit ball with zero boundary conditions (Atkinson and Hansen [2]). Each of the two Sobolev inner products considered in this paper includes an additional term that takes into account either the values of the functions on the boundary of the unit ball of \(\mathbb {R}^d\) (see (3.1)) or the evaluation of the functions at the origin (see (5.1)). These new Sobolev inner products contemplate the prospect of knowing some additional values of the involved functions and offer the possibility of taking into account more characteristics of the functions to be approximated. The additional terms are multiplied by positive real constants \(\lambda \) and \(\lambda ^*\), generalizing the inner product considered by Y. Xu in [11].

Apart from an exhaustive description of the orthogonal structure in both cases, giving explicit expressions for the bases, we provide the Fourier coefficients of the approximations in both cases and relate them to the Fourier coefficients in the original case. We remark that the spherical term in the first Sobolev inner product influences the angular part of the orthogonal expansion, that is, the terms associated with the index value \(j=0\) (see (3.9)); in the second Sobolev inner product, that is, when the value of the functions at the origin is added, only the first coefficient in the Fourier expansion is affected by the new conditions (Proposition 5.4). Moreover, the error of approximation for the orthogonal expansions with respect to the Sobolev inner products appearing in this work (in particular, the one introduced in [11]) had not been previously studied in the literature.