LUPAŞ-TYPE INEQUALITY AND APPLICATIONS TO MARKOV-TYPE INEQUALITIES IN WEIGHTED SOBOLEV SPACES

Weighted Sobolev spaces play a central role in the study of Sobolev orthogonal polynomials. In particular, the analytic properties of such polynomials have been studied extensively, with a focus on their asymptotic behavior and the location of their zeros. On the other hand, the behavior of the Fourier–Sobolev projector allows one to address very interesting approximation problems. The aim of this paper is twofold. First, we improve a well-known inequality of Lupaş by using connection formulas for Jacobi polynomials with different parameters. Second, we deduce Markov-type inequalities in weighted Sobolev spaces associated with generalized Laguerre and generalized Hermite weights.

2010 AMS Subject Classification: 33C45, 41A17, 26C99.


Introduction
Let P be the linear space of polynomials with real coefficients and P_n its linear subspace of polynomials of degree at most n. The so-called Markov-type inequalities provide estimates of the ratio between the norm of the derivative of a polynomial and the norm of the polynomial itself. They constitute a basic tool in the proof of many inverse theorems in polynomial approximation theory (cf. [24,25,32] and the references therein).
For every polynomial P ∈ P_n, the classical Markov inequality states that ‖P'‖_{L^∞([−1,1])} ≤ n² ‖P‖_{L^∞([−1,1])} holds. Chebyshev polynomials of the first kind are extremal, i.e., the above inequality becomes an equality for such polynomials (see [8]).
In [15] the above inequality has been extended to the L^p norm (p ≥ 1). Indeed, for every polynomial P ∈ P_n one has ‖P'‖_{L^p([−1,1])} ≤ C(n, p) n² ‖P‖_{L^p([−1,1])}.
Therein the value of C(n, p) is given explicitly in terms of p and n. Furthermore, one has the upper bound C(n, p) ≤ 6e^{1+1/e} for n > 0 and p ≥ 1. In [13] admissible values for C(n, p) and some computational results for p = 2 are given. Notice that for any p > 1 and every polynomial P ∈ P_n a sharper inequality of the same type holds, where the constant is given explicitly and is less than C(n, p) (see [15]).
On the other hand, from a matrix analysis approach, in [10] it is proved that the exact value of C(n, 2) is the greatest singular value of the matrix A_n = [a_{j,k}]_{0≤j≤n−1, 0≤k≤n}, where a_{j,k} = ∫_{−1}^{1} p_j(x) p_k'(x) dx and {p_n}_{n=0}^{∞} is the sequence of standard orthonormal Legendre polynomials. A simple proof of this result, with an interpretation of the constant C(n, 2) as the largest positive zero of a polynomial, as well as an explicit expression for the extremal polynomial (the polynomial for which the inequality becomes an equality) in the L²-Markov inequality, appears in [17].
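This characterization lends itself to direct computation. The following sketch (a numerical illustration of our own; the function name and quadrature sizes are not taken from [10] or [17]) assembles A_n for the orthonormal Legendre polynomials p_m(x) = √((2m+1)/2) P_m(x) and returns its greatest singular value, i.e., C(n, 2):

```python
import numpy as np
from numpy.polynomial import legendre as L

def markov_constant(n):
    """C(n,2): greatest singular value of A_n = [a_{j,k}],
    a_{j,k} = int_{-1}^{1} p_j(x) p_k'(x) dx, with p_m the
    orthonormal Legendre polynomials."""
    # Gauss-Legendre quadrature with n+1 nodes is exact up to degree
    # 2n+1, enough for the integrand p_j * p_k' (degree <= 2n-2).
    x, w = L.leggauss(n + 1)
    A = np.zeros((n, n + 1))
    for j in range(n):
        cj = np.zeros(j + 1)
        cj[j] = np.sqrt((2 * j + 1) / 2)      # orthonormalization factor
        pj = L.legval(x, cj)
        for k in range(n + 1):
            ck = np.zeros(k + 1)
            ck[k] = np.sqrt((2 * k + 1) / 2)
            dpk = L.legval(x, L.legder(ck))   # (p_k)'(x)
            A[j, k] = np.sum(w * pj * dpk)
    return np.linalg.svd(A, compute_uv=False)[0]
```

For instance, C(1, 2) = √3 (attained at P(x) = x) and C(2, 2) = √15, both of which can be verified by hand.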
For weighted L²-spaces, the analysis of such Markov-type inequalities becomes more difficult. For instance, let ‖·‖_{L²((a,b),w)} be the weighted L²-norm on P given by ‖P‖_{L²((a,b),w)} = ( ∫_a^b |P(x)|² w(x) dx )^{1/2}, where w is an integrable function on (a, b), −∞ ≤ a < b ≤ ∞, such that w > 0 a.e. on (a, b) and all its moments ∫ x^n w(x) dx, n ≥ 0, are finite. Then there exists a constant γ_n = γ_n(a, b, w) such that (1.1) ‖P'‖_{L²((a,b),w)} ≤ γ_n ‖P‖_{L²((a,b),w)}, for all P ∈ P_n.
An analogous result for the weighted L²-norm associated with the Laguerre weight is proved in [5].
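As a hedged numerical illustration (a sketch of our own, not the argument of [5]): for the Laguerre weight w(x) = e^{−x} on (0, ∞) the classical Laguerre polynomials L_n are already orthonormal, so the constant γ_n in (1.1) is the greatest singular value of the matrix of the differentiation operator in that basis, which can be computed with Gauss–Laguerre quadrature:

```python
import numpy as np
from numpy.polynomial import laguerre as Lg

def laguerre_markov_constant(n):
    """gamma_n for w(x) = e^{-x}: greatest singular value of the matrix
    b_{j,k} = int_0^inf L_j(x) L_k'(x) e^{-x} dx, 0<=j<=n-1, 0<=k<=n."""
    # n+1 Gauss-Laguerre nodes: exact up to degree 2n+1 against e^{-x}.
    x, w = Lg.laggauss(n + 1)
    B = np.zeros((n, n + 1))
    for j in range(n):
        cj = np.zeros(j + 1)
        cj[j] = 1.0                      # L_j is already orthonormal
        pj = Lg.lagval(x, cj)
        for k in range(n + 1):
            ck = np.zeros(k + 1)
            ck[k] = 1.0
            dpk = Lg.lagval(x, Lg.lagder(ck))
            B[j, k] = np.sum(w * pj * dpk)
    return np.linalg.svd(B, compute_uv=False)[0]
```

For n = 1 and n = 2 this yields 1 and (1+√5)/2, respectively, in agreement with the classical closed form [2 sin(π/(4n+2))]^{−1} due to Turán.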
On the other hand, in [27] and [28] lower and upper bounds for the sharp constant in the above inequality are obtained by analytic tools; these bounds have since been improved with the assistance of computer algebra in [30].
There are many results on Markov-type inequalities (see, e.g., [11,12,24] and the references therein). In connection with research on weighted approximation by polynomials, Markov-type inequalities have been studied for different norms and for different sets over which the norm is taken (see, e.g., [23] and the references therein). More recently, the asymptotic behavior of the sharp constant involved in some of these inequalities has been studied in [5] for Hermite, Laguerre and Gegenbauer weights, and in [6] for Jacobi weights with parameters satisfying some constraints.
Notice that, from matrix analysis considerations, the sharp constant γ_n in (1.1) is the greatest singular value of the matrix B_n = [b_{j,k}]_{0≤j≤n−1, 0≤k≤n}, where b_{j,k} = ∫ p_j(x) p_k'(x) w(x) dx and {p_n}_{n=0}^{∞} is the orthonormal polynomial sequence with respect to the positive measure w(x)dx. Thus, from a computational point of view, one first needs the connection coefficients expressing the derivatives p_k' in terms of the polynomials p_j in order to compute the entries of the matrix; in a second step, one must compute the greatest singular value of B_n. Notice that for classical weights (Jacobi, Laguerre and Hermite) such connection coefficients can be found in a simple way (see [1] and [31]).
In [26] it is proved that the best constant γ*_n := sup_{P∈P_n} ‖P'‖_{L²((a,b),w)} / ‖P‖_{L²((a,b),w)} admits the explicit upper bound (1.3). The main interest of this result is, however, qualitative, since the bound given by (1.3) can be very crude. In fact, for the Hermite weight w(x) = e^{−x²} the contrast between this estimate and the classic result of Schmidt [38], which establishes γ*_n = √(2n), is evident. We must point out that the nature of the extremal problems associated with inequalities (1.1) and (1.2) is different: in the first case the constant on the right-hand side of (1.1) depends on n, while in the second one the multiplicative constant C_α on the right-hand side of (1.2) is independent of n.
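Schmidt's value can also be read off from the matrix point of view: for the orthonormal Hermite polynomials one has p_k' = √(2k) p_{k−1}, so the differentiation matrix has a single nonzero entry per column and its greatest singular value is √(2n). A short sketch verifying this numerically (the function name is our own; physicists' Hermite polynomials, weight e^{−x²}):

```python
import numpy as np
from numpy.polynomial import hermite as H
from math import factorial, pi, sqrt

def hermite_markov_constant(n):
    """Greatest singular value of the differentiation matrix in the
    orthonormal Hermite basis p_k(x) = H_k(x)/sqrt(2^k k! sqrt(pi));
    by Schmidt's theorem this equals sqrt(2n)."""
    x, w = H.hermgauss(n + 1)        # exact up to degree 2n+1 vs e^{-x^2}
    A = np.zeros((n, n + 1))
    for j in range(n):
        cj = np.zeros(j + 1)
        cj[j] = 1.0 / sqrt(2.0**j * factorial(j) * sqrt(pi))
        pj = H.hermval(x, cj)
        for k in range(n + 1):
            ck = np.zeros(k + 1)
            ck[k] = 1.0 / sqrt(2.0**k * factorial(k) * sqrt(pi))
            dpk = H.hermval(x, H.hermder(ck))
            A[j, k] = np.sum(w * pj * dpk)
    return np.linalg.svd(A, compute_uv=False)[0]
```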
On the other hand, for a classical weight w, i.e., one satisfying a Pearson equation (A(x)w(x))' = B(x)w(x), where A and B are polynomials of degree at most 2 and 1, respectively, a similar problem connected with the Markov–Bernstein inequality has been analyzed in [14] and [16], namely, the determination of the sharp constant C(n, m; w) such that ‖P^{(m)}‖_{L²(w)} ≤ C(n, m; w) ‖P‖_{L²(w)} for all P ∈ P_n. Notice that in [16] sharp constants are also studied for semiclassical weights satisfying a Pearson equation (A(x)w(x))' = B(x)w(x), where A and B are polynomials with the constraint deg(B) ≥ 1, and some boundary conditions on the support of the weight are fulfilled.
An analogue of the Markov–Bernstein inequality for linear operators T from P_n into P has been studied in [19] in terms of singular values of matrices. Some illustrative examples are shown for the case when T is the derivative operator (with the classical Laguerre and Gegenbauer weights) or the difference operator (with the Charlier and Meixner weights). In particular, difference inequalities for discrete iterated classical weights have been studied in [36]. Another recent application of Markov–Bernstein-type inequalities can be found in [7].
In this contribution, we first focus our attention on a Markov-type inequality involving the L²-spaces associated with the Lebesgue measure and the beta probability measure supported on [−1, 1], whose corresponding sequences of orthogonal polynomials are the Legendre and Jacobi polynomials, respectively.
By using Lupaş' inequality and the asymptotic behavior of the Gamma function, the authors showed in [21] that there exists a constant c_1(α, β), depending only on α and β, such that inequality (1.5) holds for every n ∈ N and P ∈ P_n, when max{α, β} ≥ −1/2. Inequality (1.5) is interesting in itself. It has been applied (see [21]) to obtain Markov-type inequalities in weighted Sobolev spaces associated with vectors of measures closely related to the classical weights (normal, gamma and beta distributions). The study of the properties of such function spaces has been carried out in classical monographs such as [2] and [22], while [18] is a basic reference on weighted Sobolev spaces. In this sense, one of our aims is to study bounds for the sharp constants of Markov inequalities in the framework of such Sobolev spaces. Notice that a Muckenhoupt inequality for three measures, and its connection with orthogonal polynomials associated with Sobolev inner products, has been given in [9]. Surveys on orthogonal polynomials in weighted Sobolev spaces are presented in [33], [34], [35].
Our main result establishes an inequality of this type, valid for every n ∈ N and P ∈ P_n, with a constant C_{α,β} depending only on α and β.

Some technical lemmas
In order to make the proof of Theorem 1.1 more readable, we collect in this section some technical lemmas that will be needed there.
Proof. The hypothesis yields the first displayed estimate, and thus the first claim. The hypothesis also yields the second estimate, and hence the second claim. We also need the following direct result.
By C we denote a constant independent of n, k, j and ℓ, which may depend only on α and β, and which can change its value from line to line, and even within the same line. The expression A ≍ B means, as usual, that there exists a constant C > 0 such that C^{−1} B ≤ A ≤ C B, where the bounds on the quotient depend only on β and f.
Proof. Note that it suffices to consider the case ℓ < k.
Since α > β, we can apply a connection formula for Jacobi polynomials with different parameters. If we denote by J^{α,β}_k the orthonormal Jacobi polynomial of degree k, i.e., J^{α,β}_k = h_k^{−1/2} P^{α,β}_k, then the previous formula can be rewritten in terms of orthonormal polynomials, and hence the stated expression for the coefficients c_{k,ℓ} follows.
Let P¹_n (respectively, P²_n) be the Hilbert space P_n endowed with the inner product associated with the weight w_{α,β} (respectively, w_{0,0}) and orthonormal basis {J^{α,β}_k}_{k=0}^{n} (respectively, {J^{0,0}_k}_{k=0}^{n}), and let I be the identity map I : P¹_n → P²_n. The matrix representation of I in the orthonormal bases {J^{α,β}_k}_{k=0}^{n} and {J^{0,0}_k}_{k=0}^{n} is I_n = (c_{k,ℓ}). If ‖I_n‖₂ denotes the induced 2-norm of I_n, then ‖P‖²_{L²(w_{0,0})} ≤ ‖I_n‖₂² ‖P‖²_{L²(w_{α,β})} for every P ∈ P_n, and ‖I_n‖₂² is the best possible constant. Since the 2-norm is at most the Frobenius norm, we can bound ‖I_n‖₂² by the sum of the squares of the entries c_{k,ℓ}, and so, for each α > β > 0, there exists a constant C(α, β) such that the stated estimate holds for every P ∈ P_n. This gives the result in this case, with G_{α,β}(n) = C(α, β) U_α(n).
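The Frobenius-norm bound in this argument can be explored numerically for concrete parameters. The sketch below (our own construction; it relies on scipy's eval_jacobi and the standard formula for the squared norm h_k of P_k^{α,β}) builds the matrix of the identity map, with entries c_{ℓ,k} = ∫_{−1}^{1} J^{α,β}_k(x) J^{0,0}_ℓ(x) dx computed by Gauss–Legendre quadrature, and compares its spectral and Frobenius norms:

```python
import numpy as np
from math import gamma, sqrt
from scipy.special import eval_jacobi

def jacobi_h(k, a, b):
    """Squared L^2(w_{a,b}) norm of the classical Jacobi polynomial P_k^{(a,b)}."""
    return (2.0**(a + b + 1) / (2 * k + a + b + 1)
            * gamma(k + a + 1) * gamma(k + b + 1)
            / (gamma(k + a + b + 1) * gamma(k + 1)))

def identity_map_norms(n, a, b):
    """Spectral and Frobenius norms of I_n = (c_{l,k}),
    c_{l,k} = int_{-1}^{1} J_k^{a,b}(x) J_l^{0,0}(x) dx."""
    x, w = np.polynomial.legendre.leggauss(n + 1)   # exact to degree 2n+1
    C = np.zeros((n + 1, n + 1))
    for l in range(n + 1):
        Jl = eval_jacobi(l, 0.0, 0.0, x) / sqrt(jacobi_h(l, 0.0, 0.0))
        for k in range(n + 1):
            Jk = eval_jacobi(k, a, b, x) / sqrt(jacobi_h(k, a, b))
            C[l, k] = np.sum(w * Jl * Jk)
    spec = np.linalg.svd(C, compute_uv=False)[0]    # induced 2-norm
    frob = np.linalg.norm(C, 'fro')                 # Frobenius norm
    return spec, frob
```

For α = β = 0 the matrix is the identity, so the spectral norm is 1 while the Frobenius norm is √(n+1); for α > β > 0 one can observe directly that the spectral norm never exceeds the Frobenius norm.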
If α > β = 0, then the same argument, with a_{k,j} instead of c_{k,ℓ} (since b_{k,j} = δ_{k,j}, i.e., the matrix (b_{k,j}) is the identity in this case), gives the same result with simpler computations.
If α = β > 0, then the same argument, with b_{k,j} instead of c_{k,ℓ} (since the matrix (a_{k,j}) is the identity in this case), gives the same result. The case α = β = 0 is trivial.

Applications to Markov-type inequalities in weighted Sobolev spaces
In [21, Theorem 2.1] the authors extend the Markov-type inequalities to the framework of weighted Sobolev spaces in the following way.
Theorem 4.1. The following inequalities hold.
In each case the multiplicative constants depend just on the specified parameters and they do not depend on n or λ 1 , . . . , λ k .
Proof. The argument in the proof of Theorem 4.1 gives the result, by using inequality (1.6) instead of (1.5).