Dimension-free Lp-estimates for vectors of Riesz transforms in the rational Dunkl setting

In this article, we prove a dimension-free upper bound for the L^p-norms of the vector of Riesz transforms in the rational Dunkl setting. Our main technique is the Bellman function method adapted to the Dunkl setting.


Introduction
In the seminal article [22], Charles F. Dunkl defined new commuting differential-difference operators associated with a finite reflection group G related to a root system R on the Euclidean space R^N:

T_ξ f(x) = ∂_ξ f(x) + Σ_{α ∈ R} (k(α)/2) ⟨α, ξ⟩ (f(x) − f(σ_α(x)))/⟨α, x⟩.

Here ξ ∈ R^N, σ_α denotes the reflection with respect to the hyperplane orthogonal to the root α ∈ R, and k : R → C is a G-invariant function (see Section 2 for details). The Dunkl operators generalize the directional derivatives (in fact, they are the ordinary partial derivatives for k ≡ 0); however, in general, they are non-local operators. They turned out to be a key tool in the study of special functions with reflection symmetries, and they allowed the framework for the theory of special functions and integral transforms in several variables related to reflection groups to be built up in [21]-[23]. Afterwards, the theory was studied and developed by many mathematicians from many different points of view. Besides special functions and mathematical analysis, the Dunkl theory has deep connections with other branches of mathematics, for instance probability theory, mathematical physics, and algebra. The aim of this article is to study the Riesz transforms in the rational Dunkl setting, defined as follows.
Definition 1.1. Let f ∈ S(R^N) and j ∈ {1, . . ., N}. The Riesz transforms R_j in the Dunkl setting are defined by

(1.1) R_j f = F^{-1}( i (ξ_j/‖ξ‖) F f ),

where F is the Dunkl transform (see (2.9)). The vector of the Riesz transforms in the Dunkl setting is defined by Rf = (R_1 f, R_2 f, . . ., R_N f). Here and subsequently, S(R^N) denotes the Schwartz class of functions.
Theorem 1.2 ([1, Theorem 3.3]). Let 1 < p < ∞. The Riesz transforms, defined initially on S(R^N), extend to bounded operators L^p(dw) → L^p(dw), where dw is the measure associated with the root system R and the multiplicity function k (see (2.2) and Section 2 for details).
Moreover, it can be checked by using the Dunkl transform (see Lemma 2.5) that R_j f = T_{e_j} (−∆_k)^{−1/2} f for f ∈ L^2(dw), where ∆_k = Σ_{j=1}^N T_{e_j}^2 is the Dunkl Laplacian. Here and subsequently, {e_j}_{1≤j≤N} denotes the canonical orthonormal basis in R^N.
A well-known result concerning the classical Riesz transforms, proved by E. M. Stein in [51], states that in the case k ≡ 0 there are upper bounds for the L^p-norm of the vector of the Riesz transforms which are independent of the dimension N. It was later proved that, in fact, the L^p-norm of the vector of the Riesz transforms is controlled by C max(p, p/(p−1)), where C > 0 is independent of p and of the dimension N; see [4, 24]. At this point, it is worth mentioning that even in the case k ≡ 0 the exact norms of the vector of the Riesz transforms are still not known (see [5, 18, 33] for some results concerning the subject).
The aim of the current paper is to prove bounds in that spirit for the L^p(dw)-norms of the vector of Riesz transforms in the rational Dunkl setting, i.e., for k ≢ 0 and for an arbitrary root system R.
The main goal of this paper is to prove the following theorem. Recall that a measurable function f is G-invariant if f(σ(x)) = f(x) for almost all x ∈ R^N and all σ ∈ G (see Section 2 for the definition of the Weyl group G).
Moreover, for all G-invariant f ∈ L^p(dw) we have (1.5). Our second main goal in this paper is to prove a different version of Theorem 1.3 in the one-dimensional case. If N = 1, then there is just one Riesz transform (the Dunkl Hilbert transform), which will be denoted by H, i.e., (1.6).

Theorem 1.4. Assume that N = 1. Let p, q > 1 be such that 1/p + 1/q = 1. Set p* = max(p, q). Then for all f ∈ L^p(dw) we have

(1.7) ‖Hf‖_{L^p(dw)} ≤ 1440 (p* − 1) ‖f‖_{L^p(dw)}.
Let us discuss some difficulties in Dunkl analysis which distinguish it from the classical setting k ≡ 0. As pointed out in [54], one of the most serious problems in Dunkl analysis lies in the lack of knowledge about the generalized translations τ_x, x ∈ R^N, which generalize the ordinary translation f → f(· − x). It was proved that for some root systems R the operators τ_x do not preserve positive functions, and the boundedness of τ_x on the L^p(dw)-spaces (p ≠ 2) remains an open problem in Dunkl analysis. In the context of this paper, we overcome this difficulty using recently proved upper bounds for the Dunkl Poisson kernel (see [2]).
From the point of view of the current paper, let us discuss another difficulty regarding the Dunkl operators. The Dunkl operators T_ξ do not satisfy the Leibniz rule in the usual sense: the formula T_ξ(fg) = f T_ξ g + g T_ξ f holds only in specific cases, e.g., if f or g is radial. In the general case, the formula for T_ξ(fg) contains summands of both local and non-local character. The analysis turns out to be even more complicated when we compose two or more Dunkl operators, which is the case when we try to adapt the Bellman function method.
At this point, it is also worth mentioning that in the Dunkl setting the explicit formulas for ∆_k u^p, p ∈ [1, ∞) and u ∈ S(R^N), seem to be of a quite different nature than in the case k ≡ 0. In order to elaborate on the case p = 2, let us consider the Dunkl version of the carré du champ operator

Γ_k(f, g) = (1/2) ( ∆_k(fg) − f ∆_k g − g ∆_k f ).

As noticed in [56], we have ∫ Γ_k(f, g) dw = ∫ Σ_{j=1}^N T_j f T_j g dw, but the identity Σ_{j=1}^N T_j f T_j g ≡ Γ_k(f, g) is not true if k ≢ 0, which can be checked by explicit calculation (see also [27] for a more general calculation). In the current paper, following the approach presented in [20], we obtain an explicit formula for ∆_k applied to the Bellman function, which turns out to be closely related to the known formulas for ∆_k u^p. Therefore, the Bellman approach has to be adapted to this specific setting.
Acknowledgment. The author would like to thank Błażej Wróbel and Jacek Dziubański for their helpful comments and suggestions, and Charles Dunkl for pointing out some references.

Basic definitions of the Dunkl theory
In this section, for the convenience of the reader, we present basic facts concerning the theory of the Dunkl operators. For details we refer the reader to [22], [48], and [49]. The reader who is familiar with the Dunkl theory may omit this section and proceed to Subsection 3.2.
We consider the Euclidean space R^N with the scalar product ⟨x, y⟩ = Σ_{j=1}^N x_j y_j, where x = (x_1, ..., x_N), y = (y_1, ..., y_N), and the norm ‖x‖² = ⟨x, x⟩. The number N will be fixed throughout this paper. For a nonzero vector α ∈ R^N, the reflection σ_α with respect to the hyperplane α⊥ orthogonal to α is given by

(2.1) σ_α(x) = x − 2 (⟨x, α⟩/‖α‖²) α.

In this paper we fix a normalized root system in R^N, that is, a finite set R of nonzero vectors such that σ_α(R) = R and ‖α‖² = 2 for all α ∈ R. The finite group G generated by the reflections σ_α, α ∈ R, is called the Weyl group (reflection group) of the root system. A multiplicity function is a G-invariant function k : R → C, which will be assumed nonnegative throughout this paper. Let

(2.2) dw(x) = Π_{α ∈ R} |⟨x, α⟩|^{k(α)} dx

be the associated measure in R^N, where, here and subsequently, dx stands for the Lebesgue measure in R^N. For a Lebesgue measurable set A we denote w(A) = ∫_A dw(x).
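To fix conventions, the reflection (2.1) can be checked numerically. The following snippet is an illustration only (not part of the paper's argument); it verifies that σ_α is an involutive isometry sending α to −α:

```python
import numpy as np

def sigma(alpha, x):
    """Reflection in the hyperplane orthogonal to alpha:
    sigma_alpha(x) = x - 2 <x, alpha> / ||alpha||^2 * alpha  (formula (2.1))."""
    alpha = np.asarray(alpha, dtype=float)
    x = np.asarray(x, dtype=float)
    return x - 2.0 * np.dot(x, alpha) / np.dot(alpha, alpha) * alpha

alpha = np.array([1.0, -1.0, 0.0])
x = np.array([3.0, 1.0, 2.0])
assert np.allclose(sigma(alpha, sigma(alpha, x)), x)                    # involution
assert np.allclose(sigma(alpha, alpha), -alpha)                         # sigma_alpha(alpha) = -alpha
assert np.allclose(np.linalg.norm(sigma(alpha, x)), np.linalg.norm(x))  # isometry
```
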
There is a constant C > 0 such that

(2.3) w(B(x, 2r)) ≤ C w(B(x, r)) for all x ∈ R^N and r > 0.

Moreover, since the function w is G-invariant, for all σ ∈ G we have

(2.5) w(σ(A)) = w(A).

For ξ ∈ R^N, the Dunkl operators T_ξ are the following k-deformations of the directional derivatives ∂_ξ by a difference operator:

T_ξ f(x) = ∂_ξ f(x) + Σ_{α ∈ R} (k(α)/2) ⟨α, ξ⟩ (f(x) − f(σ_α(x)))/⟨α, x⟩.

The Dunkl operators T_ξ, which were introduced in [22], commute and are skew-symmetric with respect to the G-invariant measure dw. Let {e_j}_{1≤j≤N} denote the canonical orthonormal basis in R^N and let T_j = T_{e_j}. As usual, for every multi-index α = (α_1, . . ., α_N) ∈ N_0^N we set T^α = T_1^{α_1} ∘ · · · ∘ T_N^{α_N} and ∂^α = ∂_1^{α_1} · · · ∂_N^{α_N}, where {e_1, e_2, . . ., e_N} is the canonical basis of R^N. The additional subscript x in ∂_x^α means that the partial derivative ∂^α is taken with respect to the variable x ∈ R^N. By ∇_x f we denote the gradient of the function f with respect to the variable x.
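In rank one the difference part of T_ξ is easy to see explicitly. The following sketch (an illustration only; N = 1, a single root, multiplicity k) applies T f = f' + k (f(x) − f(−x))/x to monomials, for which T x^n = (n + 2k) x^{n−1} when n is odd and T x^n = n x^{n−1} when n is even:

```python
def dunkl_1d(f, x, k, h=1e-6):
    """Rank-one Dunkl operator T f(x) = f'(x) + k*(f(x) - f(-x))/x,
    with f' approximated by a central difference (illustration only)."""
    deriv = (f(x + h) - f(x - h)) / (2.0 * h)
    return deriv + k * (f(x) - f(-x)) / x

k, x = 0.7, 1.3
# Odd monomial: the difference term contributes an extra 2k x^{n-1}.
assert abs(dunkl_1d(lambda t: t**3, x, k) - (3 + 2 * k) * x**2) < 1e-4
# Even monomial: the difference term vanishes, and T reduces to d/dx.
assert abs(dunkl_1d(lambda t: t**2, x, k) - 2 * x) < 1e-4
```

In particular, for k = 0 the operator reduces to the ordinary derivative, in line with the general statement above.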
The following fundamental theorem was proved by C. F. Dunkl.

Theorem 2.1 ([23]). The Dunkl operators are skew-symmetric with respect to the measure dw. More precisely, for any ξ ∈ R^N, f ∈ S(R^N), and g ∈ C_b^1(R^N) (here and subsequently, C_b^1(R^N) denotes the set of bounded functions with bounded and continuous partial derivatives), we have the following integration by parts formula:

(2.6) ∫_{R^N} T_ξ f(x) g(x) dw(x) = − ∫_{R^N} f(x) T_ξ g(x) dw(x).

It follows from (2.6) that T_ξ f ≡ 0 and f ≡ 0 on R^N \ A. Hence, by (2.7) we have

We will also need the following technical lemma, which is well known. We provide a sketch of its proof for the sake of completeness.
Proof. By the definition of T_j and by the fundamental theorem of calculus, for all f ∈ C^1(R^N) we have (see [49, page 9]). Consequently, for any β ∈ N_0^N there is a constant C > 0 such that for all f ∈ C^{|β|+1}(R^N) and j ∈ {1, . . ., N} we have the estimate (2.8). The claim follows from (2.8) by induction on |β|.
For fixed y ∈ R^N, the Dunkl kernel E(x, y) is the unique analytic solution to the system

T_ξ E(·, y) = ⟨ξ, y⟩ E(·, y),  E(0, y) = 1.

The function E(x, y), which generalizes the exponential function e^{⟨x,y⟩}, extends uniquely to a holomorphic function on C^N × C^N. The Dunkl transform is defined by

(2.9) F f(ξ) = c_k^{-1} ∫_{R^N} f(x) E(x, −iξ) dw(x)

for f ∈ L^1(dw), where c_k = ∫_{R^N} e^{−‖x‖²/2} dw(x). It was introduced in [23] for k ≥ 0 and further studied in [16] in a more general context. It was proved in [23, Corollary 2.7] (see also [16, Theorem 4.26]) that F is an isometry on L^2(dw), i.e., (2.10). We also have the following inversion theorem.
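In rank one the defining system T E(·, y) = y E(·, y), E(0, y) = 1 determines E as a power series Σ_n b_n (xy)^n, with the coefficients obtained by dividing by the eigenvalues of T on monomials. The following sketch (an illustration, not the paper's construction) builds the truncated series and checks that it degenerates to the exponential e^{xy} when k = 0:

```python
from math import exp, isclose

def dunkl_kernel_1d(x, y, k, nmax=60):
    """Truncated rank-one Dunkl kernel E(x, y) = sum_n b_n (x y)^n, where
    b_0 = 1 and b_n = b_{n-1}/gamma_n with gamma_n = n + 2k for odd n and
    gamma_n = n for even n, as forced by T E(., y) = y E(., y)."""
    b, total, power = 1.0, 1.0, 1.0
    for n in range(1, nmax + 1):
        gamma = n + 2.0 * k if n % 2 == 1 else float(n)
        b /= gamma
        power *= x * y
        total += b * power
    return total

# For k = 0 every gamma_n = n, so b_n = 1/n! and E(x, y) = e^{xy}.
assert isclose(dunkl_kernel_1d(0.8, 1.1, k=0.0), exp(0.8 * 1.1), rel_tol=1e-9)
# Normalization E(0, y) = 1.
assert dunkl_kernel_1d(0.0, 5.0, k=1.2) == 1.0
```
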
Theorem 2.4 (Inversion theorem, see [16, Theorem 4.20]). For all f ∈ L^1(dw) such that F f ∈ L^1(dw) we have f = F^{-1} F f. The inverse F^{-1} of F has the form

(2.12) F^{-1} g(x) = F g(−x).

Below we list some properties of F.
Definition 2.6. The Dunkl Laplacian associated with G and k is the differential-difference operator ∆_k = Σ_{j=1}^N T_j². It was introduced in [22], where it was also proved that ∆_k acts on C²(R^N) functions by

∆_k f(x) = ∆f(x) + Σ_{α ∈ R} k(α) ( ⟨∇f(x), α⟩/⟨α, x⟩ − (f(x) − f(σ_α(x)))/⟨α, x⟩² ).

Here and subsequently, ∆ = Σ_{j=1}^N ∂_j². We have the following theorem, which allows us to define √(−∆_k) by the spectral theorem.
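The consistency of the two descriptions of ∆_k (as Σ_j T_j² and as ∆ plus singular difference terms) can be verified on polynomials in rank one. This is an illustrative sketch for N = 1 only; the coefficient bookkeeping follows T x^n = (n + 2k) x^{n−1} for odd n and T x^n = n x^{n−1} for even n:

```python
def dunkl_T(p, k):
    """T on a polynomial p(x) = sum_i p[i] x^i (rank one):
    T x^n = (n + 2k) x^{n-1} for odd n, n x^{n-1} for even n."""
    return [(n + (2.0 * k if n % 2 == 1 else 0.0)) * p[n] for n in range(1, len(p))]

def dunkl_laplacian(p, k):
    """Delta_k p = p'' + 2k p'/x - k (p(x) - p(-x))/x^2 on polynomials; the two
    singular terms combine to 2k(n-1) x^{n-2} (odd n) and 2k n x^{n-2} (even n)."""
    out = [0.0] * max(len(p) - 2, 0)
    for n in range(2, len(p)):
        out[n - 2] += n * (n - 1) * p[n]
        out[n - 2] += (2.0 * k * (n - 1) if n % 2 == 1 else 2.0 * k * n) * p[n]
    return out

k = 0.6
p = [0.0, 1.0, 1.0, 1.0]          # p(x) = x + x^2 + x^3
lhs = dunkl_laplacian(p, k)
rhs = dunkl_T(dunkl_T(p, k), k)   # Delta_k = T^2 in rank one
assert len(lhs) == len(rhs) and all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```
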
Theorem 2.7 ([47, Theorem 4.8]). The operator (−∆_k, S(R^N)) in L^2(dw) is densely defined and closable. Its closure, which will be denoted by the same symbol −∆_k, is self-adjoint, and its domain is

Dom(−∆_k) = { f ∈ L^2(dw) : ‖ξ‖² F f(ξ) ∈ L^2(dw) }.

It is the unique positive self-adjoint extension of (−∆_k, S(R^N)).
Note that, thanks to Lemma 2.5 (C), for all ξ ∈ R^N and f ∈ S(R^N) we have F(T_ξ f)(x) = i⟨ξ, x⟩ F f(x).

Definition 3.1. Let x, y ∈ R^N and t > 0. We define the k-Cauchy kernel p_t(x, y) to be the integral kernel of the operator P_t = e^{−t√(−∆_k)}, that is,

P_t f(x) = ∫_{R^N} p_t(x, y) f(y) dw(y).

The kernel p_t(x, y) was introduced and studied in [49].
Theorem 3.2 ([49, Theorem 5.6]). Let f be a bounded continuous function on R^N. Then the function given by v(x, t) = P_t f(x) is continuous and bounded. Moreover, it solves the Cauchy problem

(∂_t² + ∆_{k,x}) v(x, t) = 0,  v(x, 0) = f(x).

The k-Cauchy kernel is also called the generalized Poisson kernel (or Dunkl Poisson kernel), by analogy with the classical Poisson semigroup. We have the following lemma.
Lemma 3.3. Let x, y ∈ R^N and t > 0. The generalized Poisson kernel p_t(x, y) has the following properties: It follows from Theorem 3.2, (2.16), and the inversion theorem for the Dunkl transform (see Theorem 2.4) that for all f ∈ S(R^N), x ∈ R^N, and t > 0 we have (3.1). We also have the following upper and lower bounds for the generalized Poisson kernel.
Proposition 3.4 ([2, Proposition 5.1]). For x, y ∈ R^N and t, r > 0 we denote

(3.2) d(x, y) = min_{σ ∈ G} ‖x − σ(y)‖.

(a) Upper and lower bounds: there is a constant C ≥ 1 such that the two-sided estimate holds for all t > 0 and all x, y ∈ R^N.
(b) Dunkl gradient: for every ξ ∈ R^N there is a constant C > 0 such that the estimate holds for all t > 0 and all x, y ∈ R^N.
(c) Mixed derivatives: for any nonnegative integer m and any multi-index β ∈ N_0^N there is a constant C ≥ 0 such that the estimate holds for all t > 0 and all x, y ∈ R^N. Moreover, for any nonnegative integer m and any multi-indices β, β′ ∈ N_0^N there is a constant C ≥ 0 such that the estimate holds for all t > 0 and all x, y ∈ R^N.

Note that the estimates in Proposition 3.4 are given in the spirit of spaces of homogeneous type, except that the metric ‖x − y‖ is replaced by the distance of the orbits d(x, y) (see (3.2)). One of the reasons why the estimates of Proposition 3.4 are suitable in many contexts is explained in the next lemma. We omit its standard proof.
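The orbit distance d(x, y) = min_{σ∈G} ‖x − σ(y)‖ from (3.2) is easy to compute for small reflection groups. The following sketch (an illustration only) generates the group by closing the set of root reflections under composition, here for the rank-two group of coordinate sign changes:

```python
import numpy as np

def reflection_matrix(alpha):
    alpha = np.asarray(alpha, dtype=float)
    return np.eye(len(alpha)) - 2.0 * np.outer(alpha, alpha) / np.dot(alpha, alpha)

def generate_group(roots):
    """Close the root reflections under composition (BFS); terminates for
    the finite reflection groups considered here."""
    n = len(roots[0])
    gens = [reflection_matrix(a) for a in roots]
    group = {np.round(np.eye(n), 8).tobytes(): np.eye(n)}
    frontier = [np.eye(n)]
    while frontier:
        new = []
        for g in frontier:
            for s in gens:
                h = s @ g
                key = np.round(h, 8).tobytes()
                if key not in group:
                    group[key] = h
                    new.append(h)
        frontier = new
    return list(group.values())

def orbit_distance(x, y, group):
    """d(x, y) = min over sigma in G of ||x - sigma(y)||  (cf. (3.2))."""
    return min(np.linalg.norm(np.asarray(x, float) - g @ np.asarray(y, float)) for g in group)

G = generate_group([[1.0, 0.0], [0.0, 1.0]])   # sign change of each coordinate
assert len(G) == 4
assert orbit_distance([1.0, 2.0], [-1.0, 2.0], G) < 1e-9   # same G-orbit, d = 0
```
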
Moreover, for any m ∈ N_0 there is a constant C = C_{p,m} > 0 such that for all t > 0 and f ∈ L^p(dw) we have

The next proposition is well known (see [51], [18, Lemma 2.1]). We provide its version in the Dunkl setting for the sake of completeness.

Proposition 3.6. For all j ∈ {1, . . ., N} and f, g ∈ S(R^N) we have

Proof. For 1 ≤ j ≤ N, x ∈ R^N, and t > 0 we define ϕ(x, t) := P_t R_j f(x) P_t g(x).
It follows from Proposition 3.4 that, for fixed x ∈ R^N, there is a constant C > 0 independent of x such that for all y ∈ R^N and t > 0 we have
Hence, for all F ∈ S(R^N) we have

Moreover, by (2.3), for all x ∈ R^N we have

Consequently, by (3.6) we get that, for fixed x ∈ R^N, ϕ(x, t) → 0 as t → ∞. Therefore, by the fundamental theorem of calculus and Theorem 3.2, for all x ∈ R^N we have

Since, by the definition of {P_t}_{t≥0}, ∂_t P_t = −√(−∆_k) P_t, and the operator √(−∆_k) is self-adjoint on L^2(dw), by (3.9) we have (3.10). Finally, note that by the definition of the Riesz transform (see (1.1)), (2.16), (3.1), and Lemma 2.5 (C), for all 1 ≤ j ≤ N we have

so the claim follows from (3.10).
As a consequence of Proposition 3.6, we obtain the following corollary.
Corollary 3.7. Let p, q > 1 be such that 1/p + 1/q = 1. Then for all f ∈ S(R^N) we have (3.11). Here and subsequently, for g_j ∈ S(R^N), 1 ≤ j ≤ N, and x ∈ R^N we denote

Bellman function
In this section, we introduce the Bellman function, which will be the main ingredient of the proof of Theorem 1.3.

Definition 4.1. Let p ≥ 2 and let q be such that 1/p + 1/q = 1. The number γ will be fixed throughout the paper. Next, we define the Nazarov-Treil Bellman function B : R^{N_1} × R^{N_2} → R. The function B(η, ζ) was introduced by Nazarov and Treil in [41], then used and simplified in [10, 11, 18, 19, 20].
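For orientation, we sketch a commonly used variant of the Nazarov-Treil function (cf. [41] and [20]). The exact normalization and the value of γ used in this paper may differ, so this snippet is an assumption-laden illustration rather than the paper's Definition 4.1:

```python
import numpy as np

def bellman_nt(zeta, eta, p, gamma):
    """One standard Nazarov-Treil-type Bellman function on R^{N1} x R^{N2}
    (cf. [41], [20]); p >= 2 and q is the conjugate exponent. The paper's
    normalization of gamma may differ -- this is an illustration only."""
    q = p / (p - 1.0)
    Z, H = np.linalg.norm(zeta), np.linalg.norm(eta)
    if Z ** p <= H ** q:
        extra = Z ** 2 * H ** (2.0 - q)
    else:
        extra = (2.0 / p) * Z ** p + (2.0 / q - 1.0) * H ** q
    return Z ** p + H ** q + gamma * extra

# The two branches agree on the switching surface |zeta|^p = |eta|^q
# (take H = Z^{p-1}, so that H^q = Z^p); across this surface the function
# is differentiable but not smooth.
p, Z = 3.0, 1.7
q, H = p / (p - 1.0), 1.7 ** (3.0 - 1.0)
branch1 = Z ** 2 * H ** (2.0 - q)
branch2 = (2.0 / p) * Z ** p + (2.0 / q - 1.0) * H ** q
assert abs(branch1 - branch2) < 1e-9
```
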
Note that the function B is differentiable but not smooth. We will need a smooth version of B.
For κ > 0 and (η, ζ) ∈ R^{N_1} × R^{N_2}, the regularization B_κ is defined by the convolution in (4.4).

Definition 4.2. Let p ≥ 2 and let q be such that 1/p + 1/q = 1.

Remark 4.3. In order to avoid misunderstanding, we would like to emphasise that the convolution "⋆" in (4.4) is the ordinary one (not the generalized Dunkl convolution). Let us also point out that in the proof of Theorem 1.3 we will set N_1 = 1 and N_2 = N.
The following properties of the functions β_κ and B_κ were proved in [20, Theorems 3 and 4] and [37].

Proposition 4.4. Let p ≥ 2 and let q be such that 1/p + 1/q = 1. There is a constant C_p > 0 such that for all κ ∈ (0, 1] and s, t > 0 we have

Theorem 4.5. Let p ≥ 2 and let q be such that 1/p + 1/q = 1. It follows from the proof of [20, Theorem 3] that one can take

Remark 4.6. In our further considerations, we will need the explicit form of τ (see (4.7)). This form of τ follows directly from the proofs presented in [20, Theorem 3] and [37, Proposition 6.3], although it is not given explicitly there. Therefore, for the convenience of the reader, we repeat the proof from [37] in Appendix A with τ given by (4.7).
In our further considerations, we will need the following elementary lemma concerning the properties of τ in (4.7).
Lemma 4.7. Let 1 < q ≤ 2 and N_3 ∈ N. Then for all a, b ∈ R^{N_3} we have (4.8) and (4.9).

Proof. The proof is standard, but we provide it for the sake of completeness. We will prove (4.8) first. Let us consider two cases.
Case 1. ‖a‖ ≥ ‖b‖. Then we have
Case 2. ‖b‖ > ‖a‖. By a change of variables we are reduced to Case 1. In order to prove (4.9), we write
Lemma 4.9. Assume that f, g_j ∈ S(R^N), 1 ≤ j ≤ N, and κ ∈ (0, 1]. Then: (B) there is a constant C_{f,g} > 0, which depends on f and g and is independent of κ, such that for all x ∈ R^N and t > 0 we have

Proof. By Lemma 3.5, for f, g_j ∈ S(R^N), 1 ≤ j ≤ N, the functions P_t f and P_t g_j belong to C^∞(R^N × (0, ∞)). Therefore, by Theorem 4.5 and (4.12), b_κ is a composition of smooth functions, so (A) follows. In order to prove (B), note that by the chain rule we have

Consequently, by Proposition 4.4 and the Cauchy-Schwarz inequality, we get that there is a constant C_p > 0, depending only on p, such that

Note that by Lemma 3.5 and the fact that f, g_j ∈ S(R^N), there is a constant C = C_{f,g} > 0 such that for all x ∈ R^N, t > 0, and 1 ≤ j ≤ N we have

so, by (4.14), the proof of (B) is finished.
In the next proposition we obtain an explicit formula for ∆_k b_κ (cf. [27, Section 4]).
Proposition 4.10. Assume that f, g_j ∈ S(R^N), 1 ≤ j ≤ N, and κ ∈ (0, 1]. Let u, u, and b_κ be as in Definition 4.8. Then for all x ∈ R^N and t > 0 we have

⟨Hess(B_κ)(u(x, t)) ∂_{j,x} u(x, t), ∂_{j,x} u(x, t)⟩

where

Proof. It follows by the chain rule (see e.g. [18, Lemma 1.4]) that

and

⟨Hess(B_κ)(u(x, t)) ∂_{j,x} u(x, t), ∂_{j,x} u(x, t)⟩

Moreover, for α ∈ R we have

Finally, note that by Taylor's expansion of the function b_κ(x, t), for all α ∈ R we have

Corollary 4.11. Assume that f, g_j ∈ S(R^N), 1 ≤ j ≤ N, and κ ∈ (0, 1]. Then for all x ∈ R^N and t > 0 we have

Proof. Since f, g_j ∈ S(R^N), 1 ≤ j ≤ N, by Lemma 3.5 there is a constant C > 0, which depends on f and g_j, such that for all x ∈ R^N and t > 0 we have

Consequently, by the fact that ∇B_κ and Hess(B_κ) are smooth and by (4.12), we obtain that there is a constant C′ = C′_{f,g} > 0 such that

where ‖ · ‖_{HS} is the Hilbert-Schmidt norm. Moreover, by Lemma 3.5, there is a constant C″ = C″_{f,g} > 0 such that for all x ∈ R^N and t > 0 we have

Recall that for all x ∈ R^N and α ∈ R we have √2 |⟨x, α⟩| = ‖x − σ_α(x)‖ (see (2.1)). Hence, by (3.6) and the mean value theorem, there is a constant C‴ = C‴_{f,g} > 0 such that for all x ∈ R^N and t > 0 we have

Finally, the claim is a consequence of (4.15), (4.17), and the Cauchy-Schwarz inequality.

Proof of Theorem 1.3
In this section we prove Theorem 1.3. We closely follow the reasoning from [10] and [20].

Upper estimate of I(n, ε, κ).
Lemma 5.3. Let p ≥ 2 and q > 1 be such that 1/p + 1/q = 1. Assume that f, g_j ∈ S(R^N), 1 ≤ j ≤ N, and ε > 0. For n ∈ N we set

Then we have

Proof. Recall that supp Φ ⊆ B(0, 2). Therefore, by Lemma 4.9 (A), Corollary 4.11, and the fact that for fixed ε > 0 we have ∫_0^∞ t^{−2} ν_ε(t) dt < ∞, we can change the order of integration in (5.11). Note that by the fact that

Integrating by parts (see Theorem 2.1 and Remark 2.2), for any t > 0 we get

Recall that supp Φ(·/n) ⊆ B(0, 2n). It then follows from Lemma 2.3 that there is a constant C > 0, independent of Φ and n, such that for all x ∈ R^N and n ∈ N we have

Moreover, by (2.14) and the fact that Φ(x/n) = 1 for all x ∈ B(0, n), we have

Consequently, by (4.5) there is a constant C_p > 0, depending only on p, such that for all n ∈ N we have (5.13). Since f, g_j ∈ S(R^N), by Lemma 3.5 we get P_t f ∈ L^p(dw) and P_t g_j ∈ L^q(dw) for all t > 0 and 1 ≤ j ≤ N. Hence, lim_{n→∞} ∫_{B(0,2n)\B(0,n)}

Moreover, by the choice of κ(n) (see (5.10)) we get

Lemma 5.4. Assume that f, g_j ∈ S(R^N), 1 ≤ j ≤ N, κ ∈ (0, 1], and ε > 0. Then for all x ∈ R^N we have (5.14) and (5.15). Recall that ν_ε is defined in (5.1).
Proof. Note that (5.14) is a consequence of Lemma 4.9 (B) and the fact that for fixed ε > 0 we have

The proof of (5.15) is similar. Indeed, since f, g_j ∈ S(R^N), by (3.6) there is a constant C = C_{f,g} > 0 such that for all x ∈ R^N and t > 0 we have

Consequently, by (4.5), there is a constant C′ > 0, which depends on f and g_j and is independent of κ ∈ (0, 1], such that for all x ∈ R^N and t > 0 we have

so the claim is a consequence of the elementary fact that for fixed ε > 0 we have

Lemma 5.5. Recall that ν_ε is defined in (5.1). We have

Proof. This follows from an elementary calculation (see e.g. [20, (3.30)]).

Proof of Theorem 1.3.
Proof of Theorem 1.3. We will prove (1.4) first. Assume that p ≥ 2 and take f ∈ L^p(dw). Thanks to Theorem 1.2 and the fact that S(R^N) is dense in L^p(dw), without loss of generality we may assume f ∈ S(R^N). Let κ : N → (0, 1] be defined by (5.10). By Corollary 3.7 we get

‖Rf‖_{L^p(dw)} = 4 sup ∫_{R^N} ∫_0^∞ t ∂_t P_t g_j(x) T_j P_t f(x) dt dw(x).
(5.23) Next, by Lemma 5.2 and Corollary 5.7,

4 sup

Finally, we use a polarization argument. Let s > 0. We replace f(·) by s f(·) and g(·) by s^{−1} g(·) in (5.24). Then the left-hand side of (5.24) is unchanged, and minimizing the right-hand side over s > 0 we obtain

It was shown in [58, proof of the main theorem] that

which ends the proof for p ≥ 2. The proof in the case 1 < p < 2 is analogous: we switch P_t f and P_t g in the definition of b_κ. The proof of (1.5) is similar (we use (5.4) instead of (5.3) in (5.24)).
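The minimization over s > 0 is the standard step turning a sum bound into a product bound. Assuming, schematically, that (5.24) has the form X ≤ C (s^p A + s^{−q} B) with A = ‖f‖_{L^p(dw)}^p and B = ‖g‖_{L^q(dw)}^q (a sketch, not the paper's exact display), the computation runs as follows:

```latex
% Minimize F(s) = A s^p + B s^{-q} over s > 0, where 1/p + 1/q = 1, so p + q = pq.
% Critical point: F'(s_0) = pA s_0^{p-1} - qB s_0^{-q-1} = 0, i.e. s_0^{pq} = qB/(pA).
% Substituting back, and using A^{1/p} = \|f\|_{L^p(dw)}, B^{1/q} = \|g\|_{L^q(dw)}:
\min_{s>0}\bigl(A s^{p} + B s^{-q}\bigr)
  = \Bigl[(q/p)^{1/q} + (p/q)^{1/p}\Bigr] A^{1/p} B^{1/q}
  \le 2\, \|f\|_{L^p(dw)}\, \|g\|_{L^q(dw)}.
```

The bracket equals 2 exactly when p = q = 2; for all other conjugate pairs it is strictly smaller, so the duality step only costs a factor of 2 in the constant.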

One-dimensional case
This section is devoted to the proof of Theorem 1.4. We will work in the one-dimensional setting, i.e., we assume N = 1. We would like to emphasize that in this case we have R = {−√2, √2}, where σ_{±√2}(x) = −x for all x ∈ R. Consequently, the multiplicity function k takes just one value which, for simplicity of notation, will be denoted by k. In this case, the associated measure dw has the form

(6.1) dw(x) = 2 |x|^{2k} dx.
The Dunkl operator in the one-dimensional case is

(6.2) T f(x) = f′(x) + k (f(x) − f(−x))/x.

In this section, we will use the same notation as in the previous sections unless specified otherwise. We will also assume k > 1 (otherwise, the claim follows from Theorem 1.3). We will slightly modify the proof of Theorem 1.3 to obtain Theorem 1.4. The main point is to prove a modified version of Lemma 5.2. We will also need the following version of Proposition 3.6, which we state and prove for the convenience of the reader.

Proposition 6.1. For all f, g ∈ S(R) we have

t ∂_t P_t f(x) T P_t g(x) dw(x) dt.
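The identity behind Proposition 6.1 ultimately rests on the skew-symmetry of T with respect to dw (Theorem 2.1). In the one-dimensional setting this can be checked numerically; the following quadrature sketch (an illustration only, with dw as in (6.1)) verifies ∫ (Tf) g dw = −∫ f (Tg) dw for two Schwartz-type functions:

```python
import numpy as np

k = 0.8
h = 1e-3
x = (np.arange(-8000, 8000) + 0.5) * h   # symmetric grid on [-8, 8] avoiding x = 0

def T(vals):
    """Grid version of T f = f' + k (f(x) - f(-x))/x; the grid is symmetric
    about 0, so vals[::-1] samples f(-x)."""
    return np.gradient(vals, x) + k * (vals - vals[::-1]) / x

def integrate_dw(vals):
    """Midpoint rule for the integral against dw(x) = 2 |x|^{2k} dx."""
    return float((vals * 2.0 * np.abs(x) ** (2.0 * k)).sum() * h)

f = np.exp(-x ** 2)          # even Schwartz-type function
g = x * np.exp(-x ** 2)      # odd Schwartz-type function

lhs = integrate_dw(T(f) * g)
rhs = -integrate_dw(f * T(g))
assert abs(lhs - rhs) < 1e-4   # skew-symmetry of T with respect to dw
```

For odd g the difference quotient equals 2g(x)/x, which is exactly the form of T P_t g used in (6.12) below.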
Proof. By Plancherel's identity (see (2.10)) and the definition of the Dunkl Hilbert transform (see (1.6)) we have

Consequently, the rest of the proof is the same as in the proof of Proposition 3.6 (with g in place of f and f in place of g).
Proof of Lemma 6.2. Fix κ : N → (0, 1]. By the monotone convergence theorem we have

Recall that Φ and ν_ε are defined in Definition 5.1. Fix n ∈ N and ε > 0. It follows from the definition of the Poisson semigroup (see Definition 3.1) and Lemma 3.3 (D) that if g is odd, then P_t g is also odd for all t > 0. Consequently, by (6.2),

T P_t g(x) = ∂_x P_t g(x) + 2k P_t g(x)/x.
(6.12) Note that, by the fact that q ∈ (1, 2] and the triangle inequality, for all (y_1, y_2) ∈ B(0, κ(n)) we have

Recall that P_t g is odd for all t > 0, so

Therefore, by (6.13) and (4.9) with a = P_t g(x) − y_2 and b = −P_t g(x) − y_2 = P_t g(−x) − y_2, we get

Finally, by (6.9), (6.10), (6.11), and (6.14),

+ e_2(n, ε, κ). (6.15)

Now we are ready to apply the same argument as in the proof of Lemma 5.2. Indeed, by (4.6) with

Then, by the fact that |x|^{−1} ≤ n^{−1} for |x| ≥ n and by (6.21),

Finally, the claim is a consequence of the fact that for fixed ε > 0 we have

As a direct consequence of Lemmas 6.2, 6.6, and 6.7, we obtain the following corollary.

Corollary 6.8. Assume that p ≥ 2, f, g ∈ S(R), and g is odd. Let κ : N → (0, 1] be defined in (5.10).

Proof of Theorem 1.4. Assume first that p ≥ 2 and take f ∈ L^p(dw). Thanks to Theorem 1.2 and the fact that S(R) is dense in L^p(dw), without loss of generality we may assume f ∈ S(R). Let κ : N → (0, 1] be defined by (5.10) and let q be such that 1/p + 1/q = 1. By Proposition 6.1 we have

‖Hf‖_{L^p(dw)} = sup_{g ∈ S(R), ‖g‖_{L^q(dw)} = 1} ∫_R Hf(x) g(x) dw(x) = 4 sup_{g ∈ S(R), ‖g‖_{L^q(dw)} = 1} ∫_R ∫_0^∞ t ∂_t P_t f(x) T P_t g(x) dt dw(x).
Proof of (4.6). We repeat the argument from [20, Theorem 4] and [37, Proposition 6.3]. It follows from the formulas for the second derivatives of B given above that they are C² on R^{N_1} × R^{N_2} \ Υ and that they are locally integrable. Moreover, B is