$L^p$ improving properties and maximal estimates for certain multilinear averaging operators

In this article we focus on $L^{p}$ estimates for two types of multilinear lacunary maximal averages over hypersurfaces satisfying curvature conditions. Moreover, we give a different proof of the known bounds for the bilinear lacunary spherical maximal function. To obtain our results, we make use of $L^1$-improving estimates for multilinear averaging operators. We also obtain $L^p$-improving estimates for certain multilinear averages by means of the nonlinear Brascamp-Lieb inequality.


INTRODUCTION
Let $S$ be a compact smooth hypersurface contained in the unit ball $B^d(0,1)$ with $\kappa$ non-vanishing principal curvatures, and let $\Theta_j$ be rotation matrices in $M_{d,d}(\mathbb{R})$ for $j = 1, 2, \dots, m$. We assume that $\{\Theta_j\}_{j=1}^m$ is mutually linearly independent. Then for $f_1, f_2, \dots, f_m \in \mathcal{S}(\mathbb{R}^d)$, we define
$$A^\Theta_S(\mathbf{F})(x) := \int_S \prod_{j=1}^m f_j(x + \Theta_j y) \, d\sigma_S(y), \qquad (1.1)$$
where $\mathbf{F} = (f_1, f_2, \dots, f_m)$ and $d\sigma_S$ is the normalized surface measure on $S$. We also consider another $m$-linear averaging operator defined by
$$A_\Sigma(\mathbf{F})(x) := \int_\Sigma \prod_{j=1}^m f_j(x + y_j) \, d\sigma_\Sigma(y), \qquad (y_1, \dots, y_m) = y \in \mathbb{R}^{md}, \qquad (1.2)$$
where $\Sigma$ is a compact $(md-1)$-dimensional smooth hypersurface contained in the unit ball $B^{md}(0,1)$ with $\kappa$ non-vanishing principal curvatures. Note that the $\kappa$ arising in (1.1) satisfies $1 \le \kappa \le d-1$, while the $\kappa$ in (1.2) satisfies $1 \le \kappa \le md-1$. Moreover, we are interested in the following lacunary maximal operators associated with (1.1) and (1.2):
$$M^\Theta_S(\mathbf{F})(x) := \sup_{\ell \in \mathbb{Z}} \Big| \int_S \prod_{j=1}^m f_j(x + 2^\ell \Theta_j y) \, d\sigma_S(y) \Big|, \qquad (1.3)$$
$$M_\Sigma(\mathbf{F})(x) := \sup_{\ell \in \mathbb{Z}} \Big| \int_\Sigma \prod_{j=1}^m f_j(x + 2^\ell y_j) \, d\sigma_\Sigma(y) \Big|. \qquad (1.4)$$

The purpose of this article is to prove $L^p$-improving estimates for the multilinear averaging operators defined by (1.1) and (1.2). Further, using these $L^p$-improving estimates, we show $L^{p_1} \times L^{p_2} \times \cdots \times L^{p_m} \to L^p$ boundedness, for $1/p = \sum_{j=1}^m 1/p_j$, of the multi-(sub)linear lacunary maximal functions $M^\Theta_S$ and $M_\Sigma$.

The averaging operators given in (1.1) and (1.2) and the related maximal operators arise in many studies in multilinear harmonic analysis. Since Coifman and Meyer [13] opened the path of multilinear harmonic analysis in 1975, there have been significant developments in the area over the last few decades. Among those achievements, we single out the works of Lacey and Thiele [25,26], in which they proved $L^p$-boundedness of the bilinear Hilbert transform
$$H_\alpha(f,g)(x) := \mathrm{p.v.} \int_{\mathbb{R}} f(x - t)\, g(x - \alpha t) \, \frac{dt}{t}, \qquad \alpha \neq 0, 1.$$
Their seminal work settled the long-standing conjecture of Calderón. Later, Lacey [24] studied $L^p$-boundedness of the bilinear maximal operator
$$M_\alpha(f,g)(x) := \sup_{t > 0} \frac{1}{2t} \int_{-t}^{t} |f(x - y)\, g(x - \alpha y)| \, dy,$$
which is related to the bilinear Hilbert transform. One may regard the averaging operators $A^\Theta_S$ as a generalization of $M_\alpha$ without the supremum, because the condition $\alpha \neq 0, 1$ corresponds to the linear independence condition on $\{\Theta_j\}$.
On the other hand, $A_\Sigma$ given in (1.2) is a direct analogue of the spherical average $A^t_{S^{d-1}} f(x)$ for $t = 1$, which is defined by
$$A^t_{S^{d-1}} f(x) := \int_{S^{d-1}} f(x - ty) \, d\sigma(y).$$
Thus we write $A_{S^{md-1}}(\mathbf{F})(x) = A^1_{S^{md-1}}(f_1 \otimes \cdots \otimes f_m)(x, \dots, x)$. For studies on $A_{S^{md-1}}(\mathbf{F})$, we recommend [32,1,36,14] and references therein. In the literature, $A^t_{S^{d-1}}$ has been extensively studied in terms of maximal operators. For the (sub)linear spherical maximal operator
$$M^*_{S^{d-1}} f(x) := \sup_{t > 0} |A^t_{S^{d-1}} f(x)|,$$
where $d\sigma$ is the normalized surface measure on the sphere $S^{d-1}$, Stein [37] proved in 1976 that for $d \ge 3$ the spherical maximal operator $M^*_{S^{d-1}}$ is bounded on $L^p$ if and only if $p > \frac{d}{d-1}$. Later, Bourgain [10] obtained $L^p$ boundedness of $M^*_{S^1}$ for $p > 2$. This restricted range of boundedness of $M^*_{S^{d-1}}$ can be improved if one considers the lacunary spherical maximal operator, which is given by $M_{S^{d-1}} f(x) := \sup_{j \in \mathbb{Z}} |A^{2^j}_{S^{d-1}} f(x)|$. Calderón [11] proved $L^p$ estimates for the operator $M_{S^{d-1}}$ for $1 < p \le \infty$ and $d \ge 2$. Later, Seeger and Wright [35] showed $L^p$ estimates for general lacunary maximal operators $M_S$ for $1 < p \le \infty$, when the Fourier transform of the surface measure $\sigma$ of $S$ satisfies $|\widehat{\sigma}(\xi)| \lesssim |\xi|^{-\epsilon}$ for some $\epsilon > 0$.
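For orientation we record the standard stationary-phase fact behind the Seeger-Wright hypothesis; this block is our addition, not part of the original text:

```latex
% Fourier decay of the spherical measure (classical stationary-phase estimate):
\widehat{d\sigma}(\xi) \;=\; \int_{S^{d-1}} e^{-2\pi i x \cdot \xi}\, d\sigma(x),
\qquad
|\widehat{d\sigma}(\xi)| \;\lesssim\; (1+|\xi|)^{-\frac{d-1}{2}}.
% More generally, a compact hypersurface with \kappa non-vanishing principal
% curvatures gives decay of order (1+|\xi|)^{-\kappa/2}, so it satisfies the
% hypothesis |\widehat{\sigma}(\xi)| \lesssim |\xi|^{-\epsilon} with \epsilon = \kappa/2.
```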
There are also $L^p$-$L^q$ estimates for $p \le q$ (we call these $L^p$-improving estimates) for the spherical average $A^1_{S^{d-1}}$ [30,38]. Lacey [23] used the $L^p$-improving estimates of spherical averages to prove sparse domination of the corresponding lacunary and full spherical maximal functions. It is well known that sparse domination of an operator implies vector-valued boundedness and weighted boundedness of that operator with respect to Muckenhoupt $A_p$ weights [31,28]. This idea has been used extensively to obtain sparse domination of several linear and sub-linear operators in harmonic analysis (see [3]). The idea of Lacey [23], together with $L^p$-improving estimates of certain bilinear averaging operators, can be used to study sparse domination of maximal operators associated with bilinear operators. We recommend [8,33,34] and references therein, which contain results on the bilinear spherical maximal operator, bilinear maximal triangle averaging operators, and bilinear product-type spherical maximal operators, respectively.
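For the reader's convenience, we recall the classical $L^p$-improving range for the spherical average referenced above; the sketch below is our addition, stated for $d \ge 2$:

```latex
% Sharp L^p-improving range for the spherical average A^1_{S^{d-1}} (Littman-type
% result; see [30,38]): A^1_{S^{d-1}} : L^p(\mathbb{R}^d) \to L^q(\mathbb{R}^d)
% is bounded precisely when (1/p, 1/q) lies in the closed triangle with vertices
(0,0), \qquad (1,1), \qquad \Big( \tfrac{d}{d+1},\, \tfrac{1}{d+1} \Big),
% the last vertex corresponding to the bound
A^1_{S^{d-1}} : L^{\frac{d+1}{d}}(\mathbb{R}^d) \to L^{d+1}(\mathbb{R}^d).
```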
Recently, Christ and Zhou [12] studied L p 1 × L p 2 → L p with 1/p 1 + 1/p 2 = 1/p boundedness of bi-(sub)linear lacunary maximal functions defined on a class of singular curves, which might be understood in the sense of both (1.3) and (1.4).
The model operator for $d = 1$ is
$$M_{S^1}(f_1, f_2)(x) := \sup_{j \in \mathbb{Z}} \Big| \int_{S^1} f_1(x - 2^j y_1)\, f_2(x - 2^j y_2) \, d\sigma(y) \Big|,$$
where $d\sigma(y)$ is the normalized surface measure on the circle $S^1$. For $d \ge 2$, the complete $(L^{p_1} \times L^{p_2} \to L^p)$-estimate for the operator $M_{S^{2d-1}}$ was not known. However, there are some partial results on the operator $M_{S^{2d-1}}$ [33,8], and very recently Borges and Foster [9] have obtained almost sharp results including some endpoint estimates. In this paper, we give a different proof of the same result. There is another important bi-(sub)linear maximal function, known as the bilinear spherical maximal function; the study of this operator originated in [2]. Later, in [22], Jeong and Lee proved almost complete $L^{p_1} \times L^{p_2} \to L^p$ bounds for it. This result was extended to $d = 1$ by Christ and Zhou [12]. It would also be interesting to study the corresponding full maximal operator associated with $\Sigma$, where $\Sigma$ is a compact smooth hypersurface with $\kappa$ non-vanishing principal curvatures ($\kappa \le 2d-1$). For some specific hypersurfaces, the optimal (except for a few borderline cases) $L^{p_1} \times L^{p_2} \to L^p$ boundedness is known [27].
For a general hypersurface with non-vanishing Gaussian curvature, only partial results are available [16]. It would be interesting to study $L^{p_1} \times L^{p_2} \to L^p$ estimates of such full maximal averages for $p \le 1$ in all dimensions, as well as their multilinear analogues. However, multilinear estimates for $m$-linear full maximal operators with $m \ge 3$ have not been pursued, while bounds for lacunary maximal operators were studied by Grafakos, He, Honzík, and Park [18]. In this paper, we focus on $L^{p_1} \times \cdots \times L^{p_m} \to L^p$ bounds for the lacunary maximal functions for $1/p = 1/p_1 + \cdots + 1/p_m$ and $p < 2/m$. Studying $m$-linear estimates for the full maximal functions with $m \ge 3$ is a goal for future work.
We first state $L^1$-improving and quasi-Banach estimates for the $m$-linear averaging operators $A^\Theta_S$ and $A_\Sigma$. Note that the following two propositions are derived by simple Fourier analysis and multilinear interpolation; we give a proof of the propositions to keep the paper self-contained.
Then for $1 \le p_j \le 2$, $j = 1, 2, \dots, m$, and $\frac{m+1}{2} \le \sum_{j=1}^m \frac{1}{p_j} < \frac{2d+\kappa}{2d}$, the following $L^1$-improving estimates hold. When $p > 1$, one can obtain different $L^p$-improving estimates for $A^\Theta_S$ under specific choices of $\{\Theta_j\}$ and $S$. In this case, we do not need any curvature condition on $S$; only the dimension of the surface matters. Let $S^k$ be a $k$-dimensional $C^2$ surface in $\mathbb{R}^d$. We choose mutually linearly independent $\{\Theta_j\}$ and, moreover, assume condition (1.9) for any choice of indices. The assumption (1.9) yields that the dimension of the intersection associated with any subset $\{\Theta_{j_i}\}_{i=1}^{k+1}$ of $\{\Theta_j\}_{j=1}^m$ equals zero. The following theorem, stated under the assumptions (1.8) and (1.9), is one of our main results. In our proof of Theorem 1.3, we mainly use the nonlinear Brascamp-Lieb inequality proved in [5]. We give details on the inequality and the proof of Theorem 1.3 in Section 3.
By making use of the quasi-Banach space estimates of Propositions 1.1 and 1.2 together with Sobolev regularity estimates, we obtain multilinear estimates for the lacunary maximal operators $M^\Theta_S$ and $M_\Sigma$.
Here $\theta$ denotes a counter-clockwise rotation. Therefore, Theorem 1.5 (when $m = 2$) yields boundedness of the lacunary maximal function corresponding to the averaging operator $B^\theta$ under the assumption of the Sobolev regularity estimate (1.13). Thus one only needs to show (1.13), but this is not accomplished in this paper.
On the other hand, one can actually obtain Sobolev regularity estimates for $A_\Sigma$, namely (1.7) of Proposition 1.2. Thus, another main result of this paper is the following lacunary maximal estimate for $A_\Sigma$.

Remark 1.7. What we prove in Sections 4 and 5 is that multilinear estimates for lacunary maximal operators can be derived from $L^1$-improving estimates and Sobolev regularity estimates of the corresponding averaging operators. Precisely, one first obtains estimates of the lacunary maximal operators for $\sum_{j=1}^m 1/p^\bullet_j = 1/p^\bullet$ together with a certain polynomial growth, which is Lemma 4.2. The polynomial growth of Lemma 4.2 is then handled by interpolation with the exponential decay estimates of Lemma 4.3, which originate from the Sobolev regularity estimates of the averaging operators. As a result, we obtain the desired bounds.

As a simple application of Remark 1.7, we obtain the following result.

Remark 1.8. Theorem 1.6 also yields boundedness of the bilinear lacunary spherical maximal function. Note that we make use of the estimates given in [21] and the machinery of Section 4 to obtain (1.14) for $p > 1/2$. This estimate is already given in [9], and we give a different proof at the end of this paper.

Remark 1.9. It is known from [18] that such estimates hold for certain $\kappa$. One can check that even for the worst indices, our Theorem 1.6 is better than the corresponding result of [18].

NOTATIONS AND DEFINITIONS
• For a cube $Q$ or a ball $B$ in $\mathbb{R}^d$, we define $CQ$ and $CB$ as the cube and ball with the same centers whose side length and radius are $C$ times those of $Q$ and $B$, respectively. For a measurable set $E$, we write $\mathrm{meas}(E)$ for the measure of $E$.
2. PROOFS OF PROPOSITIONS 1.1 AND 1.2

2.1. Proof of Proposition 1.1. The proof of Proposition 1.1 follows from the following lemma and a standard technique from [19,21].
Let $p = \frac{k+2}{2(k+1)}$ and $(\frac{1}{p_1}, \dots, \frac{1}{p_m}) \in \mathrm{conv}(V_\kappa)$. We begin with the left-hand side of the desired estimate; together with the compactness of $S$, we obtain (2.2). Now we apply Hölder's inequality. Since $x \in Q_n$ and $y \in \mathrm{supp}(\sigma_S)$, we have the equality (2.3), where $\widetilde{Q}$ denotes a cube whose side length is 3 times that of $Q$ with the same center. With the help of (2.3) and Lemma 2.1, we obtain (2.4); combining (2.2) and (2.4), we arrive at (2.5). We then apply Hölder's inequality to (2.5) to obtain the desired bound, where $\frac{1}{p} = \sum_{j=1}^m \frac{1}{p_j}$. Thus, by proving Lemma 2.1, we complete the proof of Proposition 1.1.

2.2.2. Quasi-Banach space estimates (1.6). Since we have obtained $L^1$-improving estimates for $A_\Sigma$, one can apply the argument of Subsection 2.1 to show that $A_\Sigma$ satisfies Hölder-type multilinear estimates on $L^p(\mathbb{R}^d)$ for $\frac{1}{p} = \sum_{j=1}^m \frac{1}{p_j}$ with $p_j$ as in (2.12). That is, we have the stated bounds for $\frac{m+1}{2} \le \frac{1}{p} < \frac{2d+\kappa}{2d}$ and $1 \le p_j \le 2$. This proves the quasi-Banach space estimates.

3. A NONLINEAR BRASCAMP-LIEB INEQUALITY APPROACH TO $L^p$-IMPROVING ESTIMATES FOR $A^\Theta_S$

3.1. Nonlinear Brascamp-Lieb inequality. Let $f_j$ be nonnegative integrable functions, $L_j : \mathbb{R}^d \to \mathbb{R}^{d_j}$ be linear surjections, and $c_j \in [0,1]$ for $j = 1, \dots, m$. We also identify a finite-dimensional Hilbert space $H$ with a Euclidean space $\mathbb{R}^n$; for instance, we let $H = \mathbb{R}^d$ and $H_j = \mathbb{R}^{d_j}$. Then we can consider the linear Brascamp-Lieb inequality
$$\int_H \prod_{j=1}^m f_j(L_j x)^{c_j} \, dx \le \mathrm{BL}(\mathbf{L}, \mathbf{c}) \prod_{j=1}^m \Big( \int_{H_j} f_j \Big)^{c_j}, \qquad (3.1)$$
where $\mathbf{L} = (H, \{H_j\}_{1 \le j \le m}, \{L_j\}_{1 \le j \le m})$, $\mathbf{c} = (c_1, \dots, c_m)$, and $\mathrm{BL}(\mathbf{L}, \mathbf{c})$ is the smallest such constant. Here, we call $(\mathbf{L}, \mathbf{c})$ a Brascamp-Lieb datum and $\mathrm{BL}(\mathbf{L}, \mathbf{c})$ a Brascamp-Lieb constant. There have been studies on nonlinear generalizations of the Brascamp-Lieb inequality. Bennett, Carbery, and Wright [7] showed that (3.1) holds for $d_j = d-1$ and $c_j = \frac{1}{m-1}$ when the $L_j$ are smooth submersions supported in a sufficiently small neighborhood. They also proved that the $L_j$ may be taken to be $C^3$ mappings under certain transversality conditions on the submersions. Later, Bennett and Bez [4] extended the results of [7] to general $d_j$ and $C^{1,\beta}$ mappings. Recently, Bennett, Bez, Buschenhenke, Cowling, and Flock [5] proved the following nonlinear Brascamp-Lieb inequality.
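Before turning to the nonlinear statement, a classical instance of the linear inequality (3.1) may help orient the reader; this example is our addition:

```latex
% Loomis-Whitney in R^3 as a Brascamp-Lieb datum:
% H = R^3, H_j = R^2, L_j = projection deleting the j-th coordinate, c_j = 1/2:
\int_{\mathbb{R}^3} f_1(x_2,x_3)^{1/2} f_2(x_1,x_3)^{1/2} f_3(x_1,x_2)^{1/2}\, dx
  \;\le\; \prod_{j=1}^{3} \Big( \int_{\mathbb{R}^2} f_j \Big)^{1/2}.
% Here BL(L,c) = 1, and the scaling condition reads
% dim(H) = 3 = \sum_j c_j \dim(H_j) = 3 \cdot \tfrac12 \cdot 2.
```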
Although Theorem 3.1 is stated for $C^2$ submersions, the proof in [5] guarantees that the theorem still holds if one takes $C^{1+\theta}$ submersions for any $\theta > 0$. It is known [6] that $\mathrm{BL}(\mathbf{L}, \mathbf{c})$ is finite if and only if the following conditions hold.
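We record the standard formulation of these finiteness conditions from the linear theory; the labels and normalization below are our transcription, included for convenience:

```latex
% Bennett-Carbery-Christ-Tao finiteness characterization for BL(L,c):
\dim(H) \;=\; \sum_{j=1}^{m} c_j \dim(H_j),
% (scaling condition) and
\dim(V) \;\le\; \sum_{j=1}^{m} c_j \dim(L_j V)
\quad \text{for every subspace } V \subseteq H.
% (transversality condition)
```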
These conditions are called the transversality condition and the scaling condition, respectively. We also present necessary conditions for the finiteness of $\mathrm{BL}(\mathbf{L}, \mathbf{c})$. However, it is not simple to check (3.3) for a given Brascamp-Lieb datum; the following lemma may be useful in such verification. First, we say a proper subspace $V_c$ is critical if it satisfies the defining equality below. For a given critical subspace $V_c$, we split the Brascamp-Lieb datum into two parts, $(\mathbf{L}_{V_c}, \mathbf{c})$ and $(\mathbf{L}_{V_c^\perp}, \mathbf{c})$, as follows. In this paper, we make a specific choice of $V_c$.

Lemma 3.2 ([6, Lemma 4.6]). Let $V_c$ be a critical subspace. Then $\mathrm{BL}(\mathbf{L}, \mathbf{c})$ is finite if and only if $(\mathbf{L}_{V_c}, \mathbf{c})$ and $(\mathbf{L}_{V_c^\perp}, \mathbf{c})$ satisfy (3.3) and (3.4) for any subspace $V$ of $V_c$ and $V_c^\perp$, respectively.

Now we will prove Theorem 1.3. We first decompose $S^k$ into a finite cover $\{S^k_\tau\}$ for which $A_{S^k}(\mathbf{F})(x)$ can be written as a finite sum of the following operators, where the underlying map is a submersion and $\chi_\tau$ is a smooth cut-off function.
To simplify our proof, we consider a more general $m$-linear operator $T^B_K$. Suppose that $B_j : \mathbb{R}^d \times \mathbb{R}^k \to \mathbb{R}^d$ are $C^2$ submersions and let $L_j = \mathrm{d}B_j(0,0)$ for $j = 1, \dots, m$. Then $T^B_K(\mathbf{F})$ is given by the corresponding average, where $K$ is a nonnegative bounded function supported in a ball. Moreover, we take $c_j = 1/p_j$ for $j = 1, \dots, m$ and $c_{m+1} = 1/p'$, where $1/p = 1/p_1 + \cdots + 1/p_m - k/d$. Then we prove the following proposition.
Then we have the following. Proposition 3.3 states that the Brascamp-Lieb inequality implies an $L^p$-improving estimate.
Proof. Since $p \ge 1$, by making use of duality we obtain the bound below, where $Q(\varepsilon)$ is a cube centered at the origin with side length $\varepsilon$ and $Q_n(\varepsilon)$ denotes the $\varepsilon n$-translate of $Q(\varepsilon)$ for $n \in \mathbb{Z}^d$. Then it follows that the localized estimate holds, where $\tau_{\varepsilon n}[f](x) = f(x + \varepsilon n)$. We then apply Theorem 3.1 to $\tau_{\varepsilon n}[f_j]^{p_j}$, $\tau_{\varepsilon n}[g]^{p'}$ together with the additional mapping $L_{m+1} = \mathrm{d}\pi_{\mathbb{R}^d}$, which yields (3.7). Note that $\varepsilon$ in (3.7) is uniform in $n$ because $B_j(x + \varepsilon n, y) = B_j(x, y) + \varepsilon n$, and also that $\widetilde{Q}$ denotes a cube whose side length is 3 times that of $Q$ with the same center.
We choose $g$ such that $\|g\|_{p'} \le 1$, so we may ignore $\|g\|_{L^{p'}(\widetilde{Q}_n)}$, and note that $\|\tau_{\varepsilon n}[f_j]\|_{L^{p_j}(\widetilde{Q}(\varepsilon))} = \|f_j\|_{L^{p_j}(\widetilde{Q}_n(\varepsilon))}$. Thus by Hölder's inequality we obtain the intermediate bound. Since $p_j \ge 1$, one can choose $r_j$'s such that $\frac{1}{r_j} \le \frac{1}{p_j}$ for each $j = 1, \dots, m$, and then we use the embedding $\ell^{p_j} \hookrightarrow \ell^{r_j}$ to obtain (3.10). Since the cubes $\widetilde{Q}_n(\varepsilon)$ are finitely overlapping, taking the supremum over $\|g\|_{p'} \le 1$ in (3.10) gives the desired estimate for $p, p_1, \dots, p_m$. Now we present the proof of Theorem 1.3.
where $I_d$ denotes the $d \times d$ identity matrix. If there is no confusion, we simply write $\mathrm{d}\Phi = (\mathrm{d}\phi_1, \dots, \mathrm{d}\phi_{d_c})$. (3.13) Without loss of generality we assume that $\mathrm{d}\Phi(0)$ is a $d_c \times k$ zero matrix. That is, for $j = 1, \dots, m$ we have the block form below, where $\Theta^1_j$ and $\Theta^2_j$ are $d \times k$ and $d \times d_c$ matrices, respectively. Since $\Theta^1_j$ has $k$ linearly independent columns, its rank is $k$. In the case $j = m+1$, we have the analogous form, where $Z_d$ denotes the $d \times d$ zero matrix. We show that $\mathbf{L} = (L_1, \dots, L_m, \mathrm{d}\pi_{\mathbb{R}^d})$ and $\mathbf{p} = (\frac{1}{m}, \dots, \frac{1}{m}, \frac{k}{d})$ form a Brascamp-Lieb datum by making use of Lemma 3.2.
Since $\dim(L_j K)$ equals $\dim(K)$ for any subspace $K$ of $K_\pi$ with $j = 1, \dots, m$, we also have the required equality. Thus $K_\pi$ is a critical subspace and $(\mathbf{L}_{K_\pi}, \mathbf{p})$ is a Brascamp-Lieb datum.
On the other hand, it remains to verify (3.3) for any proper subspace of $K_\pi^\perp$. In order to show this, we consider a subspace $K$ and define the associated quantities below. Note that our choice of $\{\Theta_j\}$ satisfying (1.8) implies that there are at most $\ell = d_K - k$ indices $j$ such that $K_j$ is a subspace of $K$. Therefore we have (3.16). Here we choose $p_j = m$ for all $j = 1, \dots, m$ in order to minimize the loss, i.e., to maximize the lower bound in (3.16); thus we fix $\mathbf{p}$ accordingly. Note that the last line of (3.16) is greater than or equal to $d_K$ whenever (3.17) holds. From $d > d_K > k$, it follows that the left-hand side of (3.17) is larger than the corresponding quantity, so (3.17) holds in the stated range.

Suppose first that $K$ is not equal to any $K_j$ for $j = 1, \dots, m$. Then the last line is greater than or equal to $\dim(K) = k$ under the stated condition. On the other hand, let $K = K_\mu$ for some $\mu = 1, \dots, m$. By (1.9), the last line is again greater than or equal to $\dim(K)$.

For $d_K = k-1$, we consider a subspace $K$ such that $K$ is not contained in $K_j$ for all $j = 1, \dots, m$. Then for some $\mu \in \{1, \dots, m\}$ the bound in verifying (3.3) gets lower as $\dim(K \cap K_\mu)$ gets larger. Thus we have $\dim(K/K_\mu) = 1$, and this may happen for any $j = 1, \dots, m$. The worst case is that $K$ is given by the intersection of $K_\mu$ and $K_\nu$ for some $\nu \neq \mu$, so that $\dim(K/K_\mu) = \dim(K/K_\nu) = 0$. However, if we choose any other $j \neq \mu, \nu$, then (1.9) applies. Without loss of generality, say $\mu = 1$ and $\nu = 2$, so that by (1.9) one can check that the last line is greater than or equal to $k-1$ under the stated condition.

Similarly to the $k$- and $(k-1)$-dimensional cases of $K$, for an arbitrary $(k-n)$-dimensional subspace $K$, one can check that the worst case happens when $K$ is contained in $K_{j_i}$ for $j_1, \dots, j_n$. Thus the last line is greater than or equal to $k-n$ under the corresponding condition. Together with (3.17), one can conclude that $\mathrm{BL}(\mathbf{L}, \mathbf{p})$ is finite for the given datum $(\mathbf{L}, \mathbf{p})$. Note that we can rewrite the condition for all $0 \le n \le k-1$ when $m \ge d$. Thus, $(\mathbf{L}, \mathbf{p})$ is a Brascamp-Lieb datum. Hence, by Proposition 3.3, we prove Theorem 1.3.

PROOF OF THEOREM 1.5
Recall that the lacunary maximal function $M^\Theta_S$ is defined by (1.3), where $S$ has $\kappa$ non-vanishing principal curvatures and $\Theta = \{\Theta_j\}$ is a family of mutually linearly independent rotation matrices.
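The decomposition of the identity operator used below is the standard Littlewood-Paley resolution; we record a sketch here for the reader (our normalization, assuming the usual bump function $\varphi$):

```latex
% Standard resolution of the identity (sketch): choose \varphi \in C_c^\infty(\mathbb{R}^d)
% with \int \varphi = 1, set \varphi_\ell(x) = 2^{\ell d} \varphi(2^\ell x) and
% \psi_k = \varphi_{k+1} - \varphi_k; then for each fixed \ell \in \mathbb{Z},
f \;=\; \varphi_\ell * f \;+\; \sum_{k \ge \ell} \psi_k * f,
% since the partial sums telescope to \varphi_{K+1} * f and
% \widehat{\varphi_{K+1}}(\xi) = \widehat{\varphi}(2^{-K-1}\xi) \to 1 as K \to \infty.
```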
Observe that for any fixed $\ell \in \mathbb{Z}$, we can write the identity operator $I$ as a sum of frequency-localized pieces. Then we have a corresponding decomposition in which the second summation runs over the symmetric group $\mathfrak{S}_m$ of $\{1, \dots, m\}$. Note that $M^n(\mathbf{F})$ corresponds to the $\alpha = 0$ case in (4.3). Therefore, the lacunary maximal function $M^\Theta_S$ can be controlled by a constant multiple of the resulting sum. By the similarity of $A^{\alpha,\tau}_\ell(\mathbf{F})$ and $\widetilde{A}^{\alpha,\tau}_\ell(\mathbf{F})$, together with the symmetry in $\tau \in \mathfrak{S}_m$, instead of the first summation it suffices to consider estimates for $A^\alpha_\ell(\mathbf{F})$. The proof will then be completed by combining the following lemmas with an induction on the $m$-linearity.

Lemma 4.1. For $m = 2$ and $\alpha = 1$ we have the stated bound, where $\mathbf{F} = (f_1, f_2)$.

Proof. For $m = 2$ we have the pointwise bound below, where $M_{HL}$ denotes the Hardy-Littlewood maximal function. Since $\phi_\ell(x) = 2^{\ell d} \phi(2^\ell x)$, and since $y$ is contained in a compact surface $S$, we have the required kernel estimates for any $N > 0$. Since $M_{HL}$ and $M_S$ are bounded on $L^p$ for $p \in (1, \infty]$, we need to handle the remaining summation. In particular, we have $\frac{1}{p} = \frac{2d}{d+1}$ when we consider averages over $S = S^{d-1}$.

Lemma 4.3. Let $n \in \mathbb{N}^m$ and $1 = \sum_{j=1}^m \frac{1}{r_j}$ for some $r_1, \dots, r_m \in (1, \infty)$. Then we have the stated exponential decay.

Proofs of Lemmas 4.2 and 4.3 will be given in Section 5; note that Lemma 4.3 is an easy consequence of the assumption (1.13). Since $M^n \le S^n$ by definition, it follows from interpolation between Lemmas 4.2 and 4.3 that the desired bound holds; this proves the theorem for the $m = 2$ case inside the convex hull, and interpolation with the trivial estimates then completes the range. For the induction, we assume that Theorem 1.5 holds for $N$-linear operators with $N = 2, \dots, m-1$; note that we have already shown the $m = 2$ case. By this assumption, we have the following lemma.

Lemma 4.4. For $\alpha = 1, \dots, m$, we have the stated bound. Moreover, if we assume that Theorem 1.5 holds for $N$-linear operators with $N = 2, 3, \dots, m-1$, then the remaining operator satisfies the multilinear estimates of Theorem 1.5.
Proof. The first assertion of the lemma follows directly from the proof of Lemma 4.1. For the second assertion, the operator is just an $(m-\alpha)$-sublinear average, hence the conclusion follows directly from the assumption.
We assume that Theorem 1.5 is true for $N$-linear operators with $N = 2, \dots, m-1$ and prove the $m$-linear case. For general $m$, by Lemma 4.4 we have the reduction below. To obtain (5.1), we exploit the approach of Christ and Zhou [12], which is based on the multilinear Calderón-Zygmund decomposition. We apply the Calderón-Zygmund decomposition at height $C\lambda^{p/p_j}$ to each $f_j$, $j = 1, \dots, m$, for some $C > 0$, so that for each $j$ we have $f_j = g_j + b_j$ with $\|g_j\|_\infty \le C\lambda^{p/p_j}$. For $C_S = 5\max(1, \mathrm{diam}(S))$ we define $E = \cup_{j=1}^m \cup_\gamma C_S Q_{j,\gamma}$, so that $\mathrm{meas}(E) \lesssim \lambda^{-p}$. Note that $C_S Q$ is a cube whose side length is $C_S$ times that of $Q$ with the same center as $Q$. Thus we estimate each level set for $x \in \mathbb{R}^d \setminus E$. To proceed further, we introduce the following lemmas, whose proofs will be given in the last part of this subsection. For $m = 2$ and $p = \frac{1}{2}$, Lemma 5.1 is given in [12]; the proof for the general case $m \ge 2$ and $p = \frac{\kappa+2}{2(\kappa+1)}$ proceeds in a similar manner.
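The Calderón-Zygmund decomposition invoked above is standard; we record its classical statement for convenience (our transcription, applied here with height $\alpha = C\lambda^{p/p_j}$):

```latex
% Calderón-Zygmund decomposition at height \alpha: given f \in L^1(\mathbb{R}^d)
% and \alpha > 0, there exist disjoint dyadic cubes \{Q_\gamma\} and a splitting
% f = g + b with b = \sum_\gamma b_{Q_\gamma} such that
\|g\|_\infty \le 2^d \alpha, \qquad
\operatorname{supp} b_{Q_\gamma} \subset Q_\gamma, \qquad
\int b_{Q_\gamma} = 0, \qquad
\sum_\gamma |Q_\gamma| \le \alpha^{-1} \|f\|_1.
% The mean-zero property of each b_{Q_\gamma} is what produces the decay
% exploited in the proof of Lemma 5.1.
```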
By using Lemmas 5.1 and 5.2, we have the following. We apply Hölder's inequality to the last line and observe that the summation over $i_1, \dots, i_{j-1}, i_{j+1}, \dots, i_m$ yields the desired factor. Then the proof of Lemma 5.1 yields (5.9).

This is because we have (5.10), where the relevant cube has side length $2^{-i}$. Then, thanks to Proposition 1.1, it suffices for the first and second terms in the minimum to show the following. The bound by the first term, 1, follows directly from the fact that $\|\psi_\tau\|_1 = 1$ and Young's inequality.
For the second term, we make use of the vanishing property of $b_Q$. Let $c_Q$ be the center of $Q$.
for any $N > 0$. Thus we apply Minkowski's integral inequality to obtain the desired bound. Therefore we have (5.14), where the first and the last equalities follow from the disjointness of the $Q$'s. This gives a decay estimate when $n_j + \ell < i_j$. Lastly, we assume that $\ell > i_j$, so that for $x \in (C_S Q)^\complement$ and $z \in Q$ we have $\mathrm{dist}(x - 2^{-\ell} y, z) \ge s(Q)$ uniformly in $y \in S$, because we chose $C_S = 5\max(1, \mathrm{diam}(S))$. Thus it follows that the operator $P_\tau$ appears, whose kernel is given below. Therefore, with the help of (5.14), we obtain the required estimate. Observe that the kernel decays rapidly for any $N > 0$; thus we have $\|P_{n_j+\ell}\|_{p \to p} \le 2^{-\ell+i_j}$ regardless of $n_j \ge 0$. This proves the lemma.
When $i_1 \sim i_2$, the left-hand side of (5.15) is bounded by a constant multiple of $|n|$. Thus we consider the case where $i_2$ is greater than $i_1$. The proof will then be completed by combining the following lemmas with an induction argument slightly different from that in Section 4. The proof of Lemma 6.1 is the same as that of Lemma 4.1, so we omit it. Note that $M_{HL}$ and $M_\Sigma$ are bounded on $L^p$ for $p \in (1, \infty]$; hence we need the boundedness of the second term in (6.6). The proof of Lemma 6.2 repeats the proof of Lemma 4.2. The only difference occurs in showing the analogue of Lemma 5.1 for $\widetilde{A}^n_\ell$, which corresponds to $A^n_\ell$, since $\widetilde{A}^n_\ell$ is an average over $\Sigma$, which is $(md-1)$-dimensional, and each $f_j$ depends on $x - y_j$ rather than $x - \Theta_j y$. This difference is harmless, however, because only the compactness of $S$ matters in the proof of Lemma 5.1, and $\Sigma$ is a compact hypersurface. On the other hand, the range $\frac{m+1}{2} \le \frac{1}{p} < \frac{2d+\kappa}{2d}$ follows from Proposition 1.2. The proof of Lemma 6.3 is the same as that of Lemma 4.3 together with (1.7), so we omit it.

Theorem 3.1 ([5, Theorem 1.1]). Let $(\mathbf{L}, \mathbf{c})$ be a Brascamp-Lieb datum. Suppose that $B_j : \mathbb{R}^d \to \mathbb{R}^{d_j}$ are $C^2$ submersions in a neighborhood of a point $x_0$ with $\mathrm{d}B_j(x_0) = L_j$ for $j = 1, \dots, m$. Then for each $\varepsilon > 0$ there exists a neighborhood $U$ of $x_0$ such that
$$\int_U \prod_{j=1}^m f_j\big(B_j(x)\big)^{c_j} \, dx \le (1+\varepsilon)\, \mathrm{BL}(\mathbf{L}, \mathbf{c}) \prod_{j=1}^m \Big( \int_{\mathbb{R}^{d_j}} f_j \Big)^{c_j}$$
for all nonnegative $f_j \in L^1(\mathbb{R}^{d_j})$.