ASYMPTOTIC BEHAVIOR OF THE BEST SOBOLEV TRACE CONSTANT IN EXPANDING AND CONTRACTING DOMAINS

We study the asymptotic behavior of the best constant and extremals of the Sobolev trace embedding W^{1,p}(Ω) ↪ L^q(∂Ω) on expanding and contracting domains. We find that the behavior strongly depends on p and q. For contracting domains we prove that the behavior of the best Sobolev trace constant depends on the sign of qN − pN + p, while for expanding domains it depends on the sign of q − p. We also give some results regarding the behavior of the extremals: for contracting domains we prove that they converge to a constant when rescaled in a suitable way, and for expanding domains we determine when a concentration phenomenon takes place.


Introduction.
Let Ω be a smooth bounded domain in R^N, N ≥ 2. Of importance in the study of boundary value problems for differential operators in Ω are the Sobolev trace inequalities. For any 1 < p < N and 1 < q ≤ p_* = p(N − 1)/(N − p) we have that W^{1,p}(Ω) ↪ L^q(∂Ω), and hence the following inequality holds:

    S_q ‖u‖^p_{L^q(∂Ω)} ≤ ‖u‖^p_{W^{1,p}(Ω)},   for all u ∈ W^{1,p}(Ω).

This is known as the Sobolev trace embedding Theorem. The best constant for this embedding is the largest S_q such that the above inequality holds, that is,

(1)    S_q(Ω) = inf { ‖u‖^p_{W^{1,p}(Ω)} / ‖u‖^p_{L^q(∂Ω)} : u ∈ W^{1,p}(Ω), u|_{∂Ω} ≢ 0 }.

Moreover, if 1 < q < p_* the embedding is compact and as a consequence we have the existence of extremals, i.e. functions where the infimum is attained, see [8]. These extremals are weak solutions of the following problem in Ω,

(2)    Δ_p u = |u|^{p−2} u in Ω,    |∇u|^{p−2} ∂u/∂ν = λ |u|^{q−2} u on ∂Ω,

where Δ_p u = div(|∇u|^{p−2}∇u) is the p-Laplacian and ∂/∂ν is the outer unit normal derivative.
Standard regularity theory and the strong maximum principle, [16], show that any extremal u belongs to the class C^{1,α}_{loc}(Ω) ∩ C^α(Ω̄) and that it is strictly one-signed in Ω̄, so we can assume that u > 0 in Ω̄. Let us fix p, q with 1 < q < p_* and Ω a bounded smooth domain in R^N; C^1 is enough for our calculations. For µ > 0 we consider the family of domains

    Ω_µ = µΩ = {µx : x ∈ Ω}.

The purpose of this work is to describe the asymptotic behavior of the best Sobolev trace constants S_q(Ω_µ) as µ → 0+ and µ → +∞.
As a precedent, see [4] for a detailed analysis of the behavior of extremals and best Sobolev constants in expanding domains for p = 2 and q > 2. In that paper it is proved that the extremals develop a peak near the point where the curvature of the boundary attains its maximum. In [5] and [13] a related problem in the half-space R^N_+ for the critical exponent is studied. See also [6], [7] for other geometric problems that lead to nonlinear boundary conditions.
Let us call u_µ an extremal corresponding to Ω_µ. Making a change of variables, we go back to the original domain Ω. If we define v_µ(x) = u_µ(µx), we have that v_µ ∈ W^{1,p}(Ω) and

(3)    S_q(Ω_µ) = µ^{(Nq−Np+p)/q} inf_{v ∈ W^{1,p}(Ω)} ( µ^{−p} ∫_Ω |∇v|^p dx + ∫_Ω |v|^p dx ) / ( ∫_{∂Ω} |v|^q dσ )^{p/q}.

We can assume, and we do so, that the functions u_µ are chosen so that ∫_{∂Ω} |v_µ|^q dσ = 1.
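The exponent in this rescaled quotient can be checked directly. The following worked computation, a sketch written out under the substitution y = µx with v(x) = u(µx), uses only the definitions already introduced:

```latex
\begin{align*}
\int_{\Omega_\mu} |\nabla u|^p \,dy &= \mu^{N-p} \int_\Omega |\nabla v|^p \,dx,
\qquad
\int_{\Omega_\mu} |u|^p \,dy = \mu^{N} \int_\Omega |v|^p \,dx,\\
\int_{\partial\Omega_\mu} |u|^q \,d\sigma &= \mu^{N-1} \int_{\partial\Omega} |v|^q \,d\sigma .
\end{align*}
Hence
\begin{align*}
\frac{\|u\|^p_{W^{1,p}(\Omega_\mu)}}{\|u\|^p_{L^q(\partial\Omega_\mu)}}
&= \frac{\mu^{N}\bigl(\mu^{-p}\int_\Omega |\nabla v|^p\,dx
          + \int_\Omega |v|^p\,dx\bigr)}
       {\mu^{(N-1)p/q}\bigl(\int_{\partial\Omega} |v|^q\,d\sigma\bigr)^{p/q}}
 = \mu^{(Nq-Np+p)/q}\,
   \frac{\mu^{-p}\int_\Omega |\nabla v|^p\,dx + \int_\Omega |v|^p\,dx}
        {\bigl(\int_{\partial\Omega} |v|^q\,d\sigma\bigr)^{p/q}},
\end{align*}
since $N - (N-1)p/q = (Nq-Np+p)/q$.
```

Taking the infimum over v on both sides gives the scaling of S_q(Ω_µ).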
We remark that the quantity (1) is not homogeneous under dilations or contractions of the domain. This is a remarkable difference with the study of the Sobolev embedding W^{1,p}_0(Ω) ↪ L^q(Ω). First, we deal with the case µ → 0+. As we will see, the behavior of the Sobolev constant and extremals is very different when the domain is contracted than when it is expanded. Our first result is the following:

Theorem 1.1. Let β_pq = (Nq − Np + p)/q. Then

(4)    lim_{µ→0+} µ^{−β_pq} S_q(Ω_µ) = |Ω| |∂Ω|^{−p/q},

and if we scale the extremals u_µ to the original domain, v_µ(x) = u_µ(µx), normalized by ∫_{∂Ω} |v_µ|^q dσ = 1, then

    v_µ → |∂Ω|^{−1/q}   strongly in W^{1,p}(Ω).

Observe that the behavior of the Sobolev trace constant strongly depends on p and q. With β_pq = (Nq − Np + p)/q we have that, as µ → 0+,

    S_q(Ω_µ) → 0 if β_pq > 0,    S_q(Ω_µ) → |Ω| |∂Ω|^{−p/q} if β_pq = 0,    S_q(Ω_µ) → +∞ if β_pq < 0.

Let us remark that the influence of the geometry of the domain appears in (4).
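As a quick sanity check on the exponent arithmetic, one can tabulate β_pq = (Nq − Np + p)/q and the resulting contracting-domain regime; note that β_pq > 0 exactly when q > p(N − 1)/N. The function names below are ours, introduced only for this illustration:

```python
def beta(p, q, N):
    """The exponent beta_pq = (N*q - N*p + p) / q from Theorem 1.1."""
    return (N * q - N * p + p) / q

def contracting_limit(p, q, N):
    """Limit of S_q(Omega_mu) as mu -> 0+, according to the sign of beta_pq."""
    b = beta(p, q, N)
    if b > 0:
        return "0"
    if b == 0:
        return "finite"
    return "infinity"

# For p = 2, N = 4 the threshold is q = p*(N - 1)/N = 3/2:
print(contracting_limit(2, 2.0, 4))   # q above the threshold
print(contracting_limit(2, 1.5, 4))   # q at the threshold
print(contracting_limit(2, 1.2, 4))   # q below the threshold
```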
In the special case p = q, problem (2) becomes a nonlinear eigenvalue problem. For p = 2, this eigenvalue problem is known as the Steklov problem, [2]. In [8] it is proved, applying the Ljusternik-Schnirelman critical point theory on C^1 manifolds, that there exists a sequence of variational eigenvalues λ_k ↗ +∞, and it is easy to see that the first eigenvalue λ_1(Ω) verifies λ_1(Ω) = S_p(Ω). So Theorem 1.1 shows a difference between the behavior of the first eigenvalue of (2) with respect to the domain and the behavior of the first eigenvalue of the following Dirichlet problem

    −Δ_p u = λ |u|^{p−2} u in Ω,    u = 0 on ∂Ω,

where it is a well known fact that λ_1 increases as the domain decreases, see [1], [10].
Theorem 1.2. There exists a constant λ̃_2 such that

    lim_{µ→0+} µ^{p−1} λ_2(Ω_µ) = λ̃_2.

This constant λ̃_2 is the first nonzero eigenvalue of the following problem

(7)    Δ_p v = 0 in Ω,    |∇v|^{p−2} ∂v/∂ν = λ |v|^{p−2} v on ∂Ω.

Moreover, if we take an eigenfunction u_{2,µ} associated to λ_2(Ω_µ) and scale it to Ω as in Theorem 1.1, we obtain that v_{2,µ} → v_2 in W^{1,p}(Ω), where v_2 is an eigenfunction of (7) associated to λ̃_2. Also, every eigenvalue λ(Ω_µ) other than λ_1(Ω_µ) verifies λ(Ω_µ) → +∞ as µ → 0+. If λ(Ω_{µ_j}) is a sequence of eigenvalues such that there exists λ̃ with µ_j^{p−1} λ(Ω_{µ_j}) → λ̃, and (v_j) is the sequence of associated eigenfunctions rescaled as in Theorem 1.1, then (v_j) has a convergent subsequence (v_{j_k}) with a limit v that is an eigenfunction of (7) with eigenvalue λ̃.
Observe that the first eigenvalue of (7) is zero, with associated eigenfunction a constant. Hence Theorem 1.1 says that the first eigenvalue and the first eigenfunction of our problem (2) converge to those of (7). Theorem 1.2 says that λ(Ω_µ) → +∞ as µ → 0+ for the remaining eigenvalues and that problem (7) is a limit problem for (2) when µ → 0+. We believe that Theorem 1.2 is our main result.

Now, we deal with the case µ → +∞. In this case we find, as before, that the behavior strongly depends on p and q. We prove,

Theorem 1.3. There exist positive constants c, C such that, for every µ large,

    c µ^{β_pq − 1} ≤ S_q(Ω_µ) ≤ C µ^{β_pq − 1}.

For the lower bound in the case p < q < p_* we have to assume that the corresponding extremals v_µ, rescaled such that max_Ω̄ v_µ = 1, verify |∇v_µ| ≤ Cµ. Moreover, for all cases, we have that the corresponding extremals u_µ, rescaled as in Theorem 1.1, concentrate at the boundary, in the sense that

    ∫_Ω |v_µ|^p dx → 0   as µ → +∞.

As before, the behavior of the Sobolev trace constant depends on p and q. Since β_pq − 1 = (N − 1)(q − p)/q, we have that, as µ → +∞,

    S_q(Ω_µ) → 0 if q < p,    S_q(Ω_µ) remains bounded if q = p,    S_q(Ω_µ) → +∞ if q > p.

The hypothesis |∇v_µ| ≤ Cµ is a regularity assumption, see [15] for C^{1,α}_{loc} regularity results. As a consequence of our arguments we have that the extremals do not develop a peak if 1 < q < p (see Lemma 3.6). For p = q it is proved in [12] that the first eigenvalue λ_1(Ω_µ) = S_p(Ω_µ) is isolated and simple. As a consequence, if Ω is a ball the extremal v_µ is radial and hence it does not develop a peak. Finally, for q > p the extremals develop a peaking concentration phenomenon in the sense that, for every a > 0, the measure of the set where v_µ > a tends to zero as µ → +∞, with max_Ω̄ v_µ = 1. This is in concordance with the results of [4], where for p = 2, q > 2 it is found that the extremals concentrate, with the formation of a peak, near a point of the boundary where the curvature is maximized. We believe that for q > p the extremals develop a single peak, as in the case p = 2. Nevertheless, that kind of analysis needs some fine knowledge of the limit problem in R^N_+ that is not yet available for the p-Laplacian.
Let us give an idea of the proof of the lower bounds. In the case p = q we can obtain the lower bound by an approximation procedure. We replace W^{1,p}(Ω) by an increasing sequence of subspaces in the minimization problem. Then we prove a convergence result and find a uniform bound from below for the approximating problems. We believe that this idea can be used in other contexts. For the case q > p we use our assumption |∇v_µ| ≤ Cµ to prove a reverse Hölder inequality for the extremals on the boundary that allows us to reduce to the case p = q.
Finally, for large µ, in the case p = q we can prove that every eigenvalue is bounded.
The rest of the paper is organized as follows. In Section 2 we deal with the case µ → 0+ and in Section 3 we study the case µ → +∞. Throughout the paper, C denotes a constant that may vary from line to line but remains independent of the relevant quantities.

2.
Behavior as µ → 0+. In this section we focus on the case µ → 0+. First we prove Theorem 1.1, and then we study the case q = p (the eigenvalue problem).
Let us begin with the following Lemma.
Lemma 2.1. Under the assumptions of Theorem 1.1, it follows that

    S_q(Ω_µ) ≤ µ^{(Nq−Np+p)/q} |Ω| |∂Ω|^{−p/q}.

Proof. Let us recall that

    S_q(Ω_µ) = inf { ‖u‖^p_{W^{1,p}(Ω_µ)} / ‖u‖^p_{L^q(∂Ω_µ)} : u ∈ W^{1,p}(Ω_µ), u|_{∂Ω_µ} ≢ 0 }.

Then, taking u ≡ 1 it follows that

    S_q(Ω_µ) ≤ |Ω_µ| / |∂Ω_µ|^{p/q} = µ^N |Ω| / ( µ^{N−1} |∂Ω| )^{p/q} = µ^{(Nq−Np+p)/q} |Ω| |∂Ω|^{−p/q},

as we wanted to see.
This Lemma shows that the ratio S_q(Ω_µ)/µ^{(Nq−Np+p)/q} is bounded. So a natural question is to determine whether it converges to some value. This is answered in Theorem 1.1, which we prove next.
Proof of Theorem 1.1. Let u_µ ∈ W^{1,p}(Ω_µ) be an extremal for S_q(Ω_µ) and define v_µ(x) = u_µ(µx); then v_µ ∈ W^{1,p}(Ω). We can assume that the functions u_µ are chosen so that

    ∫_{∂Ω} |v_µ|^q dσ = 1.

Equation (3) and Lemma 2.1 give, for µ < 1,

    µ^{−p} ∫_Ω |∇v_µ|^p dx + ∫_Ω |v_µ|^p dx = µ^{−β_pq} S_q(Ω_µ) ≤ |Ω| |∂Ω|^{−p/q},

so there exists a function v ∈ W^{1,p}(Ω) and a sequence µ_j → 0+ such that

    v_{µ_j} ⇀ v weakly in W^{1,p}(Ω),    v_{µ_j} → v in L^p(Ω) and in L^q(∂Ω).

Moreover,

    ∫_Ω |∇v_µ|^p dx ≤ C µ^p.

Hence ∇v_µ → 0 in L^p(Ω). It follows that the limit v is a constant and must verify

    ∫_{∂Ω} |v|^q dσ = 1,    that is, v = |∂Ω|^{−1/q},

and so the full sequence v_µ converges weakly in W^{1,p}(Ω) to v. From our previous bounds we have

    lim_{µ→0+} ∫_Ω |v_µ|^p dx = ∫_Ω |v|^p dx = |Ω| |∂Ω|^{−p/q}

and, since µ^{−p} ∫_Ω |∇v_µ|^p dx + ∫_Ω |v_µ|^p dx ≤ |Ω| |∂Ω|^{−p/q},

    lim_{µ→0+} µ^{−p} ∫_Ω |∇v_µ|^p dx = 0.

Hence lim_{µ→0+} µ^{−β_pq} S_q(Ω_µ) = |Ω| |∂Ω|^{−p/q} and ∇v_µ → 0 = ∇v in L^p(Ω). Therefore, we have strong convergence, v_µ → |∂Ω|^{−1/q} in W^{1,p}(Ω). The proof is finished.
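The prediction of Theorem 1.1 can be sanity-checked numerically. The sketch below works with the one-dimensional analogue Ω = (0, 1), which is strictly outside the paper's N ≥ 2 setting but where the Rayleigh quotient still makes sense with the "trace" given by endpoint values, for p = q = 2. There the theorem suggests λ_1(Ω_µ) = S_2(Ω_µ) ≈ µ|Ω|/|∂Ω| = µ/2 as µ → 0+. The finite element discretization is our own illustration, not the paper's method:

```python
import numpy as np

def first_steklov_eigenvalue(mu, n=200):
    """Minimize (int_0^mu u'^2 + u^2 dx) / (u(0)^2 + u(mu)^2) over P1 finite
    element functions on (0, mu): a 1D analogue of S_2(Omega_mu) for p = q = 2."""
    h = mu / n
    m = n + 1
    A = np.zeros((m, m))
    Ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness
    Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])     # element mass
    for e in range(n):                                       # assemble A = K + M
        A[e:e + 2, e:e + 2] += Ke + Me
    B = np.zeros((m, m))          # boundary "mass": only the two endpoint nodes
    B[0, 0] = B[-1, -1] = 1.0
    # min u^T A u / u^T B u equals 1 / (largest eigenvalue of A^{-1} B)
    nu = np.linalg.eigvals(np.linalg.solve(A, B)).real.max()
    return 1.0 / nu

mu = 0.01
print(first_steklov_eigenvalue(mu) / mu)   # close to |Omega|/|bdry| = 1/2 for small mu
```

For this 1D problem the exact first eigenvalue is tanh(µ/2), so the ratio tends to 1/2 as µ → 0+, matching the µ|Ω|/|∂Ω| scaling.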
Now we turn our attention to the case p = q, which is a nonlinear eigenvalue problem. We recall that Theorem 1.1 says that λ_1(Ω_µ) = S_p(Ω_µ) ∼ µ |Ω| |∂Ω|^{−1} → 0 as µ → 0+. First we focus on the behavior of the second eigenvalue λ_2. For the proof of Theorem 1.2 we need the following Lemmas. We believe that these results have independent interest.
Lemma 2.2. The problem

(8)    Δ_p w = 0 in Ω,    |∇w|^{p−2} ∂w/∂ν = h on ∂Ω,

has a weak solution if and only if ∫_{∂Ω} h(x) dσ = 0. Moreover, the solution is unique up to an additive constant.
Proof. It is straightforward to check that if there exists a weak solution to (8) then, taking the constant 1 as test function, ∫_{∂Ω} h(x) dσ = 0. Conversely, let X = {w ∈ W^{1,p}(Ω) : ∫_{∂Ω} w dσ = 0}. By a standard compactness argument, one can verify that the following Poincaré inequality holds,

(9)    ∫_Ω |w|^p dx ≤ C ∫_Ω |∇w|^p dx,

for every w ∈ X and some constant C. Let us now define

    Φ(w) = (1/p) ∫_Ω |∇w|^p dx − ∫_{∂Ω} h w dσ.

Critical points of Φ in W^{1,p}(Ω) are weak solutions of (8). By (9), Φ is a strictly convex, bounded below functional on X, and so there exists a unique function w ∈ X with Φ(w) = min_X Φ; hence Φ'(w)(v) = 0 for every v ∈ X. Now, using the fact that ∫_{∂Ω} h(x) dσ = 0, it is easy to see that Φ'(w)(v) = 0 for every v ∈ W^{1,p}(Ω), and the proof is now complete.
Now we find a variational characterization of the first nonzero eigenvalue of the limit problem (7).
Lemma 2.3. Let Y = {u ∈ W^{1,p}(Ω) : ∫_{∂Ω} |u|^{p−2} u dσ = 0} and define

(11)    λ̃_2 = inf { ∫_Ω |∇u|^p dx : u ∈ Y − {0}, ‖u‖_{L^p(∂Ω)} = 1 }.

Then the infimum in (11) is attained.

Proof. Let u_n be a minimizing sequence with ‖u_n‖_{L^p(∂Ω)} = 1. By a compactness argument we can extract a subsequence, that we still call u_n, such that

    u_n ⇀ u weakly in W^{1,p}(Ω),    u_n → u in L^p(∂Ω).

Hence u ∈ Y − {0}, ‖u‖_{L^p(∂Ω)} = 1. Moreover, by the weak lower semicontinuity of the norm, we have that

    ∫_Ω |∇u|^p dx ≤ lim inf_n ∫_Ω |∇u_n|^p dx = λ̃_2.

Therefore u is a minimizer.

Now we are ready to deal with the proof of Theorem 1.2, which is the main result of the paper.
Proof of Theorem 1.2. We can assume that 0 ∈ Ω, and then we can take u(x) = x_1 in the characterization of λ_2 given by (6) to obtain

    λ_2(Ω_µ) ≤ C µ^{1−p}.

Hence, if we consider v_{2,µ}, any eigenfunction associated to λ_2(Ω_µ) rescaled to Ω and normalized with ‖v_{2,µ}‖_{L^p(∂Ω)} = 1, we get

    µ^{−p} ∫_Ω |∇v_{2,µ}|^p dx + ∫_Ω |v_{2,µ}|^p dx = µ^{−1} λ_2(Ω_µ) ≤ C µ^{−p}.

Therefore we have that (v_{2,µ}) is bounded in W^{1,p}(Ω), and we can extract a sequence µ_j → 0+ such that v_{2,µ_j} ⇀ ṽ_2 weakly in W^{1,p}(Ω) and strongly in L^p(Ω) and in L^p(∂Ω). As it is proved in [9], |{v_{2,µ_j} > 0} ∩ ∂Ω|, |{v_{2,µ_j} < 0} ∩ ∂Ω| > c independently of µ_j, hence ṽ_2 changes sign and, in particular, is not constant, so ∫_Ω |∇ṽ_2|^p dx > 0. Taking a subsequence, if necessary, we can assume that

    µ_j^{p−1} λ_2(Ω_{µ_j}) → λ̃,

and, as

    ∫_Ω |∇v_{2,µ_j}|^p dx ≤ µ_j^{p−1} λ_2(Ω_{µ_j}),

weak lower semicontinuity gives 0 < ∫_Ω |∇ṽ_2|^p dx ≤ λ̃; hence we obtain that λ̃ ≠ 0. Taking ϕ ≡ 1 in the weak form of the equation satisfied by v_{2,µ} we get that

    λ_2(Ω_µ) µ^{p−1} ∫_{∂Ω} |v_{2,µ}|^{p−2} v_{2,µ} dσ = µ^p ∫_Ω |v_{2,µ}|^{p−2} v_{2,µ} dx.

Passing again to the limit we have that

    ∫_{∂Ω} |ṽ_2|^{p−2} ṽ_2 dσ = 0,

that is, ṽ_2 ∈ Y. Let w be a function where the infimum (11) is attained, with ‖w‖_{L^p(∂Ω)} = 1. As w ∈ A (see (6)), we have

    µ^{p−1} λ_2(Ω_µ) ≤ ∫_Ω |∇w|^p dx + µ^p ∫_Ω |w|^p dx.

Taking the limit as µ → 0+ we get λ̃ ≤ λ̃_2. On the other hand, since ṽ_2 ∈ Y − {0} with ‖ṽ_2‖_{L^p(∂Ω)} = 1,

    λ̃_2 ≤ ∫_Ω |∇ṽ_2|^p dx ≤ lim_j ∫_Ω |∇v_{2,µ_j}|^p dx = λ̃,

from where it follows that λ̃ = λ̃_2 and that v_{2,µ} → ṽ_2 strongly in W^{1,p}(Ω). Once again, we pass to the limit as µ → 0+ in the weak formulation satisfied by v_{2,µ} to get that ṽ_2 is an eigenfunction associated to λ̃_2. By the characterization of λ̃_2 given in Lemma 2.3 we get that this is the first nonzero eigenvalue for problem (7).

Now we find the behavior of the remaining eigenvalues. Let λ(Ω_µ) be an eigenvalue (variational or not). Then, as the variational eigenvalues λ_k(Ω_µ) form an unbounded sequence, there exists k such that λ_2(Ω_µ) ≤ λ(Ω_µ) ≤ λ_k(Ω_µ). Now, let x_1, . .
., x_k ∈ ∂Ω and r = r(k) be such that dist(x_i, x_j) > 2r. Let φ ∈ C^∞(Ω̄) be a nonnegative function with support B(0, r) and let φ_j(x) = φ(x − x_j). Now, let us define the subspace spanned by φ_1, . . . , φ_k and use it as a test space in the variational characterization (12) of λ_k(Ω_µ). Changing variables we get the rescaled Rayleigh quotients of linear combinations of the φ_i. As the φ_i have disjoint supports, the quotient of any combination is controlled by the maximum of the quotients of the individual φ_i and, as the boundary of Ω is regular, we have that there exists a constant C_k such that each of these quotients is bounded by C_k µ^{1−p}. Using these estimates in (12) we obtain

    λ_k(Ω_µ) ≤ C_k µ^{1−p}.

Finally we study the convergence of the eigenvalues and eigenfunctions corresponding to the rest of the spectrum. By our hypotheses we have that

    µ_j^{p−1} λ(Ω_{µ_j}) → λ̃.

As (v_j) is bounded in W^{1,p}(Ω) we can extract a subsequence (that we still call v_j) such that

    v_j ⇀ v weakly in W^{1,p}(Ω),    v_j → v in L^p(Ω) and in L^p(∂Ω).

Using that the v_j are solutions of (2), we obtain

(13)    ∫_Ω |∇v_j|^{p−2} ∇v_j ∇φ dx + µ_j^p ∫_Ω |v_j|^{p−2} v_j φ dx = λ(Ω_{µ_j}) µ_j^{p−1} ∫_{∂Ω} |v_j|^{p−2} v_j φ dσ.

Taking φ ≡ 1 we get

(14)    ∫_{∂Ω} |v|^{p−2} v dσ = 0.

By Lemma 2.2 and (14), there exists a unique (up to an additive constant, which we fix) w ∈ W^{1,p}(Ω) with

(15)    Δ_p w = 0 in Ω,    |∇w|^{p−2} ∂w/∂ν = λ̃ |v|^{p−2} v on ∂Ω.

Combining (13), the variational formulation of (15) with φ = v_j − w and the fact that we are dealing with a strongly monotone operator (see [3]), we get

    ∫_Ω (|∇v_j|^{p−2}∇v_j − |∇w|^{p−2}∇w) · ∇(v_j − w) dx
        = −µ_j^p ∫_Ω |v_j|^{p−2} v_j (v_j − w) dx
          + (λ(Ω_{µ_j}) µ_j^{p−1} − λ̃) ∫_{∂Ω} |v_j|^{p−2} v_j (v_j − w) dσ
          + λ̃ ∫_{∂Ω} (|v_j|^{p−2} v_j − |v|^{p−2} v)(v_j − w) dσ.

The first two terms go to zero as j → ∞. Concerning the last one, we have that it is bounded by

    C ‖ |v_j|^{p−2} v_j − |v|^{p−2} v ‖_{L^{p'}(∂Ω)} ‖ v_j − w ‖_{L^p(∂Ω)} → 0.

Therefore, taking the limit j → ∞, we get ∇v_j → ∇w in L^p(Ω) and, as ∇v_j ⇀ ∇v weakly in L^p(Ω), we conclude that ∇v = ∇w, so that v = w and v_j → v strongly in W^{1,p}(Ω). Finally, taking limits in (13) we obtain that v is a weak solution of (7), as we wanted to prove.

3.
Behavior as µ → +∞. In this section we study the behavior of the Sobolev constant in expanding domains, that is, when µ → +∞. To clarify the exposition, we divide the proof of Theorem 1.3 into several Lemmas. Let us begin with the upper bounds.
Lemma 3.1. Let p = q. Then there exists a constant C, independent of µ, such that λ_1(Ω_µ) ≤ C for every µ large.

Proof. We have p = q and look for a bound on the first eigenvalue λ_1(Ω_µ). Let v be such that v = 1 on ∂Ω and v = 0 in Ω_r = {x ∈ Ω : dist(x, ∂Ω) ≥ r}. Changing variables as before we have that

    λ_1(Ω_µ) ≤ µ ( µ^{−p} ∫_Ω |∇v|^p dx + ∫_Ω |v|^p dx ) / ∫_{∂Ω} |v|^p dσ.

Using that |∇v| ≤ C/r we obtain

    λ_1(Ω_µ) ≤ C µ ( µ^{−p} r^{1−p} + r ).

Finally, choose r = µ^{−1} to obtain the desired result.
Lemma 3.3. Let 1 < q < p_*, q ≠ p. Then there exists a constant C such that, for every µ large,

    S_q(Ω_µ) ≤ C µ^{(Nq−Np+p−q)/q}.

Proof. We observe that the same calculations of Lemma 3.2 show that S_q is bounded independently of µ for 1 < q < p. Now, as in the case p = q (Lemma 3.1), let us take v(x) such that v = a = constant on ∂Ω and v = 0 in Ω_r = {x ∈ Ω : dist(x, ∂Ω) ≥ r}. We fix a such that ∫_{∂Ω} |v|^q dσ = 1. Using the same arguments as in Lemma 3.1 we get

    S_q(Ω_µ) ≤ C µ^{(Nq−Np+p)/q} ( µ^{−p} r^{1−p} + r ),

and choosing r = µ^{−1} we obtain S_q(Ω_µ) ≤ C µ^{(Nq−Np+p−q)/q}.

Now let us prove that the extremals concentrate at the boundary.
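The strip computation behind the choice r = µ^{−1} can be written out in full. The sketch below uses the test function just described (equal to a on ∂Ω, vanishing on Ω_r, with |∇v| ≤ Ca/r on the intermediate strip of width r and measure of order r):

```latex
\begin{align*}
\mu^{-p}\int_\Omega |\nabla v|^p\,dx + \int_\Omega |v|^p\,dx
&\le C\Bigl(\mu^{-p}\,\Bigl(\frac{a}{r}\Bigr)^p r + a^p r\Bigr)
 = C a^p \bigl(\mu^{-p} r^{1-p} + r\bigr),
\end{align*}
while the normalization $\int_{\partial\Omega}|v|^q\,d\sigma = a^q|\partial\Omega| = 1$
fixes $a$. With $r = \mu^{-1}$ both terms in the parenthesis equal $\mu^{-1}$, so
\[
S_q(\Omega_\mu) \le C\,\mu^{(Nq-Np+p)/q}\,\mu^{-1} = C\,\mu^{(Nq-Np+p-q)/q}.
\]
```

Any other choice of r gives a weaker bound: the two terms balance exactly at r = µ^{−1}.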
Lemma 3.4. Let 1 < q < p_*. The extremals concentrate at the boundary in the sense that

    ∫_Ω |v_µ|^p dx → 0   as µ → +∞.

Proof. Let v_µ be an extremal such that ‖v_µ‖_{L^q(∂Ω)} = 1. From our previous bound we get, for p = q,

    ∫_Ω |v_µ|^p dx ≤ µ^{−1} λ_1(Ω_µ) ≤ C µ^{−1} → 0.

Now we turn to the case 1 < q < p. We have, from our previous calculations,

    ∫_Ω |v_µ|^p dx ≤ C µ^{(N−1)(q−p)/q}.

Hence ∫_Ω |v_µ|^p dx → 0 as µ → +∞, since q < p.
Finally, for p < q < p_* we get that

    ∫_Ω |v_µ|^p dx ≤ C µ^{(p(N−1)−Nq)/q},

and therefore, as we are in the case q > p and so Nq > p(N − 1), we get ∫_Ω |v_µ|^p dx → 0 as µ → +∞. The proof is now complete.
To get the bound from below for λ_1 in the case p = q we use the following idea: first we replace the minimization problem in W^{1,p}(Ω) with a minimization problem in a sequence of increasing subspaces, and next we find that for an adequate choice of the subspaces we get a uniform lower bound for the approximate problems. This idea, combined with a convergence result for the approximations, gives the desired result. So let us first state and prove the convergence result. Since this procedure works for every 1 < q < p_*, we prove it in full generality. We want to describe a general approximation procedure for S_q. These results are essentially contained in [14], but we reproduce the main arguments here in order to make the paper self-contained.
The Sobolev trace constant S_q can be characterized as

(17)    S_q = inf { ‖u‖^p_{W^{1,p}(Ω)} : u ∈ W^{1,p}(Ω), ‖u‖_{L^q(∂Ω)} = 1 }.

As we have already mentioned, the idea is to replace the space W^{1,p}(Ω) with a subspace V_h in the minimization problem (17). To this end, let (V_h) be an increasing sequence of closed subspaces of W^{1,p}(Ω) such that

(18)    ∪_h V_h is dense in W^{1,p}(Ω).

We observe that the only requirement on the subspaces V_h is (18). This allows us to choose V_h as the usual finite element spaces, for example.
With this sequence of subspaces V_h we define our approximation of S_q by

(19)    S_{q,h} = inf { ‖u‖^p_{W^{1,p}(Ω)} : u ∈ V_h, ‖u‖_{L^q(∂Ω)} = 1 }.

We have that, under hypothesis (18), S_{q,h} approximates S_q as h → 0.
Theorem 3.1. Let v be an extremal for (17). Then there exists a constant C, independent of h, such that, for every h small enough,

    S_q ≤ S_{q,h} ≤ S_q + C inf_{u ∈ V_h} ‖v − u‖_{W^{1,p}(Ω)}.

In particular, by (18), S_{q,h} → S_q as h → 0.
Proof. As V_h ⊂ W^{1,p}(Ω), we have that

(20)    S_q ≤ S_{q,h}.

Let us choose u_h ∈ V_h with ‖v − u_h‖_{W^{1,p}(Ω)} ≤ 2 inf_{u ∈ V_h} ‖v − u‖_{W^{1,p}(Ω)}. Now we use that ‖u_h‖_{L^q(∂Ω)} → ‖v‖_{L^q(∂Ω)} = 1, by the continuity of the trace operator, and hypothesis (18) to obtain that, for every h small enough,

(21)    S_{q,h} ≤ ‖ u_h / ‖u_h‖_{L^q(∂Ω)} ‖^p_{W^{1,p}(Ω)} ≤ S_q + C ‖v − u_h‖_{W^{1,p}(Ω)}.

The result follows from (20) and (21).
Now we prove a result regarding the convergence of the approximate extremals. We will not use it, but it completes the analysis of the approximations.

Theorem 3.2. Let u_h be a function in V_h where the infimum (19) is achieved. Then from any sequence h → 0 we can extract a subsequence h_j → 0 such that u_{h_j} converges strongly to an extremal in W^{1,p}(Ω). That is, there exists an extremal v of (17) with lim_{h_j→0} ‖u_{h_j} − v‖_{W^{1,p}(Ω)} = 0.

Proof. Theorem 3.1 and hypothesis (18) give that

    S_{q,h} → S_q   as h → 0.

Hence there exists a constant C such that, for every h small enough, ‖u_h‖_{W^{1,p}(Ω)} ≤ C. Therefore we can extract a subsequence, that we denote by u_{h_j}, such that

(22)    u_{h_j} ⇀ w weakly in W^{1,p}(Ω),    u_{h_j} → w in L^q(∂Ω).

Hence, from the L^q(∂Ω) convergence we have ‖w‖_{L^q(∂Ω)} = 1. Therefore w is an admissible function in the minimization problem (17). Now we observe that, if v is an extremal,

    S_q = ‖v‖^p_{W^{1,p}(Ω)} ≤ ‖w‖^p_{W^{1,p}(Ω)} ≤ lim inf ‖u_{h_j}‖^p_{W^{1,p}(Ω)} = lim S_{q,h_j} = S_q,

and therefore,

(23)    lim_{h_j→0} ‖u_{h_j}‖_{W^{1,p}(Ω)} = ‖w‖_{W^{1,p}(Ω)}.

The space W^{1,p}(Ω) being uniformly convex, the weak convergence (22) and the convergence of the norms (23) imply convergence in norm. Therefore u_{h_j} → w in W^{1,p}(Ω). This limit w verifies ‖w‖^p_{W^{1,p}(Ω)} = S_q and ‖w‖_{L^q(∂Ω)} = 1. Hence it is an extremal, and we have that lim_{h_j→0} ‖u_{h_j} − w‖_{W^{1,p}(Ω)} = 0.
With these convergence results we can prove the lower bound in the case p = q.

Lemma 3.5. Let p = q. Then S_p(Ω_µ) = λ_1(Ω_µ) ≥ C for every µ large.
Proof. Let us choose a particular subspace V_h of W^{1,p}(Ω). As the boundary of Ω is smooth, we can define new coordinates near the boundary as follows. As before, we denote by Ω_r = {x ∈ Ω : dist(x, ∂Ω) ≥ r} and by ∂Ω_r = {x ∈ Ω : dist(x, ∂Ω) = r}, and we use the following construction. We define Φ(ξ, r) = ξ − rν(ξ), where ν(ξ) is the exterior normal vector at ξ ∈ ∂Ω, so that Φ : ∂Ω × (0, R) → Ω \ Ω_R. We recall that Φ is a diffeomorphism if R is small enough. With this map Φ we can define a triangulation as follows. First, choose a uniform regular triangulation of size h of the set ∂Ω × (0, R). Now, by the map Φ we get a triangulation of the strip Ω \ Ω_R. In fact, we can select as nodes x_ij the points Φ(ξ_i, r_j), where (ξ_i, r_j) is a node of the uniform mesh of ∂Ω × (0, R). Our space V_h is defined by all the continuous functions in W^{1,p}(Ω) that are linear over each triangle of the strip Ω \ Ω_R. This space is the usual space of linear finite elements in special triangulations defined using the mapping Φ; see [3] for detailed information on the finite element method.
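For a concrete instance of the map Φ(ξ, r) = ξ − rν(ξ), take Ω the unit disk, where ν(ξ) = ξ on ∂Ω; the nodes x_ij = Φ(ξ_i, r_j) then sit at exact distance r_j from the boundary. The snippet below is our own illustration (the paper works with a general smooth Ω) of building such a boundary-fitted grid:

```python
import math

def boundary_fitted_nodes(n_theta, n_r, R=0.5):
    """Nodes Phi(xi_i, r_j) = (1 - r_j) * xi_i for the unit disk:
    xi_i runs over n_theta equispaced boundary points, r_j over (0, R]."""
    nodes = []
    for j in range(1, n_r + 1):
        r = R * j / n_r
        for i in range(n_theta):
            t = 2.0 * math.pi * i / n_theta
            xi = (math.cos(t), math.sin(t))          # xi lies on the boundary
            nodes.append(((1.0 - r) * xi[0], (1.0 - r) * xi[1], r))
    return nodes

# every node lies at distance exactly r from the unit circle
for x, y, r in boundary_fitted_nodes(8, 4):
    assert abs((1.0 - math.hypot(x, y)) - r) < 1e-12
```

Connecting neighboring nodes (in i and j) with edges produces the triangulation of the strip used in the proof.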
Let us call u_h the functions in V_h. We have indexed the nodes x_ij so that x_{i1} ∈ ∂Ω and x_ij is at distance j − 1 (in nodes) from the boundary ∂Ω. We denote by u_ij the value of u_h at the node x_ij and by a_ij the value of the gradient of u_h on the triangle T_ij. We assume that the index i runs from 1 to l and j from 1 to k_0. Remark that k_0 ∼ R/h and l ∼ |∂Ω|/h^{N−1}.
We want to find a lower bound (independent of h and µ) on the approximation of the first eigenvalue,

    λ_{1,h}(Ω_µ) = inf { µ ( µ^{−p} ∫_Ω |∇u_h|^p dx + ∫_Ω |u_h|^p dx ) / ∫_{∂Ω} |u_h|^p dσ : u_h ∈ V_h, u_h|_{∂Ω} ≢ 0 }.

To this end we consider a function u_h ∈ V_h and an index k, 1 ≤ k ≤ k_0, measuring how far from the boundary (in layers of nodes) the values of u_h remain comparable to its boundary values. First, let us observe that if k = k_0 (there are k_0 triangles between the two boundaries of Ω \ Ω_R), then u_h is controlled on a strip of fixed width R; as k_0 ∼ R/h we get a uniform bound and we are done. Hence let us assume that k < k_0. As before we can bound the term carrying the factor µ. Now we observe that the boundary values u_{i1} are controlled by the values u_{ik} together with the gradients a_ij on the intermediate triangles. Using this fact and combining (24) and (25), we obtain a lower bound for the Rayleigh quotient in terms of h, k and µ. Hence, if we call τ = µhk, we get that the resulting bound is independent of h and µ. Since the subspaces that we have chosen verify hypothesis (18), we can use the convergence result, Theorem 3.1, to get that λ_1(Ω_µ) ≥ C for every µ large.

Let us look at the case 1 < q < p more carefully, and obtain a bound from below using the lower bound obtained for λ_1(Ω_µ).

Lemma 3.6. Let 1 < q < p. Then, for every µ large, S_q(Ω_µ) ≥ C µ^{β_pq − 1}. Moreover, this shows that, if v is an extremal normalized with ‖v‖_{L^q(∂Ω)} = 1, then

    ∫_{∂Ω} |v|^p dσ ≤ C.

Hence there is no peak formation in this case.
Proof. As we mentioned in the introduction, we have that

    S_q(Ω_µ) = µ^{β_pq} inf ( µ^{−p} ∫_Ω |∇v|^p dx + ∫_Ω |v|^p dx ) / ( ∫_{∂Ω} |v|^q dσ )^{p/q}.

Using that 1 < q < p we get, by Hölder's inequality,

    ( ∫_{∂Ω} |v|^q dσ )^{p/q} ≤ |∂Ω|^{p/q − 1} ∫_{∂Ω} |v|^p dσ.

Hence, using our previous lower bound for λ_1(Ω_µ), we get that there exists a constant C such that S_q(Ω_µ) ≥ C µ^{β_pq − 1}. The upper bound proved in Lemma 3.3, S_q(Ω_µ) ≤ C µ^{β_pq − 1}, gives that, for an extremal v normalized with ‖v‖_{L^q(∂Ω)} = 1,

    ∫_{∂Ω} |v|^p dσ ≤ C.

This ends the proof.
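The Hölder step can be displayed in full; the chain below uses only the scaling of S_q(Ω_µ) and the lower bound λ_1(Ω_µ) ≥ C from Lemma 3.5:

```latex
For $1 < q < p$ and $\|v\|_{L^q(\partial\Omega)} = 1$, H\"older's inequality on the
finite measure space $\partial\Omega$ gives
\[
1 = \Bigl(\int_{\partial\Omega}|v|^q\,d\sigma\Bigr)^{p/q}
\le |\partial\Omega|^{p/q-1}\int_{\partial\Omega}|v|^p\,d\sigma ,
\]
so for every admissible $v$,
\[
\frac{\mu^{-p}\int_\Omega|\nabla v|^p\,dx + \int_\Omega|v|^p\,dx}
     {\bigl(\int_{\partial\Omega}|v|^q\,d\sigma\bigr)^{p/q}}
\ge |\partial\Omega|^{1-p/q}\,
\frac{\mu^{-p}\int_\Omega|\nabla v|^p\,dx + \int_\Omega|v|^p\,dx}
     {\int_{\partial\Omega}|v|^p\,d\sigma}
\ge |\partial\Omega|^{1-p/q}\,\frac{\lambda_1(\Omega_\mu)}{\mu}.
\]
Multiplying by $\mu^{\beta_{pq}}$ and using $\lambda_1(\Omega_\mu) \ge C$ yields
$S_q(\Omega_\mu) \ge C\,\mu^{\beta_{pq}-1}$.
```

The middle quotient is exactly the Rayleigh quotient of the case p = q, which is where the lower bound for λ_1(Ω_µ) enters.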
To finish the proof of Theorem 1.3 we need the following Lemma.
Lemma 3.7. Let p < q < p_* and assume that the extremals v_µ, rescaled so that max_Ω̄ v_µ = 1, verify |∇v_µ| ≤ Cµ. Then, for every µ large, S_q(Ω_µ) ≥ C µ^{β_pq − 1}.

Proof. First we prove that there exists a constant C such that S_q(Ω_µ) ≥ C. Let v_µ be an extremal in Ω. By rescaling v_µ we can obtain an extremal ṽ_µ such that max_Ω̄ ṽ_µ = 1. That is, 0 < ṽ_µ ≤ 1 and there exists a point x_0 ∈ ∂Ω with ṽ_µ(x_0) = 1. Arguing as in Lemma 3.6 we have

    S_q(Ω_µ) = µ^{β_pq} ( µ^{−p} ∫_Ω |∇ṽ_µ|^p dx + ∫_Ω |ṽ_µ|^p dx ) / ( ∫_{∂Ω} |ṽ_µ|^q dσ )^{p/q}.

We end the article by proving that every eigenvalue is bounded as µ → +∞.