On weak interaction between a ground state and a trapping potential

We study the interaction of a ground state with a class of trapping potentials. We track the precise asymptotic behavior of the solution when the interaction is weak, either because the ground state moves away from the potential or because it moves very fast.


Introduction
We consider, as in [4], the nonlinear Schrödinger equation with a potential, iu_t = −∆u + V(x)u + β(|u|^2)u, (t, x) ∈ R × R^3. (1.1) For the linear potential V and the nonlinearity β, we assume the following.
is an immediate consequence of Q_w = e^{iθ}Q_r, where w_1 = r cos θ and w_2 = r sin θ. We set the continuous modes space as follows: (1.4) A pair (p, q) is admissible when 2/p + 3/q = 3/2, 2 ≤ q ≤ 6, p ≥ 2. (1.5) We recall the following result by [8] on the dynamics of small energy solutions of (1.1) (for an analogous result with weaker hypotheses on the spectrum see [5]).
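Since the admissibility condition (1.5) is a purely algebraic relation, it can be checked mechanically. The following sketch (the helper name `is_admissible` is ours, not from the paper) verifies the scaling relation 2/p + 3/q = 3/2 together with the range constraints, in dimension 3.

```python
def is_admissible(p, q, tol=1e-12):
    """Check the Strichartz admissibility condition in 3 space dimensions:
    2/p + 3/q = 3/2 with 2 <= q <= 6 and p >= 2 (p may be infinite)."""
    if not (2 <= q <= 6 and p >= 2):
        return False
    return abs(2.0 / p + 3.0 / q - 1.5) < tol

# The two endpoint pairs in dimension 3:
print(is_admissible(float('inf'), 2))  # True: (p, q) = (inf, 2)
print(is_admissible(2, 6))             # True: (p, q) = (2, 6)
print(is_admissible(4, 3))             # True: 2/4 + 3/3 = 3/2
print(is_admissible(2, 2))             # False: fails the scaling relation
```

The pairs (∞, 2) and (2, 6) are the two endpoints of the admissible line segment.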
Specifically, we assume what follows, which by [12] implies the orbital stability of the ground states of (1.9).
We add to the previous hypotheses a few more, concerning the linearized operator H_ω defined in (2.38).
We assume 0 < N_j e_j(ω) < ω < (N_j + 1)e_j(ω) with N_j ∈ N. We set N = N_1. Here each eigenvalue is repeated a number of times equal to its multiplicity. Multiplicities and n are constant in ω.
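The condition 0 < N_j e_j(ω) < ω < (N_j + 1)e_j(ω) simply says that N_j is the integer part of ω/e_j, and that ω is not an exact integer multiple of e_j. A throwaway sketch of this arithmetic (the function name is hypothetical, not from the paper):

```python
import math

def multiplicity_index(omega, e):
    """Return N with N*e < omega < (N+1)*e, i.e. N = floor(omega/e),
    or None if omega/e is an integer (a ratio excluded by the hypothesis)
    or if N < 1 (the hypothesis requires 0 < N*e)."""
    ratio = omega / e
    if ratio == math.floor(ratio):
        return None
    N = math.floor(ratio)
    return N if N >= 1 else None

print(multiplicity_index(1.0, 0.3))  # 3, since 3*0.3 < 1 < 4*0.3
print(multiplicity_index(1.0, 0.5))  # None: 1/0.5 is an integer
```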
(H10) H_ω has no other eigenvalues except for 0 and the ±e_j(ω). The points ±ω are not resonances. For the definition of resonance, see Sect. 3 of [2].
We are interested in studying how a solution u(t) of (1.1), initially close to a ground state of (1.9) moving at large speed, is affected by the potential V. Notice that u(t) at no time has small H^1 norm and so is not covered by Theorem 1.2. Unsurprisingly, in view of [4,1,3], we prove that the ground state survives the impact, but that as t → ∞ the solution u(t) approaches the orbit of a ground state of (1.9), up to a certain amount of radiation which satisfies Strichartz estimates, a term localized in spacetime, and a small amount of energy trapped by the Schrödinger operator −∆ + V, which behaves as in Theorem 1.2. The difference with [4] is that in [4] we had σ_p(−∆ + V) = ∅, while here σ_p(−∆ + V) = {e_0}.
Thanks to the weakness of the interaction with the potential, we are able to show that this representation is preserved for all times and that there is a separation between the moving ground state and the trapped energy. Furthermore, we prove that the stabilization processes around the energy trapped by the potential, described in Theorem 1.2, and around the ground state, described in [3,4], continue to hold.
In [4], in the absence of trapped energy, we described u(t) in terms of the local analysis of the NLS around solitons developed in the series [1,2,3]. The two main novelties in [4] were that the coordinate changes and the effective Hamiltonian depend on the time variable, and that the proof of the dispersion of the continuous modes requires the theory of charge transfer models as in [11], instead of the simpler dispersive analysis of [2,3].
These features of [4] are present here as well. The additional complication is that, along with a part of u(t) which has the same description as in [4], u(t) also has a term representing the energy trapped by the potential. In Sect. 2 we will describe in detail the decomposition and the coordinate representation of u(t). In the following sections we will focus mainly on the coupling terms between the trapped energy and the rest of u(t), often referring to [4]. Notice that, in view of the result in [5], it could be possible to relax substantially the hypotheses on σ_p(−∆ + V), obtaining a result similar to Theorem 1.4.
In the proof we will at first additionally assume that u_0 ∈ Σ_2, that is (1.17); see right below (1.19). Notice that in [4] it is assumed that u_0 ∈ Σ_n for sufficiently large n, but inspection of the proof easily shows that (1.17) suffices. We will then show that in fact the result extends rather easily to u_0 ∈ H^1. We will make extensive use of notation and results from [1,4]. We refer to [4] for a more extended discussion of the problem and for more references, and we end the introduction with some notation.
Given two Banach spaces X and Y, we denote by B(X, Y) the space of bounded linear operators from X to Y. For x ∈ X and ε > 0, we denote by B_X(x, ε) the corresponding ball. We set ⟨x⟩ = (1 + |x|^2)^{1/2}. For any n ≥ 1 and for K = R, C we consider the Banach space Σ_n = Σ_n(R^3, K^2), and we set Σ_0 = L^2(R^3, K^2). Equivalently, we can define Σ_r for r ∈ R by a norm; for r ∈ N the two definitions are equivalent, see [3].
From now on, we identify C = R^2 and set J = (0 1; −1 0), so that multiplication by i in C corresponds to −J. Later on we complexify R^2, and i will then appear with this new meaning: for U = ^t(u_1, u_2), iU = ^t(iu_1, iu_2). So be careful not to confuse −J with i, which has a different meaning.
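The identification C = R^2 can be sanity-checked numerically: under u = u_1 + iu_2 ↔ ^t(u_1, u_2), multiplication by i is represented by the matrix −J, while J itself represents multiplication by −i. A minimal sketch, with helper names of our own:

```python
import numpy as np

# Identify u = u1 + i*u2 in C with the vector (u1, u2) in R^2.
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def to_vec(u):   # C -> R^2
    return np.array([u.real, u.imag])

def to_c(v):     # R^2 -> C
    return v[0] + 1j * v[1]

u = 2.0 + 3.0j
# Multiplication by i in C corresponds to the matrix -J acting on R^2:
print(np.allclose(to_vec(1j * u), -J @ to_vec(u)))   # True
# while J itself corresponds to multiplication by -i:
print(np.allclose(to_vec(-1j * u), J @ to_vec(u)))   # True
```

This is the reason for the warning in the text: after complexification, i acts componentwise and no longer coincides with −J.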

Linearized operator and its generalized null space
We will consider the group τ = (D, −ϑ) ↦ e^{Jτ·♦}u(x) := e^{iϑ}u(x − D). The Φ_p are constrained critical points of E_0 with associated Lagrange multipliers λ(p) ∈ R^4, so that ∇E_0(Φ_p) = λ(p)·♦Φ_p, where we have the corresponding identities. We also fix the related notation. For any fixed vector τ_0, the function u(t) := e^{J(tλ(p)+τ_0)·♦}Φ_p is a solitary wave solution of (1.9). We now introduce the linearized operator L_p. By an abuse of notation, we set L_ω := L_p when v(p) = 0 and ω(p) = ω. (2.7) We have the following identity, see Sect. 7 of [1], which implies σ(L_p) = σ(L_{ω(p)}). Hypothesis (H5) implies that rank(∂λ_i/∂p_j)_{i↓, j→} = 4. This and (H6) imply (2.9), where N_g(L) := ∪_{j=1}^∞ ker(L^j). Recall that we have a well known decomposition (2.10). We denote by P_{N_g}(p) the projection onto N_g(L_p) and by P(p) the projection onto N_g^⊥(L_p^*) associated to (2.10). (2.12) We now decompose the solution of (1.11) into the large solitary wave given in (H4), the small bound state given in Prop. 1.1, and the remainder, which will belong to both N_g^⊥(L_p^*) and the Galilean transform of H_c[w]. Proposition 2.1. Fix ε_1 > 0 and ω_1 ∈ O. Let κ ∈ P be s.t. v(κ) = 0 and ω(κ) = ω_1. Then there exists a decomposition valid for all t ≥ 0. Remark 2.2. The solution u which we consider in Theorem 1.4 will always satisfy (t, u(t)) ∈ B(ε_2), provided ε_0 is sufficiently small. Therefore we can always decompose the solution accordingly. Proposition 2.1 is a direct consequence of the following two lemmas.
Proof of Lemma 2.4. We apply the implicit function theorem (Theorem A.1) to F. We first compute the Jacobian matrix of F, starting with the derivatives of R.
Therefore, we have the Jacobian in block form, where I_{10} is the unit matrix and each component of A can be bounded by a constant C independent of (p, τ, w) ∈ P × R^4 × B_{R^2}(0; δ_0). Now, there exists a universal constant δ̃ s.t. if the absolute value of each component of A is less than δ̃, then (I_{10} + A)^{-1} exists and its operator norm is bounded by 2. We now claim the required bounds. The bounds for C‖u − e^{Jτ·♦}Φ_p‖_{L^2} + C|w| are obvious, so we only consider the bound of X(τ, t). Notice that if |v| ≥ Cδ̃^{-1}, then since T(t, δ̃) = R^4, we only have to consider the case |v| ≤ Cδ̃^{-1}. In this case we see that there exists δ̃_1 which satisfies the claim. Finally, setting δ_1 = δ_2 = δ̃_1, by Theorem A.1 there exist δ_3, δ_4 > 0 independent of the choices made. We fix π ∈ P. Now, Proposition 2.1 can be reframed as follows.

Spectral coordinates associated to L_p
We will summarize in this section a number of facts about equation (1.1) when V ≡ 0, which have been proved in [1,4] or which can be easily proved following the ideas therein. First of all, we observe that we have coordinates (τ, p, r) for the quantity U, with δ > 0 sufficiently small. For any U ∈ H^1(R^3, R^2) we have also Π_j = Π_j(U). Then (τ, Π, r) is also a system of coordinates in ℵ. The functions (τ, Π) depend smoothly on U, and since r is a smooth function of U as well, the functions (t, u) → (τ, Π, r) remain defined. The next task is to further decompose the variable r. This is done in terms of the spectral decomposition of the operator L_{p_0}, as we explain now.
We now consider the complexification of L^2(R^3, R^2) into L^2(R^3, C^2) and think of L_p and J as operators on L^2(R^3, C^2). Then we set (2.38).
Proof. For the proof of the existence of such a frame for any fixed π we refer to Lemma 5.2 of [1].
Here we discuss the fact that the dependence on π is smooth. Let us pick l_1 = 1 < l_2 < ... < l_k ≤ n and set l_{k+1} = n + 1, with e_j(ω) = e_i(ω) if and only if j, i ∈ [l_a, l_{a+1}) for some a. The numbers l_1, ..., l_k do not depend on ω by the constancy of multiplicity in Hypothesis (H7).
For ω_0 = ω(p_0) we can suppose we have a frame {ξ_j(ω)} satisfying the equalities in claim (2), that is, for ω = ω_0, L = n + 1 and ℓ = 1, we have (2.41) for any m, on a small interval I_{ω_0} with center ω_0. Fix now an index l_a and let γ_a be a small circle with counterclockwise orientation, centered at e_{l_a}(ω_0). By taking I_{ω_0} small we can assume that e_{l_a}(ω) is, for all ω ∈ I_{ω_0}, contained in a compact subset of the interior of the disk encircled by γ_a. Then P_{l_a}(ω) := (2πi)^{-1} ∮_{γ_a} (z − H_ω)^{-1} dz is a projection onto ker(H_ω − e_{l_a}(ω)). Notice that ξ_l(ω) depends smoothly on ω and that we obtain a frame {ξ_j(ω) : j ∈ [l_a, l]} which is C^∞ in ω ∈ I_{ω_0} and s.t. (2.41) is true for all ω ∈ I_{ω_0}, for L = l + 1 and ℓ = l_a. Finally, notice that if e_j(ω) = e_k(ω), then ⟨J^{-1}ξ_j(ω), ξ_k(ω)⟩ = 0. So we have built a frame smooth in ω which satisfies (2.41) for L = n + 1 and ℓ = 1 and for all ω ∈ I_{ω_0}. The identities ⟨J^{-1}ξ_j(ω), ξ_k(ω)⟩ = 0 hold for all j, k, see Lemma 5.2 of [1]. So Lemma 2.8 is proved.
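The projection built from the small circle γ_a is the standard Riesz contour projection, P = (2πi)^{-1}∮_γ(z − H)^{-1}dz. For a finite-dimensional stand-in for H_ω, it can be computed numerically; the sketch below (an illustration of the general construction, not of the operator H_ω itself, with a function name of our own) uses the trapezoidal rule on the circle, which is spectrally accurate for analytic integrands.

```python
import numpy as np

def riesz_projection(H, center, radius, n_quad=200):
    """Riesz projection P = (2*pi*i)^{-1} * contour integral of (z - H)^{-1}
    over the circle |z - center| = radius, assumed to enclose one group of
    eigenvalues of H and no others. Trapezoidal rule on the circle."""
    n = H.shape[0]
    P = np.zeros((n, n), dtype=complex)
    for k in range(n_quad):
        t = 2 * np.pi * k / n_quad
        z = center + radius * np.exp(1j * t)
        dz = 1j * radius * np.exp(1j * t) * (2 * np.pi / n_quad)
        P += np.linalg.solve(z * np.eye(n) - H, np.eye(n)) * dz
    return P / (2j * np.pi)

H = np.diag([1.0, 1.0, 4.0])          # eigenvalue 1 with multiplicity 2
P = riesz_projection(H, center=1.0, radius=0.5)
print(np.allclose(P @ P, P))          # True: P is a projection
print(round(np.trace(P).real))        # 2: rank = multiplicity
```

As in the text, since the contour and the resolvent depend smoothly (indeed analytically) on the parameter, so does the projection, which is what yields the smooth frame.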
The following spectral decomposition remains determined. Correspondingly, for any r ∈ N_g^⊥(H_{p_0}^*) with r = r̄ we have, for a z ∈ C^n and an f ∈ L^2_c(p_0), the representation (2.44), with a frame {ξ_j(π) : j ∈ {1, ..., n}} as in Lemma 2.8. Notice that ⟨J^{-1}ξ_j(π), P_c(π)f'⟩ = 0. The representation (2.44) is possible because of the following fact.
Lemma 2.9. Under (H4)-(H7) and (H10), given p_0 and for any fixed n ∈ N, there exists a > 0 such that, for π ∈ P with |π − p_0| < a, the maps (2.47) are isomorphisms. Using now the fact that ξ_j(π) ∈ C^∞(P, Σ_k), we conclude that if |π − p_0| < a_k with a_k > 0 sufficiently small, the operator in (2.47) is an isomorphism in L^2. Finally, by the argument in Lemma 2.3 of [1], we can pick a fixed a_k for all k ≥ −n.

Change of coordinates
To distinguish between the initial system of coordinates, obtained from Lemma 2.5 and the further decomposition of r due to (2.44), and the "final" system of coordinates in Theorem 3.5 below, we will add a "prime" to the initial coordinates, except for the pair (Π, w). In particular, we have functions (t, u) → (τ', Π, z', f'), with ℵ defined in (2.37). We now introduce appropriate symbols.
We now introduce Z_0 and Z_1, which are finite sums of the following type, where the vector e(ω) is introduced in (H8), and where G_{µν}(·, π, Π, Π(f)) and g_{µν} ∈ C^m(U, C). We assume furthermore that Z_0 and Z_1 are real valued for f = f̄, and hence their coefficients satisfy the following symmetries: ḡ_{µν} = g_{νµ} and Ḡ_{µν} = −G_{νµ}.
We have the following elementary fact, proved in Remark 5.6 of [5], which tells us that the pairs (µ, ν) in Def. 3.3, in the case of the polynomials which interest us, do not depend on π.
The main result of [1], see also [4], is the following.
Theorem 3.5. There is an ε_3 > 0 and a map which, in the sense of (3.10)-(3.11), is a homeomorphism onto its image, with the image containing the relevant set in the case of (3.11), and such that in the new variables (τ, Π, z, f) we have the following, where k, m ∈ N are preassigned and arbitrarily large: (1) ψ is smooth and ψ(Π, Π, Π(f)) = O(|Π(f)|^2) near 0.
(4) Z_1 is in normal form as in (3.6) with |µ + ν| ≤ N + 1, and the symplectic form becomes Σ_j dz_j ∧ dz̄_j + Ω(P_c(π)df, P_c(π)df). (3.13) Here we skip the proof of Theorem 3.5, which is a minor modification of the arguments in [1]. It is important to observe that here and in [4] the role of the fixed p_0 in the normal forms argument is taken by the time varying π(t), with π(t) = Π(u(t)) in [4] and π(t) = Π(U[t, u(t)]) here.
It is important to check the dependence of various coordinates on the variables (π, u) and (π, U ).

Equations
Equation (1.11) can be written as u_t = J∇E(u) = X_E(u) = {u, E}, where we use the following notions:
• the exterior differential dF(u) of a Fréchet differentiable function F defined in an open subset of H^1;
• the gradient ∇F(u), defined by ⟨∇F(u), X⟩ = dF(u)X;
• the symplectic form Ω(X, Y) := ⟨J^{-1}X, Y⟩;
• the Hamiltonian vector field X_F of F with respect to Ω, defined by Ω(X_F, Y) = dF(Y), that is X_F = J∇F;
• the Poisson bracket of two scalar functions, {F, G} := dF(X_G);
• if G has values in a given Banach space E and is Fréchet differentiable with Fréchet derivative dG, and if F is a scalar valued function, then we set {G, F} := dG(X_F).
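In a finite-dimensional toy model these relations can be checked directly: with the same matrix J, X_F = J∇F and {F, G} = dF(X_G) = ⟨∇F, J∇G⟩. The sketch below (all names ours, numerical gradients on R^2) verifies the bracket and its antisymmetry.

```python
import numpy as np

# Finite-dimensional analog of the Hamiltonian formalism in the text:
# Omega(X, Y) = <J^{-1} X, Y>, X_F = J grad F, {F, G} = dF(X_G).
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])           # J^{-1} = -J = J^T

def grad(F, u, h=1e-6):
    """Central-difference gradient of a scalar function F at u."""
    g = np.zeros_like(u)
    for i in range(len(u)):
        e = np.zeros_like(u); e[i] = h
        g[i] = (F(u + e) - F(u - e)) / (2 * h)
    return g

def bracket(F, G, u):
    """Poisson bracket {F, G}(u) = dF(u) X_G(u) = <grad F, J grad G>."""
    return grad(F, u) @ (J @ grad(G, u))

F = lambda u: 0.5 * (u[0]**2 + u[1]**2)   # harmonic-oscillator energy
G = lambda u: u[0]                        # a coordinate function
u = np.array([1.0, 2.0])
print(round(bracket(F, G, u), 6))   # <(1,2), J(1,0)> = <(1,2),(0,-1)> = -2
print(abs(bracket(F, F, u)) < 1e-6) # True: {F, F} = 0 since J is antisymmetric
```

Antisymmetry of J is what makes {F, F} = 0, i.e. the Hamiltonian flow conserves its own energy.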
We have introduced in Lemma 2.5 the functional B(ε_2) ∋ u → U[t, u] for the set B(ε_2) defined in (2.14). The following elementary lemma relates the Poisson brackets associated to Ω in the u and in the U space.
Proof. We have, summing over repeated indexes, the chain-rule identity, which yields (4.1); (4.2) follows similarly. The following lemma will play an important role later.
Using the notation of Lemma 4.2 and of Lemma 2.5 we get the following elementary lemma.

Bootstrapping
As in [4], Theorem 1.4 follows from the following theorem.
Theorem 5.1. Consider the constants 0 < ǫ < ε_0 of Theorem 1.4. Then there is a fixed C > 0 such that (t, u(t)) ∈ B(ε_2) for all t ∈ I = [0, ∞) and the Strichartz norms are bounded by Cǫ for all admissible pairs (p, q). Furthermore, there exist ω_+ and v_+ such that the corresponding limits hold. Theorem 5.1 will be obtained as a consequence of the following Proposition.
• We have the stated bounds. • There exist ω_+ and v_+ such that the limits (5.6) hold.
where we sum only over multi-indexes µ such that e·µ − e_j < ω_0 for any j such that the j-th component of µ satisfies µ_j ≠ 0.
Proof. Compared to [4], the one additional term in (4.21) here is the term d_U f A, which we now analyze. By the fact that the inverse of (3.8) has the same structure (the flows which yield (3.8), when reversed, yield the inverse of (3.8), see Lemma 3.4 of [4]), we obtain the first identity. Hence, and by similar computations, we conclude the corresponding bounds for any preassigned c. Proceeding as above we conclude that the last three terms are like those in the r.h.s. of (6.2). Therefore f(U, Q) is of the form R_1 + R_2, with G_{µν}(t, Π(f)) the coefficients of Z_1, see (3.12), and where (6.3) is satisfied. Notice that in (6.4) we can drop R^{0,2}_{k,m} from the argument of V, absorbing the difference inside R_1 + R_2, so that σ_3 P_c(K_{ω_0})V(· + vt + y_0 + D')h becomes the second term in the r.h.s. of (6.4). Set D := vt + y_0 + D'. Set now g̃(t) = g(t)e^{iσ_3 ∫_0^t φ̃(s)ds} for a φ̃ which will be introduced later. Then, irrespective of the choice of φ̃, we have V(· + D) = g̃ V g̃^{-1}. We now set the projection P_D. Then, for a fixed δ > 0, we add to (6.4) the term iδP_D h − iδP_D h = 0. We will think of −iδP_D h as a damping term in (6.4) and of iδP_D h as a remainder term, since it can be absorbed inside the remainder R_1 + R_2, as we show now. Proof. Obviously it is enough to prove ‖⟨e^{−iσ_3 ∫_0^t φ̃(s)ds} g^{−1}h, ψ⟩‖_{L^1(0,T)+L^2(0,T)} ≤ cǫ for ψ = φ_0, σ_1 φ_0. (6.10) We will consider the case ψ = φ_0; the other case is similar. Starting from the expression for h = M^{−1}e^{(·)} and discarding a harmless term, we can write the last line in (6.11) in the stated form for some real valued function λ(t).
We can rewrite (6.4) as follows. Then the proof of Lemma 6.1 is exactly the same as in [4], using Theorem 7.1 below.
We now set the quantity involving (e · (µ − ν))G_{µν}(t, 0). (6.14) Lemma 6.3. Assume the hypotheses of Prop. 5.2 and let T > ε_0^{−1}. Then for fixed s > 1 there exists a fixed c such that, if ε_0 is sufficiently small, for any preassigned and large L > 1 we have the stated estimate. Proof. The proof is exactly the same as that of Lemma 8.5 in [4].
There is a set of variables ζ = z + O(z^2) such that, for a fixed C, we have (d/dt)Σ_j e_j|ζ_j|^2 = −Γ(ζ) + r, (6.16) and s.t., for a fixed constant c_0 and a preassigned but arbitrarily large constant L, we have (6.17). For the proof see [4,2]. By Lemma 10.5 of [2] we have Γ(ζ) ≥ 0. We now make the following hypothesis: (H11) there exists a fixed constant Γ > 0 s.t. for all ζ ∈ C^n we have (6.18). Then, integrating and exploiting (6.15), we get for t ∈ [0, T] and a fixed c a bound on Σ_j e_j|z_j(t)|^2 + 4Γ∫, with Γ as in (H11). From the last inequality and from Lemma 6.1 we conclude that, for ε_0 > 0 sufficiently small, the corresponding bound holds for a fixed c. We bound the r.h.s. of (4.13). By Lemma 5.3 we have, for a fixed c, the needed estimates. Next, r' = S^{0,1}_{k,m} + e^{JR^{0,2}_{k,m}·♦}f. Then the above can be bounded accordingly. This completes the proof of Proposition 5.2.

Linear dispersion
We have the following result. Theorem 7.1. Consider, for P_c F(t) = F(t) and P_c u(0) = u_0, the equation. For v the vector in Theorem 1.4, set the quantity c(T). Then for any σ_0 > 3/2 there exist c_0 > 0 and C > 0 such that, if c(T) < c_0, σ > σ_0 and δ > δ_0, then for any admissible pair (p, q), see (1.5), we have the estimates for i = 0, 1. Proof. Consider the problem (7.4). By the proof of Theorem 9.1 in [4], Theorem 7.1 is a consequence of Proposition 7.2 below.
Proposition 7.2. Let U(t, t_0) be the group associated to (7.4). Then for σ > 3/2 there exists a fixed C > 0 such that the weighted estimate holds for all 0 ≤ t_0 < t ≤ T. The proof is the same as that of Proposition 9.2 in [4], with a small difference. Notice that in [4] the operator σ_3(−∆ + ω_0 + V) does not have eigenvalues, while here it has the eigenvalues ±(e_0 + ω_0), with the projection onto the vector space generated by the eigenspaces given by the operator P introduced in (6.8).
Now the proof is exactly the same as that of Proposition 9.2 in [4], except for the following modification. The analogue of (9.43) of [4] is now the equation where g̃(t) = g(t)e^{iσ_3 ∫_0^t φ̃(s)ds} as after (6.7), where we choose the same φ̃ as in [4], and where σ_3 Ṽ(x) = g̃^{−1}(t)σ_3 V(x + D)g̃(t) and, by (6.8), P̃ = g̃^{−1}(t)P_D g̃(t).
Dropping the hypothesis u_0 ∈ Σ_2

Up to now we have assumed u_0 ∈ Σ_2, that is (1.17), to guarantee, as we remark at the end of Sect. 3, that the coordinates of U[t, u(t)] belong to the image of the map (3.8) in the sense of (3.9)-(3.11).
For the same reason, in the series [2,3,4] it is assumed that u_0 ∈ Σ_ℓ for a fixed ℓ ≫ 1 which depends on the N = N_1 in Hypothesis (H7). This is used only in order to make sense of the pullback by means of (3.8) of the form Ω discussed in claim (8) of Theorem 3.5. However, everywhere in [2,3,4] and here, the distance of u(t) and of u_0 from the ground states is measured only in the metric of H^1(R^3). Now we discuss briefly the fact that we can drop (1.17) and assume only u_0 ∈ H^1. Let u_0 ∈ H^1 with u_0 ∉ Σ_2 and let {u_n(0)}_{n≥1} be a sequence with u_n(0) → u_0 in H^1 and with u_n(0) ∈ Σ_2 for every n ≥ 1. We can apply our result to each solution u_n(t). By the well posedness of (1.1) and by the continuity of the maps defined in Proposition 2.1, in (2.36) and at the beginning of Sect. 3, we have for the coordinates of u_n(t) and of u(t) (τ'_n(t), p'_n(t), z'_n(t), f'_n(t), w_n(t)) → (τ'(t), p'(t), z'(t), f'(t), w(t)) (8.1) in R^8 × C^n × H^1 × C. Furthermore, since (3.8) is a local homeomorphism of R^4 × C^n × (H^1 ∩ L^2_c(p_0)), see (3.10) and the comments immediately below (3.10), we also have a limit (τ_n(t), p_n(t), z_n(t), f_n(t), w_n(t)) → (τ(t), p(t), z(t), f(t), w(t)) (8.2) with, on the left, the final coordinates of u_n(t). Notice that on the right of (8.2) we have the final coordinates of u(t), since the map (3.8) makes them correspond to the initial coordinates of u(t).
{y_n} is a Cauchy sequence, so it has a limit. Finally, if there exist two such points y, y' ∈ B_Y(0, δ), the contraction estimate forces y = y'. Then the map ω → φ_ω is in C^∞(O, Σ_n) for arbitrary n ∈ N.
Therefore, since B_0 = 0, for sufficiently small ε the inverse (A + B_ε)^{−1} exists. Now we can show that if ω → φ_ω is in C^m(O, Σ_n), then ε → (A + B_ε)^{−1} is C^m with values in B(Σ_n, Σ_n). By induction, one can show that ω → φ_ω is in C^∞(O, Σ_n).
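The existence of (A + B_ε)^{−1} for small ε is a Neumann-series perturbation argument: when ‖A^{−1}B‖ < 1, one has (A + B)^{−1} = Σ_k(−A^{−1}B)^k A^{−1}. A finite-dimensional sketch of this mechanism (the function name is ours):

```python
import numpy as np

def perturbed_inverse(A, B, terms=50):
    """Neumann-series inverse: (A + B)^{-1} = sum_k (-A^{-1} B)^k A^{-1},
    valid when the operator norm of A^{-1} B is < 1 (B a small perturbation)."""
    Ainv = np.linalg.inv(A)
    M = -Ainv @ B
    assert np.linalg.norm(M, 2) < 1, "perturbation too large for the series"
    S = np.eye(A.shape[0])   # partial sum of the geometric series in M
    P = np.eye(A.shape[0])   # current power M^k
    for _ in range(terms):
        P = P @ M
        S = S + P
    return S @ Ainv

A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[0.1, 0.05], [0.0, 0.1]])   # small: ||A^{-1} B|| < 1
X = perturbed_inverse(A, B)
print(np.allclose(X @ (A + B), np.eye(2)))   # True
```

Since each term of the series depends smoothly on the perturbation, the same expansion also gives the C^m dependence of (A + B_ε)^{−1} on ε used in the text.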