Asymptotics for the Sasa--Satsuma equation in terms of a modified Painlev\'e II transcendent

We consider the initial-value problem for the Sasa--Satsuma equation on the line with decaying initial data. Using a Riemann--Hilbert formulation and steepest descent arguments, we compute the long-time asymptotics of the solution in the sector $|x| \leq M t^{1/3}$, $M$ constant. It turns out that the asymptotics can be expressed in terms of the solution of a modified Painlev\'e II equation. Whereas the standard Painlev\'e II equation is related to a $2 \times 2$ matrix Riemann--Hilbert problem, this modified Painlev\'e II equation is related to a $3 \times 3$ matrix Riemann--Hilbert problem.


Introduction
In this paper, we consider the long-time behavior of the solution of the Sasa--Satsuma equation [11]
$$u_t - u_{xxx} - 6|u|^2 u_x - 3u(|u|^2)_x = 0, \qquad (1.1)$$
with initial data $u(x, 0) = u_0(x) \in \mathcal{S}(\mathbb{R})$ in the Schwartz class. Our main result shows that $u(x,t)$ admits an expansion to all orders in the asymptotic sector $|x| \leq M t^{1/3}$ of the form
$$u(x,t) \sim \sum_{j=1}^{\infty} \frac{u_j(y)}{t^{j/3}}, \qquad t \to \infty, \qquad (1.2)$$
where $\{u_j(y)\}_1^{\infty}$ are smooth functions of $y \doteq x/(3t)^{1/3}$ and $M > 0$ is a constant. It also shows that the leading coefficient $u_1(y)$ is given in terms of a function $u_P(y)$ which satisfies the following modified Painlev\'e II equation:
$$u_P''(y) + y u_P(y) + 2 u_P(y) |u_P(y)|^2 = 0. \qquad (1.3)$$
This equation coincides with the standard Painlev\'e II equation
$$u''(y) = y u(y) + 2 u(y)^3 \qquad (1.4)$$
except for a sign difference and the presence of the absolute value squared in the last term. We will show that (1.3) is related to a $3 \times 3$ matrix RH problem in much the same way that (1.4) is related to a $2 \times 2$ matrix RH problem, cf. [5]. In the case of a real-valued solution, equation (1.1) reduces to a version of the mKdV equation, (1.3) reduces (up to a sign) to (1.4), and the expansion (1.2) reduces to the analogous asymptotic formula for the corresponding mKdV equation (see [4], and [2] for the higher order terms, in the case of the standard mKdV equation). It turns out that the leading coefficient $u_1(y)$ in (1.2) has constant phase, that is, $u_1(y) = |u_1(y)| e^{i\alpha}$ where $\alpha \in \mathbb{R}$ is independent of $y$. It is somewhat remarkable that this is the case for any choice of the complex-valued initial data $u_0(x) = u(x,0)$; however, we also recall that the Sasa--Satsuma equation has a class of one-soliton solutions of constant phase (see [11] or [1]):
$$u(x,t) = \frac{2a\, e^{a(x + a^2 t - x_0)}\, e^{i\phi}}{1 + e^{2a(x + a^2 t - x_0)}}, \qquad a, \phi, x_0 \ \text{real constants}.$$
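The real-valued reduction mentioned above can be checked symbolically: for real $u$ one has $6|u|^2 u_x + 3u(|u|^2)_x = 12 u^2 u_x$, so (1.1) becomes the mKdV-type equation $u_t - u_{xxx} - 12u^2 u_x = 0$. A minimal sympy sketch of this computation (for illustration only, not part of the paper's argument):

```python
import sympy as sp

x, t = sp.symbols("x t", real=True)
u = sp.Function("u", real=True)(x, t)  # real-valued solution

# Left-hand side of the Sasa-Satsuma equation (1.1) for real u,
# where |u|^2 = u^2.
lhs = sp.diff(u, t) - sp.diff(u, x, 3) \
    - 6 * u**2 * sp.diff(u, x) - 3 * u * sp.diff(u**2, x)

# The nonlinear terms combine: 6u^2 u_x + 3u(u^2)_x = 12 u^2 u_x,
# so the equation reduces to an mKdV-type equation.
mkdv = sp.diff(u, t) - sp.diff(u, x, 3) - 12 * u**2 * sp.diff(u, x)
assert sp.simplify(lhs - mkdv) == 0
```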
The starting point for our analysis is a Riemann--Hilbert (RH) representation for the solution of (1.1) obtained via the inverse scattering transform formalism. The asymptotic formula (1.2) is derived by performing a Deift--Zhou [4] steepest descent analysis of this RH problem. The main novelty compared with the analogous derivation for the mKdV equation is that the Lax pair of (1.1) involves $3 \times 3$ instead of $2 \times 2$ matrices.
The inverse scattering problem for (1.1) was studied already by Sasa and Satsuma [11]. The initial-boundary value problem for (1.1) on the half-line was considered in [12]. Asymptotic formulas for the long-time behavior in the sector $0 < c_1 < x/t < c_2$ were obtained in [6,10].
Our main results are presented in Section 2. They are stated in the form of three theorems (Theorems 1--3) whose proofs are given in Sections 4, 5, and 6, respectively. Section 3 recalls the Lax pair formulation of (1.1). The RH problem associated with the modified Painlev\'e II equation (1.3) is discussed in Appendix A. Appendix B considers an extension of this RH problem which is needed to obtain the higher order terms in (1.2).

Main results
Our first theorem shows how solutions of (1.1) can be constructed starting from an appropriate spectral function $\rho_1(k)$. We let $\mathcal{S}(\mathbb{R})$ denote the Schwartz class of smooth (complex-valued) rapidly decaying functions.
Theorem 1. Let $\rho_1 \in \mathcal{S}(\mathbb{R})$ and let $v(x,t,k)$ be the jump matrix defined in (2.1). Then the $3 \times 3$-matrix RH problem
• $m(x,t,k)$ is analytic for $k \in \mathbb{C} \setminus \mathbb{R}$ and extends continuously to $\mathbb{R}$ from the upper and lower half-planes;
• the boundary values satisfy $m_+(x,t,k) = m_-(x,t,k)\, v(x,t,k)$ for $k \in \mathbb{R}$;
• $m(x,t,k) = I + O(k^{-1})$ as $k \to \infty$;
has a unique solution for each $(x,t) \in \mathbb{R}^2$, and the limit $\lim_{k\to\infty} (k m(x,t,k))_{13}$ exists for each $(x,t) \in \mathbb{R}^2$. Moreover, the function $u(x,t)$ defined by
$$u(x,t) = 2i \lim_{k\to\infty} \big(k m(x,t,k)\big)_{13} \qquad (2.2)$$
is a smooth function of $(x,t) \in \mathbb{R}^2$ with rapid decay as $|x| \to \infty$ which satisfies the Sasa--Satsuma equation (1.1).
Proof. See Section 4.
Our second theorem gives the long-time asymptotics of the solutions constructed in Theorem 1 in the sector $|x| \leq M t^{1/3}$.
Theorem 2 (Asymptotics of constructed solutions). Under the assumptions of Theorem 1, the solution $u(x,t)$ of (1.1) defined in (2.2) satisfies the following asymptotic formula as $t \to \infty$:
$$u(x,t) = \sum_{j=1}^{N} \frac{u_j(y)}{t^{j/3}} + O\big(t^{-(N+1)/3}\big), \qquad |x| \leq M t^{1/3}, \qquad (2.3)$$
where:
• The formula holds uniformly with respect to $x$ in the given range for any fixed $M > 0$ and $N \geq 1$.
• The variable $y$ is defined by $y \doteq x/(3t)^{1/3}$.
• The leading coefficient $u_1(y)$ is given in terms of $u_P(y;s)$, where $s \doteq \rho_1(0)$ and $u_P(y;s)$ denotes the smooth solution of the modified Painlev\'e II equation (1.3) corresponding to $s$ according to Lemma A.1.
In particular, $u_1(y)$ has a constant phase, that is, $\arg u_1$ is independent of $y$.
By applying the above two theorems in the case when $\rho_1(k)$ is the "reflection coefficient" corresponding to some given initial data $u_0(x)$, we obtain our third theorem, which establishes the asymptotic behavior of the solution of the initial-value problem for (1.1) in the sector $|x| \leq M t^{1/3}$. Before stating the theorem, we introduce some notation.
Given $u_0 \in \mathcal{S}(\mathbb{R})$, define $U_0(x)$ and $\Lambda$ as follows. Define the $3 \times 3$-matrix valued function $X(x,k)$ as the unique solution of a Volterra integral equation, where $\hat{\Lambda}$ acts on a matrix $A$ by $\hat{\Lambda} A = [\Lambda, A]$, i.e., $e^{\hat{\Lambda}} A = e^{\Lambda} A e^{-\Lambda}$. Define the scattering matrix $s(k)$ by (2.6). Then the "reflection coefficient" $\rho_1(k)$ is defined by (2.7). We will see in Section 6 that the $(33)$ entry $s_{33}(k)$ of $s(k)$ has an analytic continuation to the upper half-plane. Possible zeros of $s_{33}(k)$ give rise to poles in the RH problem, see (6.8). For simplicity, we assume that no such poles are present (solitonless case).
Remark 2.2 (Scattering transform). Let $S$ denote the subset of $\mathcal{S}(\mathbb{R})$ consisting of all functions $u_0(x)$ such that the associated scattering matrix $s(k)$ defined in (2.6) satisfies $s_{33}(k) \neq 0$ for $\mathrm{Im}\, k \geq 0$. Theorem 3 shows that the map which takes $u_0(x)$ to $\rho_1(k)$ (the scattering transform) is a bijection from $S$ onto its image in $\mathcal{S}(\mathbb{R})$. The inverse of this map (the inverse scattering transform) is given by the construction of Theorem 1 for $t = 0$.

Lax pair
An essential ingredient in the proofs of Theorems 1--3 is the fact that equation (1.1) is the compatibility condition of the Lax pair equations [11]
$$\psi_x(x,t,k) = L(x,t,k)\,\psi(x,t,k), \qquad \psi_t(x,t,k) = Z(x,t,k)\,\psi(x,t,k), \qquad (3.1)$$
where $k \in \mathbb{C}$ is the spectral parameter, $\psi(x,t,k)$ is a $3 \times 3$-matrix valued eigenfunction, and the $3 \times 3$-matrix valued functions $L$ and $Z$ are defined in terms of $u$. Note that $U$ and $V$ are rapidly decaying as $|x| \to \infty$ if $u$ is, and that $L$ and $Z$ obey symmetries formulated in terms of $A^\dagger$, the complex conjugate transpose of a matrix $A$.
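For orientation, we recall the standard argument showing that (1.1) arises as a compatibility condition: cross-differentiating the two equations in (3.1) gives

```latex
\begin{aligned}
\psi_{xt} &= (L\psi)_t = L_t\psi + L\psi_t = \big(L_t + LZ\big)\psi, \\
\psi_{tx} &= (Z\psi)_x = Z_x\psi + Z\psi_x = \big(Z_x + ZL\big)\psi .
\end{aligned}
```

Since the eigenfunction $\psi$ can be chosen invertible, equating $\psi_{xt} = \psi_{tx}$ yields the zero-curvature condition $L_t - Z_x + [L, Z] = 0$, which is equivalent to (1.1) once $L$ and $Z$ are expressed in terms of $u$.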

Proof of Theorem 1
Suppose $\rho_1 \in \mathcal{S}(\mathbb{R})$. In particular, $v$ is Hermitian and positive definite for each $k \in \mathbb{R}$. Hence the result of Zhou [13] implies that there exists a vanishing lemma for the RH problem for $m(x,t,k)$, i.e., the associated homogeneous RH problem has only the zero solution.
Define the nilpotent matrices $w^\pm(x,t,k)$ in terms of $v$, let $C$ denote the Cauchy operator
$$(Cf)(k) = \frac{1}{2\pi i} \int_{\mathbb{R}} \frac{f(s)}{s - k}\, ds, \qquad k \in \mathbb{C} \setminus \mathbb{R},$$
and denote the nontangential boundary values of $Cf$ from the left and right sides of $\mathbb{R}$ by $C_+ f$ and $C_- f$, respectively. Then $C_+$ and $C_-$ are bounded operators on $L^2(\mathbb{R})$ and $C_+ - C_- = I$. Given two functions $w^\pm \in L^2(\mathbb{R}) \cap L^\infty(\mathbb{R})$, we define the operator $C_w$ by $C_w f = C_+(f w^-) + C_-(f w^+)$. In view of the vanishing lemma, this implies (see e.g. [9, Theorem 5.10]) that $I - C_w$ is an invertible bounded linear operator on $L^2(\mathbb{R})$, and that the $3 \times 3$ matrix $L^2$-RH problem for $m$ has a unique solution $m(x,t,k)$ for each $(x,t) \in \mathbb{R}^2$. The smoothness and decay of $w^\pm$ together with the smooth dependence on $(x,t)$ imply that $m$ is a classical solution of the RH problem and that $m$ admits an expansion
$$m(x,t,k) = I + \sum_{j=1}^{N} \frac{m_j(x,t)}{k^j} + O(k^{-N-1}), \qquad k \to \infty, \qquad (4.4)$$
where the coefficients $m_j(x,t)$ are smooth functions of $(x,t) \in \mathbb{R}^2$ (see e.g. [8, Section 4] for details in a similar situation). Since $\rho_1 \in \mathcal{S}(\mathbb{R})$, an application of the Deift--Zhou steepest descent method [4] implies that $m$ and the coefficients $m_j$ have rapid decay as $|x| \to \infty$ for each $t$. In particular, the limit in (2.2) exists for each $(x,t) \in \mathbb{R}^2$ and $u(x,t) = 2i (m_1(x,t))_{13}$ is a smooth function of $(x,t) \in \mathbb{R}^2$ with rapid decay as $|x| \to \infty$. Then $m$ satisfies the Lax pair equations (4.5), where $U$ and $V$ are defined in terms of $u(x,t)$ by (3.3) and (3.4), respectively.
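The jump relation $C_+ - C_- = I$ can be verified explicitly for a simple model function. The following sympy sketch evaluates the Cauchy transform of $f(s) = 1/(1+s^2)$ by residues and checks the Sokhotski--Plemelj identity; this is an illustrative textbook computation, not part of the paper's argument:

```python
import sympy as sp

x = sp.symbols("x", real=True)
f = 1 / (1 + x**2)

# Cauchy transform (Cf)(z) = (1/2pi i) \int_R f(s)/(s-z) ds for
# f(s) = 1/(1+s^2), evaluated by residues (close the contour in the
# half-plane containing z):
#   Im z > 0:  (Cf)(z) =  1/(2i(i-z)) + 1/(1+z^2)
#   Im z < 0:  (Cf)(z) = -1/(2i(i+z)) - 1/(1+z^2)
Cplus = 1 / (2 * sp.I * (sp.I - x)) + 1 / (1 + x**2)    # boundary value from above
Cminus = -1 / (2 * sp.I * (sp.I + x)) - 1 / (1 + x**2)  # boundary value from below

# Sokhotski-Plemelj jump relation on the real line: C_+ f - C_- f = f.
assert sp.simplify(Cplus - Cminus - f) == 0
```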
Proof. The symmetries (4.1) of $v$ together with the uniqueness of the solution of the RH problem imply corresponding symmetries for $m$. In particular, the coefficient $m_1$ in (4.4) satisfies the relation (4.7). Define the operator $L$ by (4.8). Substituting the expansion (4.4) into (4.8) and using (4.7), we find that $Lm$ satisfies a homogeneous RH problem. Thus, by the vanishing lemma, $Lm = 0$. This proves the first equation in (4.5).
In order to prove the second equation in (4.5), we define the operator $Z$ by (4.9), where the matrices $A(x,t)$, $B(x,t)$, and $C(x,t)$ are yet to be determined. Substituting the asymptotic expansion (4.4) into (4.9), we find that $A$, $B$, $C$ should be defined by the equations (4.10). Once we have shown that $A = V^{(2)}$, $B = V^{(1)}$, and $C = V^{(0)}$, it will follow from the vanishing lemma that $Zm = 0$, which will prove the second equation in (4.5). Comparing (4.7) and (4.10a), we see that $A = -4U = V^{(2)}$, and then (4.10b) determines $B$. The terms of order $O(k^{-1})$ in the asymptotic expansion of the equation $Lm = 0$ yield (4.12), and equation (4.7) can then be rewritten as (4.13). According to (4.12), we have (4.14). Equations (4.13) and (4.14) imply $B = V^{(1)}$. It only remains to prove that $C = V^{(0)}$. The terms of order $O(k^{-2})$ in the expansion of the equation $Lm = 0$ imply that $C = 4m_{2,x} - Bm_1$. On the other hand, (4.12) and (4.15) imply $C = V^{(0)}$. The compatibility condition of (4.5) shows that $u(x,t)$ satisfies (1.1). The proof of Theorem 1 is complete.

Proof of Theorem 2
Let $\rho_1 \in \mathcal{S}(\mathbb{R})$ and let $u(x,t)$ be the associated solution of (1.1) defined by (2.2). Our goal is to find the asymptotics of $u(x,t)$ in the sector $\mathcal{P}$ defined by $|x| \leq M t^{1/3}$; we let $\mathcal{P}_{\geq}$ and $\mathcal{P}_{\leq}$ denote the right and left halves of $\mathcal{P}$. For conciseness, we will give the proof of the asymptotic formula (2.3) for $(x,t) \in \mathcal{P}_{\geq}$; the case when $(x,t) \in \mathcal{P}_{\leq}$ can be handled in a similar way but requires some (minor) changes in the arguments (see [2] for the required changes in the case of the mKdV equation). The jump matrix $v(x,t,k)$ defined in (2.1) involves the exponentials $e^{\pm t\Phi(\zeta,k)}$, where the phase $\Phi(\zeta,k)$ is a cubic polynomial in $k$ depending on $\zeta \doteq x/t$. Suppose $(x,t) \in \mathcal{P}_{\geq}$. Then there are two real critical points (i.e., solutions of $\partial\Phi/\partial k = 0$) located at the points $\pm k_0$ (see Figure 1). As $t \to \infty$, the critical points $\pm k_0$ approach $0$ at least as fast as $t^{-1/3}$, i.e., $0 \leq k_0 \leq C t^{-1/3}$.
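The scaling claim for the critical points can be checked symbolically. Consistent with the local-model exponent $e^{2i(yz - 4z^3/3)}$ appearing below, take as a working assumption $\Phi(\zeta,k) = 2i(\zeta k - 4k^3)$ with $\zeta = x/t$ (a reconstruction used here purely for illustration):

```python
import sympy as sp

x, t, M = sp.symbols("x t M", positive=True)
k = sp.symbols("k", real=True)
zeta = x / t

# Model cubic phase, reconstructed from the local-model exponent
# e^{2i(yz - 4z^3/3)}; an assumption for illustration only.
Phi = 2 * sp.I * (zeta * k - 4 * k**3)

# dPhi/dk = 2i(zeta - 12 k^2) vanishes at the real critical points
# k = +-k0 with k0 = sqrt(zeta/12).
k0 = sp.sqrt(zeta / 12)
assert sp.simplify(sp.diff(Phi, k).subs(k, k0)) == 0

# In the sector x <= M t^(1/3): k0 <= sqrt(M/12) t^(-1/3), so the
# critical points approach 0 at least as fast as t^(-1/3).
bound = sp.sqrt(M / 12) * t ** sp.Rational(-1, 3)
assert sp.simplify(k0.subs(x, M * t ** sp.Rational(1, 3)) - bound) == 0
```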

Analytic approximation.
We first decompose $\rho = (\rho_1, \rho_2)$ into an analytic part $\rho_a$ and a small remainder $\rho_r$. Let $N \geq 1$ be an integer. Let $\Gamma^{(1)} = \Gamma^{(1)}_1 \cup \Gamma^{(1)}_2 \subset \mathbb{C}$ denote the contour shown in Figure 2. We orient $\Gamma^{(1)}$ to the right and let $V$ (resp. $V^*$) denote the open subset between $\Gamma^{(1)}_1$ (resp. $\Gamma^{(1)}_2$) and the real line, see Figure 2.
Lemma 5.1 (Analytic approximation). There exists a decomposition $\rho_1 = \rho_{1,a} + \rho_{1,r}$, where the functions $\rho_{1,a}$ and $\rho_{1,r}$ have the following properties: (a) For each $(x,t) \in \mathcal{P}_{\geq}$, $\rho_{1,a}(x,t,k)$ is defined and continuous for $k \in \bar{V}$ and analytic for $k \in V$. (b) The function $\rho_{1,a}$ obeys estimates that hold uniformly for $(x,t) \in \mathcal{P}_{\geq}$.
Figure 2: The sets $V$ and $V^*$ and the contour $\Gamma^{(1)}$.
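The analytic part in such decompositions is typically built from a Taylor expansion at the critical point, so that the remainder vanishes to high order there. A toy sympy computation with the illustrative model function $\rho(k) = 1/(1+k^2)$ (not a spectral function from the paper):

```python
import sympy as sp

k = sp.symbols("k", real=True)
rho = 1 / (1 + k**2)   # model spectral function, illustrative only
N = 4

# N-th order Taylor polynomial of rho at k = 0; the analytic part is
# built from p, and the remainder rho - p vanishes to high order at 0.
p = sp.series(rho, k, 0, N + 1).removeO()
r = sp.simplify(rho - p)

assert sp.expand(p - (1 - k**2 + k**4)) == 0
# Here the remainder is exactly -k^6/(1+k^2) = O(k^6) near k = 0.
assert sp.simplify(r + k**6 / (1 + k**2)) == 0
```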

Local model. Let us introduce new variables $y$ and $z$ by
$$y \doteq \frac{x}{(3t)^{1/3}}, \qquad z \doteq (3t)^{1/3} k,$$
so that $t\Phi(\zeta,k) = 2i\big(yz - \tfrac{4z^3}{3}\big)$.
Let $p$ denote the $N$th order Taylor polynomial of $\rho$ at $k = 0$; the jump of the local model then involves the factor $p(t,z)\, e^{2i(yz - \frac{4z^3}{3})}$.
Hence, $u(x,t) = 2i \lim_{k\to\infty} (k m(x,t,k))_{13}$ has an expansion of the form (2.3) with the leading coefficient given as in the statement of Theorem 2. This completes the proof of Theorem 2.

Proof of Theorem 3
Let $u_0 \in \mathcal{S}(\mathbb{R})$ and suppose $u(x,t)$ is a smooth solution of (1.1) with initial data $u(x,0) = u_0(x)$ and with rapid decay as $|x| \to \infty$. If $\psi$ satisfies the Lax pair equations (3.1), then the eigenfunction $\Psi$ defined by $\psi = \Psi e^{-i(kx - 4k^3 t)\Lambda}$ satisfies (6.1). We define two solutions $\{\Psi_j\}_1^2$ of (6.1) as the unique solutions of the integral equations (6.2). The third columns of the matrix equations (6.2) involve the exponential $e^{2ik(x'-x)}$. Since the equations in (6.2) are Volterra integral equations, it follows that the third column vectors of $\Psi_1$ and $\Psi_2$ are bounded and analytic for $k \in \mathbb{C}_-$ and $k \in \mathbb{C}_+$, respectively, with smooth extensions to $\mathbb{R}$. Similar considerations apply to the first and second columns; thus $\Psi_1(x,t,k)$ is bounded and analytic for $k \in (\mathbb{C}_+, \mathbb{C}_+, \mathbb{C}_-)$ and $\Psi_2(x,t,k)$ is bounded and analytic for $k \in (\mathbb{C}_-, \mathbb{C}_-, \mathbb{C}_+)$, where $k \in (\mathbb{C}_+, \mathbb{C}_+, \mathbb{C}_-)$ indicates that the first, second, and third columns are bounded and analytic for $k$ in $\mathbb{C}_+$, $\mathbb{C}_+$, and $\mathbb{C}_-$, respectively. Moreover, for each $t$ and each $j \geq 0$, there are bounded functions $f_-(x)$ and $f_+(x)$ of $x \in \mathbb{R}$ with rapid decay as $x \to -\infty$ and $x \to +\infty$, respectively, such that appropriate estimates hold. As $k \to \infty$, $\Psi_1$ and $\Psi_2$ have asymptotic expansions of the form
$$\Psi_j(x,t,k) \sim I + \sum_{n=1}^{\infty} \frac{\Psi_j^{(n)}(x,t)}{k^n}, \qquad k \to \infty, \qquad (6.4)$$
where the coefficients $\Psi_j^{(n)}(x,t)$ are smooth bounded functions of $x$ for each $t$ and the expansion is valid uniformly for $k \in (\bar{\mathbb{C}}_+, \bar{\mathbb{C}}_+, \bar{\mathbb{C}}_-)$ if $j = 1$ and for $k \in (\bar{\mathbb{C}}_-, \bar{\mathbb{C}}_-, \bar{\mathbb{C}}_+)$ if $j = 2$. The above properties follow from an analysis of the Volterra equations (6.2); see e.g. [3] or Theorem 3.1 in [7] for similar proofs.
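The boundedness and uniqueness claims for the Volterra equations (6.2) rest on the fact that the Neumann (Picard) series of a Volterra operator converges for every value of the coupling. A minimal scalar illustration (not the $3 \times 3$ system of the paper): Picard iteration for $X(x) = 1 + \lambda \int_0^x X(s)\,ds$, whose exact solution is $e^{\lambda x}$:

```python
import numpy as np

# Picard iteration for the scalar Volterra equation
#     X(x) = 1 + lam * int_0^x X(s) ds,
# whose exact solution is X(x) = exp(lam * x).  The Neumann series of a
# Volterra operator converges for every lam, which is why such integral
# equations have unique bounded solutions.
lam = 0.7
x = np.linspace(0.0, 2.0, 401)
dx = x[1] - x[0]

X = np.ones_like(x)            # zeroth iterate
for _ in range(40):            # Picard iterates
    # cumulative trapezoidal approximation of int_0^x X(s) ds
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (X[1:] + X[:-1]) * dx)))
    X = 1.0 + lam * integral

assert np.max(np.abs(X - np.exp(lam * x))) < 1e-3
```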
Lemma 6.1. The reflection coefficient $\rho_1(k)$ belongs to the Schwartz class $\mathcal{S}(\mathbb{R})$.
Proof. The expression in (2.6) for the $(ij)$th entry of $s(k)$ involves the exponential factor $e^{ikx(\lambda_i - \lambda_j)}$, where $\lambda_1 = \lambda_2 = -\lambda_3 = 1$. It follows from the properties of $\Psi_2$ and $U$ that $s(k)$ is a smooth function of $k \in \mathbb{R}$ and that the $(33)$-entry $s_{33}$ admits an analytic continuation to the upper half-plane. It also follows (by replacing $X$ in (2.6) by its large $k$ expansion and integrating by parts repeatedly in the resulting expression) that $s_{13}$, $s_{23}$, $s_{31}$, $s_{32}$ have rapid decay as $|k| \to \infty$. For the diagonal entry $s_{33}(k)$, the exponential factor is absent from the integral in (2.6), and substituting in the large $k$ expansion of $X$ we instead obtain
$$s_{33}(k) = 1 + \sum_{n=1}^{N} \frac{s_{33}^{(n)}}{k^n} + O(k^{-N-1}), \qquad k \to \infty,$$
uniformly for $k \in \bar{\mathbb{C}}_+$, for some coefficients $\{s_{33}^{(n)}\}$. Let $s^*_{ij}(k) \doteq \overline{s_{ij}(\bar{k})}$ denote the Schwartz conjugate of $s_{ij}(k)$, $i, j = 1, 2, 3$. Let $[A]_j$ denote the $j$th column of a matrix $A$.
Lemma 6.2. The function $m(x,t,k)$ defined by (6.8) satisfies the RH problem of Theorem 3 with $\rho_1(k)$ given by (2.7).
Proof. We saw in the proof of Lemma 6.1 that $s_{33}$ admits an analytic continuation to the upper half-plane. A similar argument shows that $s_{11}$, $s_{12}$, $s_{21}$, $s_{22}$ admit analytic continuations to the lower half-plane. Hence $m$ is well-defined by (6.8), and the properties of $\Psi_1$, $\Psi_2$ together with the assumption that $s_{33}(k) \neq 0$ for $\mathrm{Im}\, k \geq 0$ imply that $m(x,t,k)$ is analytic for $k \in \mathbb{C} \setminus \mathbb{R}$ with continuous boundary values on $\mathbb{R}$ from above and below. The jump $m_+ = m_- v$ across $\mathbb{R}$ is a consequence of a long but straightforward computation which uses (6.6), the symmetries (6.7) of $s$, and the fact that $\det s = 1$. Finally, the normalization condition $m(x,t,k) = I + O(k^{-1})$ follows from the large $k$ behavior of $\Psi_1$, $\Psi_2$, and $s$.
In view of Theorem 2, the next lemma completes the proof of Theorem 3.
Proof. Substituting the expansions (6.4) into (6.1), we find relations satisfied by the coefficients $\Psi_j^{(n)}$. The lemma then follows from the definition (6.8) of $m$ and the fact that $s_{33}(k) = 1 + O(k^{-1})$ as $k \to \infty$.
We infer a representation in which the spectral functions $S_n(k)$ and $T_n(k)$ are given in terms of the entries of $s(k)$.

Appendix A. Modified Painlev\'e II RH problem
Let $P$ denote the contour $P = P_1 \cup P_2$, oriented as in Figure 5.
Lemma A.1 (modified Painlev\'e II RH problem). Let $s \in \mathbb{C}$ be a complex number and define the matrices $S_1$ and $S_2$ in terms of $s$. Then the RH problem
• $m^P(y, \cdot)$ is analytic in $\mathbb{C} \setminus P$ with continuous boundary values on $P \setminus \{0\}$;
• $m^P_+(y,z) = m^P_-(y,z)\, v^P(y,z)$, where $v^P(y,z) = e^{i(yz - \frac{4z^3}{3})\hat{\Lambda}} S_n$ for $z \in P_n$, $n = 1, 2$;
• $m^P(y,z) = I + O(z^{-1})$ as $z \to \infty$;
has a unique solution $m^P(y,z)$ for each $y \in \mathbb{R}$. Moreover, there are smooth functions $\{m^P_j(y)\}_1^{\infty}$ of $y \in \mathbb{R}$ with decay as $y \to -\infty$ such that, for each integer $N \geq 0$,
$$m^P(y,z) = I + \sum_{j=1}^{N} \frac{m^P_j(y)}{z^j} + O(z^{-N-1}), \qquad z \to \infty, \qquad (A.1)$$
uniformly for $y$ in compact subsets of $\mathbb{R}$ and for $\arg z \in [0, 2\pi]$. The $(13)$-entry of the leading coefficient $m^P_1$ determines a function $u_P(y) \equiv u_P(y;s)$ which satisfies the modified Painlev\'e II equation (1.3) and has constant phase, that is, $\arg u_P$ is independent of $y$.
Proof. The jump matrix $v^P$ obeys the symmetries $v^P(y,z) = (v^P)^\dagger(y,\bar{z}) = A\, v^P(y,-z)\, A$.
We infer from the first of these symmetries that the RH problem for $m^P$ admits a vanishing lemma, see [13, Theorem 9.3]. As in Section 4, this implies that there exists a unique solution $m^P$ which admits an expansion of the form (A.1). A Deift--Zhou steepest descent analysis shows that the coefficients $m^P_j$ (and their $y$-derivatives) have exponential decay as $y \to -\infty$.