Nonlinear enhanced dissipation in viscous Burgers type equations

We construct a class of infinite mass functions for which solutions of the viscous Burgers equation decay at a better rate than solutions of the heat equation for initial data in this class. In other words, we show an enhanced dissipation coming from a nonlinear transport term. We compute the asymptotic profile in this class for both equations. For the viscous Burgers equation, the main novelty is the construction and description of a time dependent profile with a boundary layer, which enhances the dissipation. This profile is stable up to a computable nonlinear correction depending on the perturbation. We also extend our results to other convection-diffusion equations.


Introduction and presentation of the results
In this paper we are interested in the long time behavior of solutions to a generalized viscous Burgers equation on the real line,
$$\partial_t u - \partial_x^2 u + u\,\partial_x u + \partial_x(J(u)) = 0, \qquad u_{|t=0}(x) = \frac{\kappa_\pm}{|x|^\alpha}\big(1 + o_{x\to\pm\infty}(1)\big), \tag{1.1}$$
for α ∈]0, 1[, κ₊, κ₋ > 0 and J a smooth function satisfying |J(u)| ⩽ C|u|³. Remark that u_{|t=0} is not integrable, and that J = 0 corresponds to the classical viscous Burgers equation.
It is well known that for the heat equation ∂_t u − ∂²_x u = 0, for an initial data u₀ ∈ L¹(R), we have the asymptotic profile
$$\sqrt{t}\, u(z\sqrt{t}, t) \to \frac{M}{\sqrt{4\pi}}\, e^{-z^2/4} \quad \text{when } t \to +\infty,$$
uniformly in z ∈ R, where M = ∫_R u₀. A similar result holds for the viscous Burgers equation ∂_t u − ∂²_x u + u∂_x u = 0 for initial data u₀ ∈ L¹(R) (see [13], [20], [22]), as we have
$$\sqrt{t}\, u(z\sqrt{t}, t) \to \frac{2\,(e^{M/2}-1)\, e^{-z^2/4}}{\sqrt{4\pi}\, e^{M/2} - (e^{M/2}-1)\int_{-\infty}^{z} e^{-s^2/4}\, ds} \quad \text{when } t \to +\infty,$$
uniformly in z ∈ R. The same result holds with the term J(u) in the equation of (1.1).
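This convergence is easy to observe numerically. The sketch below is our own illustration (the initial data, grid and tolerances are chosen for convenience): it uses the exact heat solution for an indicator initial data and compares its rescaling with the Gaussian profile.

```python
import math

# Numerical sketch (illustrative choices, not from the paper): for
# u0 = indicator of [-1, 1] (mass M = 2), the exact heat solution is
#   u(x, t) = (1/2) * (erf((x+1)/(2*sqrt(t))) - erf((x-1)/(2*sqrt(t)))),
# and sqrt(t)*u(z*sqrt(t), t) should approach M*exp(-z^2/4)/sqrt(4*pi).

def heat_indicator(x, t):
    s = 2.0 * math.sqrt(t)
    return 0.5 * (math.erf((x + 1.0) / s) - math.erf((x - 1.0) / s))

def profile_error(t, mass=2.0):
    # sup over a grid of z in [-10, 10] of |rescaled solution - Gaussian profile|
    err = 0.0
    for i in range(-200, 201):
        z = 0.05 * i
        rescaled = math.sqrt(t) * heat_indicator(z * math.sqrt(t), t)
        gauss = mass * math.exp(-z * z / 4.0) / math.sqrt(4.0 * math.pi)
        err = max(err, abs(rescaled - gauss))
    return err

# profile_error(t) decreases as t grows, consistent with the stated convergence.
```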
Although the limit profile is changed by the Burgers term u∂_x u, the decay rate and the scaling in time are still the same as for the heat equation. In both cases, the L^∞ norm of the solution decays like t^{−1/2}. Other asymptotic behavior results have been established for other convection-diffusion equations with initial data in L¹(R); we refer to [23] and references therein, as well as [8], [10], [12], [18], [21]. See also [1] for some results on non integrable initial data.
Going back to our problem (1.1), as a comparison, we look first at the heat equation for this type of infinite mass initial data. There, up to a rescaling, we can show that the solution converges to a global attractor.

Proposition 1.1 For κ > 0, α ∈]0, 1[, consider f the solution of the heat equation ∂_t f − ∂²_x f = 0 for an initial condition f₀ ∈ C⁰(R) that satisfies |x|^α f₀(x) → κ when |x| → +∞. Then, uniformly in z ∈ R, we have the convergence
$$t^{\alpha/2} f(z\sqrt{t}, t) \to \varphi_{\kappa,\alpha}(z) \quad \text{when } t \to +\infty,$$
where φ_{κ,α} is the solution at time 1 of the heat equation with initial data κ|x|^{−α}.

This result is first proven in [16] and the proof is redone in Annex A to make this paper self-contained. Remark that the decay in time of f is slower than if f₀ were in L¹(R); in fact, t^{−α/2} is the size of t^{−1/2} ∫_{−√t}^{√t} f₀. Furthermore, the asymptotic profile is smooth, and behaves like κ|z|^{−α} at infinity, connecting back to the initial data.
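The t^{−α/2} decay of Proposition 1.1 can be checked numerically. The sketch below is our own illustration (the specific data (1 + |x|)^{−α}, the grid and the times are assumptions made for the test): it estimates the decay exponent of the heat solution at the origin from two large times.

```python
import numpy as np

# Numerical sketch (our own illustration, not the paper's proof): for the heat
# equation with the non-integrable data f0(x) = (1 + |x|)**(-ALPHA),
#   f(0, t) = (4*pi*t)**(-1/2) * integral of exp(-y^2/(4t)) * f0(y) dy,
# which should decay like t**(-ALPHA/2).

ALPHA = 0.5

def f_at_origin(t):
    # trapezoidal rule on [0, 8*sqrt(t)], doubled by symmetry of f0
    y = np.linspace(0.0, 8.0 * np.sqrt(t), 400001)
    vals = np.exp(-y ** 2 / (4.0 * t)) * (1.0 + y) ** (-ALPHA)
    dy = y[1] - y[0]
    integral = dy * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    return 2.0 * integral / np.sqrt(4.0 * np.pi * t)

def decay_exponent(t1, t2):
    # slope of log f(0, t) against log t, measured between t1 and t2
    return np.log(f_at_origin(t1) / f_at_origin(t2)) / np.log(t2 / t1)

# Expected: decay_exponent close to ALPHA / 2 = 0.25 for large times.
```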
In this paper, we will construct a stable solution of (1.1) that converges, up to a rescaling, to an asymptotic profile. However, it will have two main differences compared to the result of Proposition 1.1. First, the rescaling will not be the same: surprisingly, the solution will decay in time like t^{−α/(1+α)}, that is faster than the heat equation for the same initial data. The scales of the rescaling are thus dictated by the nonlinear term, which happens also in the nonlinear heat equation with a pure power term, see for instance [16] or [19]. Secondly, the asymptotic profile will be, in the rescaling where it is of size 1, discontinuous at one point. This discontinuity can be seen as a sort of boundary layer (although there are no boundaries in this problem) that helps the dissipation. About this enhanced dissipation, we can state the following result.

Theorem 1.2 Given α ∈]1/4, 1[, κ₊, κ₋ > 0, there exists an initial data u_{|t=0} with u_{|t=0}(x) = κ_±|x|^{−α}(1 + o_{x→±∞}(1)) such that the solution of ∂_t u − ∂²_x u + u∂_x u = 0 for this initial data satisfies
$$t^{\frac{\alpha}{1+\alpha}} \|u(\cdot, t)\|_{L^\infty(\mathbb{R})} \leqslant c_0,$$
where c₀ > 0 is a constant independent of time. Furthermore, this solution is stable in some sense.
See Theorem 1.4 for a more precise statement and the shape of the asymptotic profile, and Proposition 1.5 for a statement in the case J ≠ 0. By Proposition 1.1, for the initial data of Theorem 1.2, if it was instead evolving following the heat equation, we would have t^{α/(1+α)} ∥u(., t)∥_{L^∞(R)} → +∞ when t → +∞. This means that the additional nonlinear transport term improves the dissipation. Enhanced dissipation results are well known for the heat equation with an additional linear transport term (see for instance [2], [3], [5], [6], [11] and references therein) or for Navier-Stokes on T × R (see [4], [7], [14], [17]). We require κ₊, κ₋ > 0 in Theorem 1.2, and although we can require less, we do not know how to show that this enhancement is true in general for any κ₊, κ₋ ∈ R*.

Profile for the viscous Burgers equation
We focus first on the case J = 0 of equation (1.1). There, the Hopf-Cole formula gives us an explicit formula for the solution of the equation. However, since our goal is to be able to generalize it to any J, we will not use it here. The results we can obtain with the Hopf-Cole formula will be the subject of a companion paper.
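For the reader unfamiliar with the Hopf-Cole formula, the sketch below illustrates the standard fact behind it (the particular heat solution and the finite-difference check are our own choices): if φ solves the heat equation, then u = −2∂_x(ln φ) solves the viscous Burgers equation.

```python
import math

# Illustration of the Hopf-Cole formula (standard fact; our own example):
# phi(x, t) = 1 + exp(A*x + A^2*t) solves phi_t = phi_xx, so
# u = -2 * phi_x / phi solves u_t - u_xx + u*u_x = 0.

A = 0.5

def u(x, t):
    e = math.exp(A * x + A * A * t)
    return -2.0 * A * e / (1.0 + e)

def burgers_residual(x, t, step=1e-3):
    # central finite differences for u_t - u_xx + u*u_x
    ut = (u(x, t + step) - u(x, t - step)) / (2.0 * step)
    ux = (u(x + step, t) - u(x - step, t)) / (2.0 * step)
    uxx = (u(x + step, t) - 2.0 * u(x, t) + u(x - step, t)) / (step * step)
    return ut - uxx + u(x, t) * ux

# The residual vanishes up to finite-difference error.
```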
Here, we want to construct an approximate solution of the viscous Burgers equation in the right scaling.

The rescaled problem and the underlying ODE
We consider the viscous Burgers equation ∂_t u − ∂²_x u + u∂_x u = 0 and α ∈]0, 1[. We want to do a change of variables such that the terms ∂_t u and u∂_x u are the dominant ones. We define ε(t) := t^{(α−1)/(α+1)} and h(z, ε(t)) = t^{α/(1+α)} u(zt^{1/(1+α)}, t), leading to the equivalent equation
$$\frac{\alpha}{1+\alpha}\, h + \frac{z}{1+\alpha}\,\partial_z h + \frac{1-\alpha}{1+\alpha}\,\varepsilon\,\partial_\varepsilon h + \varepsilon\,\partial_z^2 h - h\,\partial_z h = 0. \tag{1.2}$$
Remark that the term coming from the Laplacian, ε∂²_z h, is small when ε → 0 (that is t → +∞). This means that at this scale, the nonlinear effect dominates the dynamics. Interestingly, if we simply remove the Laplacian, we get the inviscid Burgers equation ∂_t u + u∂_x u = 0, for which the L^∞ norm is conserved. Since we will show some decay stronger than for the heat equation, this means that although the Laplacian is fading out, it still has a major effect on the dynamics.
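For the reader's convenience, the change of variables can be carried out explicitly; this is a routine computation with the scalings just defined:

```latex
% With z = x t^{-1/(1+\alpha)}, \varepsilon(t) = t^{(\alpha-1)/(\alpha+1)} and
% u(x,t) = t^{-\alpha/(1+\alpha)} h(z, \varepsilon(t)), we have
\partial_t u = t^{-\frac{\alpha}{1+\alpha}-1}
  \Big({-\tfrac{\alpha}{1+\alpha}}\,h - \tfrac{z}{1+\alpha}\,\partial_z h
       - \tfrac{1-\alpha}{1+\alpha}\,\varepsilon\,\partial_\varepsilon h\Big),
\qquad
\partial_x^2 u = t^{-\frac{\alpha}{1+\alpha}-\frac{2}{1+\alpha}}\,\partial_z^2 h,
\qquad
u\,\partial_x u = t^{-\frac{2\alpha}{1+\alpha}-\frac{1}{1+\alpha}}\, h\,\partial_z h.
% Multiplying \partial_t u - \partial_x^2 u + u \partial_x u = 0 by
% t^{\frac{\alpha}{1+\alpha}+1}, and noting that
% t^{1-\frac{2}{1+\alpha}} = \varepsilon(t) while the nonlinear term carries
% the power t^{0}, yields
\frac{\alpha}{1+\alpha}\,h + \frac{z}{1+\alpha}\,\partial_z h
  + \frac{1-\alpha}{1+\alpha}\,\varepsilon\,\partial_\varepsilon h
  + \varepsilon\,\partial_z^2 h - h\,\partial_z h = 0.
```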
We want to construct, for ε > 0 small, a solution to the ODE problem
$$\varepsilon\,\partial_z^2 h + \Big(\frac{z}{1+\alpha} - h\Big)\partial_z h + \frac{\alpha}{1+\alpha}\, h = 0, \qquad |z|^\alpha h(z) \to \kappa_\pm \ \text{ when } z \to \pm\infty, \tag{1.3}$$
for κ₊, κ₋ ∈ R*. This will give us an approximate solution of (1.1). Remark that the problem (1.3) is doubly degenerate when ε → 0: the coefficient in front of the term with two derivatives goes to 0, but also, the limit problem when ε = 0 is ill defined when h(z) = z/(1+α), since then the coefficient in front of the term with one derivative cancels out.

The case ε = 0
We consider in this section the ODE
$$\frac{\alpha}{1+\alpha}\, h + \Big(\frac{z}{1+\alpha} - h\Big)\partial_z h = 0, \qquad h(z_0) = b,$$
for some given (z₀, b) ∈ R² and α ∈]0, 1[. This is the equation of (1.3) with ε = 0. We summarize here the properties of the solutions.
First, remark that h(z) = z and h(z) = 0 are solutions of this equation (they are the two black lines on the figure). Furthermore, we can write the equation as
$$\partial_z h = \frac{\alpha h}{(1+\alpha)h - z},$$
which is ill defined if h(z) = z/(1+α) at some point. This is the red line in the figure. The blue curves are solutions of the equation. In particular, we have to take (z₀, b) ∈ R² with b ≠ z₀/(1+α) for the equation to make sense. Now, we divide the plane into six sets A, B, C, D, E, F. On the figure, the separations between these sets are the red and black lines (the role of the dotted green lines will be explained later).
The equation has a symmetry: if z → h(z) is a solution then so is z → −h(−z). Remark also that since this is a first order ODE, solutions cannot cross the lines h = 0, h = z and h = z/(1+α). In particular, if a solution has a point in a bold set J ∈ {A, B, C, D, E, F}, then it is fully included in J.
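This invariance of the regions is easy to observe numerically. The sketch below is our own illustration (the value of α, the starting point above the identity line and the integration range are assumptions made for the test): it integrates the ε = 0 equation with RK4 and checks that a trajectory starting above h = z stays above it while approaching it.

```python
# Illustration (our own choice of data, not from the paper): integrate the
# epsilon = 0 equation, written as h'(z) = alpha*h / ((1+alpha)*h - z), with RK4.
# The lines h = 0 and h = z are exact solutions, so a trajectory starting above
# the identity line must stay above it, approaching it without crossing.

ALPHA = 0.5

def rhs(z, h):
    return ALPHA * h / ((1.0 + ALPHA) * h - z)

def integrate(z0, b, z_end, dz=1e-3):
    z, h = z0, b
    while z < z_end - 1e-12:
        k1 = rhs(z, h)
        k2 = rhs(z + dz / 2.0, h + dz * k1 / 2.0)
        k3 = rhs(z + dz / 2.0, h + dz * k2 / 2.0)
        k4 = rhs(z + dz, h + dz * k3)
        h += dz * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        z += dz
    return h

# rhs(z, 0) = 0 and rhs(z, z) = 1: the two black lines are exact solutions.
# Starting at (z0, b) = (1, 2), above the identity line, the gap h - z
# decreases but stays positive on [1, 5].
```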
By symmetry, we describe similarly the domains and limits for (z₀, b) in the other sets. In particular, remark that there are no continuous solutions to the problem for κ₊, κ₋ ∈ R*. Therefore, we expect solutions of (1.3) to have jumps in the limit ε → 0.
In the next subsection, we will give some conditions describing which jumps are possible in the limit ε → 0. We will show that for κ₊, κ₋ > 0, at most one is a viscosity solution.

Viscosity solutions
First, if h is a solution of (1.3) with ε = 0 in the distribution sense, then it must satisfy the Rankine-Hugoniot condition. Here, it states that at any discontinuity z_c ∈ R, we must have
$$\frac{h(z_c^+) + h(z_c^-)}{2} = \frac{z_c}{1+\alpha}.$$
On the figure, this means that the middle point of any jump must be on the red line. This prevents for instance jumps from a bold set to itself, but also for instance from F to D. The dotted green lines 2/(1+α) Id and (1−α)/(1+α) Id are the ones such that the red line is the middle between 0 and 2/(1+α) Id, and the middle between Id and (1−α)/(1+α) Id. Remark that for any α ∈]0, 1[, we have the order
$$0 < \frac{1-\alpha}{1+\alpha} < \frac{1}{1+\alpha} < 1 < \frac{2}{1+\alpha}.$$
This is represented by the two orange cups on the figure. This means that, if we expect h, a solution of (1.3) with ε = 0 with some discontinuities, to be the limit when ε → 0 of a sequence of functions h_ε solutions of (1.3), then, since the h_ε are smooth, some jumps cannot happen. For instance, although the Rankine-Hugoniot condition allows jumps from F to E, they are not viscous. Next, we infer that it is not possible to have two jumps that cross the axis {z = 0}. Indeed, otherwise we denote z_a < z_b two consecutive values such that h_ε(z_a) = h_ε(z_b) = 0, and integrating the equation between z_a and z_b leads to a contradiction when ε → 0. Finally, suppose that a solution has a point z_a < 0 where h(z_a) = 2/(1+α) z_a, h′(z_a) > 0 and a point z_b > z_a such that h(z_b) = 0. Then, integrating the equation between z_a and z_b also leads to a contradiction when ε → 0. This prevents the possibility of a solution having a jump from B to D followed by a jump from D to A.
We summarize these conditions in the following figure.
Jumps are not possible from a bold set to itself. Two bold sets are connected by a black line if there is a possible jump between them satisfying the Rankine-Hugoniot condition. Crossed red arrows are jumps that are forbidden by the viscosity conditions. The jump between A and D can only be done once.
These viscosity conditions severely limit which jumps are allowed. We are looking for solutions starting from A or B and ending in D or F. These conditions impose that, for the cases A → F, A → D and B → D, only a single jump is possible. Furthermore, in these cases (that is, κ₊, κ₋ having the same sign, or κ₋ > 0, κ₊ < 0), the position of the jump is fully determined by κ₊ and κ₋. For instance if κ₊, κ₋ > 0, these two values give on which blue curves in A and F the solution lies, and we can check that there is only one value z_c for which the red line passes through the middle point of the jump between these two blue curves.
For the last case B → F, where it seems that no connection is possible, we omitted the case where the jump does not finish in a bold set, but finishes on the identity line. It is therefore maybe possible to go from B to F with two jumps, both connecting to {(z, z), z ∈ R}. It is however difficult to prove or disprove that such a thing might happen. It might also be possible that the solution of the viscous Burgers equation with such an initial data simply does not converge with this rescaling. The results of the three subsections above are proven in subsections 2.1 to 2.3.

Construction of the profile for small ε > 0
In this section, given κ₊, κ₋ > 0, α ∈]0, 1[ and ε > 0 small enough, we want to construct a solution of the ODE problem (1.5). But first, we define the function h₀ as the unique solution to the problem (1.6) if z > z_c, and the unique viscous solution to (1.7) if z < z_c, where z_c ∈ R is the position of the jump given by the conditions described above, uniquely determined by κ₊, κ₋, α. We will show in subsection 2.3 the existence and uniqueness of the solutions of these problems. We define the one-sided limits h₀(z_c^±), as the function h₀ is discontinuous at z_c.
We construct a solution of (1.5) using a shooting method, and this solution will be close to h 0 far from z c , see the following result.
Proposition 1.3 For any κ₊, κ₋ > 0, α ∈]0, 1[, there exist z_c ∈ R and ε₀ > 0 such that, for ε₀ > ε > 0, there exist two functions z_c(ε), a(ε) and a corresponding solution h_ε. Furthermore, there exists w₀ > 0 depending only on α and κ_±, such that h_ε is close to h₀ outside of [z_c − w₀ε ln(1/ε), z_c + w₀ε ln(1/ε)].

Section 2 is devoted to the proof of this result. For ε ≠ 0, the solution does not have exactly the same equivalent at +∞ as h₀; this is why, to get the exact same equivalent at +∞, we need to change slightly z_c(ε) and a(ε). We use the notation h_ε for solutions of the problem (1.5) (that is, depending on the behavior at ±∞) and h̃_ε for solutions depending on their value at z_c.
The function h_ε in Proposition 1.3 is close to the discontinuous function h₀ except in a vicinity of z_c, the discontinuity point. h_ε solves (1.3) but not (1.2), because of the term (1−α)/(1+α) ε∂_ε h_ε; however, this term is small compared to the other ones. We will show the stability of h_ε in a space that contains in particular this error term in the next subsection.
To construct h_ε, we found the right scale around z_c at which the solution is continuous (it is (z − z_c(ε))/ε ≃ 1 rather than z ≃ 1). The proof of Proposition 1.3 is done in two parts. First, we compute the first order in ε of the solution in [z_c(ε) − w₀ε ln(1/ε), z_c(ε) + w₀ε ln(1/ε)] for some w₀ > 0 large but independent of ε, and we show that at the boundaries of this interval, it becomes close to the value of h₀ at the same point. Then, outside this interval, h₀ and h_ε satisfy similar equations for small ε, and start with similar values. We thus show that they stay close.
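The shape of the layer at this scale can be modeled explicitly. The sketch below is our own simplification (freezing z = z_c in (1.3) and dropping the zeroth-order term; all parameter values are our own): the resulting model equation ε h'' = (h − m) h' with m = z_c/(1+α) has explicit tanh layers connecting two states whose midpoint is exactly m, in accordance with the Rankine-Hugoniot condition, over a width of order ε.

```python
import math

# Model interior layer (our own simplification, freezing z = z_c in (1.3)):
#   EPS * h'' = (h - M) * h',   M playing the role of z_c / (1 + alpha).
# The explicit solution h(Z) = M - D * tanh(D*Z/(2*EPS)) connects the states
# M + D (at Z -> -inf) and M - D (at Z -> +inf), whose midpoint is M: the
# discontinuity of the limit profile is smoothed on a scale EPS.

EPS, M, D = 0.05, 0.5, 1.0

def h(Z):
    return M - D * math.tanh(D * Z / (2.0 * EPS))

def layer_residual(Z, step=1e-4):
    # central finite differences for EPS*h'' - (h - M)*h'
    hp = (h(Z + step) - h(Z - step)) / (2.0 * step)
    hpp = (h(Z + step) - 2.0 * h(Z) + h(Z - step)) / (step * step)
    return EPS * hpp - (h(Z) - M) * hp

# The residual vanishes up to finite-difference error; the two limit states
# average to M, and the slope at the center, -D**2/(2*EPS), blows up as EPS -> 0.
```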

Stability of the profile
We recall that ε(t) = t^{(α−1)/(α+1)} and that h_ε is the solution described in Proposition 1.3. We want to show that if at a time T > 0 large, we solve the viscous Burgers equation with the initial data h_{ε(T)} + f₀ at time T in the rescaled variables, then for all times t ⩾ T we stay close to h_{ε(t)}. Interestingly, h_{ε(t)} will not be the first order: we need to modify it nonlinearly, depending on f₀. It turns out that the mass of f₀ will change the profile near z_c in a non-negligible way. The stability result is as follows.
Theorem 1.4 Given α ∈]1/4, 1[, κ₊, κ₋ > 0, there exists T₀ > 0 such that, for any T ⩾ T₀, there exists ν > 0 depending on T such that, considering h_{ε(t)}, z_c(t), a(t) defined in Proposition 1.3, the solution u to the perturbed problem stays close to the corrected profile, where the correction u is the unique solution to an auxiliary problem depending on f₀.

Section 3 is devoted to the proof of this result. Let us make some remarks about it.
• This result implies Theorem 1.2 and gives us the asymptotic profile when t → +∞ for any z ≠ z_c. The convergence is uniform on R if we remove any open set containing z_c. In a vicinity of z_c we still have convergence to some limit, and there this limit depends on u, that is on f₀. Remark that u depends nonlinearly on f₀, and thus the correction coming from u is not simply a modulation of the profile.
• With the conditions on f₀, we check that our initial data decays like κ_±|x|^{−α} when x → ±∞, and f₀ is small compared to the main profile, since ν depends on T. Also, the condition α > 1/4 is a technical one; we expect the result to hold for any α ∈]0, 1[. This condition will be used to show that ∂_ε h_ε has enough decay at ±∞ to estimate it in H¹(R), see subsection 3.4.
The core idea of the proof is to write the solution for t ⩾ T as the corrected profile plus an error f which is massless (that is, ∫_R f = 0). We write it f = ∂_x g, and it turns out that we can integrate the equation to get a new equation on g. We show some coercivity of the linear part of this equation on g in H²(R), and we control the nonlinear part, from which we deduce that ∥g∥_{H²(R)} → 0 when t → +∞.
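The role played by the mass can be seen directly on the original equation; this is a standard observation, written here for J = 0 with U denoting any reference solution:

```latex
% If u = U + f and both u and U solve \partial_t u - \partial_x^2 u + u\partial_x u = 0,
% then subtracting the two equations gives the conservation law
\partial_t f = \partial_x\Big(\partial_x f - U f - \frac{f^2}{2}\Big),
% so that \int_{\mathbb{R}} f(t)\,dx is formally conserved in time.
```

A perturbation with nonzero mass therefore cannot be absorbed by decay alone: its mass must be carried by a nonlinear correction of the profile, after which the remaining error is massless and admits an antiderivative.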

Generalisation to equation (1.1)
Our approach also works for equation (1.1) with a general flux J satisfying |J(u)| ⩽ C₀|u|³ for some C₀ > 0.
Proposition 1.5 For α ∈]2/3, 1[ and T₀, ν depending additionally on C₀, the result of Theorem 1.4 also holds for the problem (1.1).

Section 3.6 is devoted to the proof of this result. It is done simply by checking that the term ∂_x(J(u)) can be considered as an error term in the proof of the stability of Theorem 1.4. This is true because at the scale where we see the profile h_ε, this term is small compared to the other ones. As before, the condition α > 2/3 is a technical one, and is here to make sure that J(u) has enough decay at ±∞ to estimate it in H¹(R).

Some related open problems
Our results should extend easily to values κ₊, κ₋ ∈ R* except in the case κ₋ < 0 < κ₊. There, it is maybe possible to construct a specific solution, but it is likely an unstable one. If we generalize the equation further, it is likely that a similar result can be shown, with some improvements in the proofs.
It seems for now difficult to generalize this result to higher dimensions, but it would be of interest, in particular if similar profiles can be constructed for the 2d Euler or Navier-Stokes equations.

Acknowledgments
The authors are supported by Tamkeen under the NYU Abu Dhabi Research Institute grant CG002. The authors have no competing interests to declare that are relevant to the content of this article.

Construction of the profile h ε
This section is devoted to the proof of Proposition 1.3 and the viscosity properties described in the introduction. First, in section 2.1 we set the change of scaling and compute the Rankine-Hugoniot condition for the viscous Burgers equation. Section 2.2 is devoted to the case ε = 0. Section 2.3 is about the construction of h₀ (which will be the limit of h_ε when ε → 0), as well as the study of its properties. Sections 2.4 and 2.5 are the study of the shooting problem at the heart of Proposition 1.3, respectively close to and far from the shooting point z_c. Section 2.6 regroups all these elements and concludes the proof of Proposition 1.3.

Change of variable and viscosity conditions
In this subsection, our goal is to prove some results of subsections 1.1.1 to 1.1.3 of the introduction.

Computation of the underlying ODE problem
We consider here the equation ∂_t u − ∂²_x u + u∂_x u = 0. We define ε(t) = t^{(α−1)/(α+1)} and we do the change of variable h(z, ε(t)) = t^{α/(1+α)} u(zt^{1/(1+α)}, t), leading to equation (2.1). By Proposition 1.1, this scaling is not adapted to the heat equation with the same initial condition. Remark that if we tried to use this scale anyway, we would get the same equation (2.1) but without the term −h∂_z h. When ε → 0, the limit problem would be α/(1+α) h + z∂_z h/(1+α) = 0, whose only solutions are h = Cz^{−α} for some C > 0, which is unbounded at z = 0.

Rankine-Hugoniot condition
For the equation α/(1+α) h + z∂_z h/(1+α) − h∂_z h = 0, integrating it between z_c − ν and z_c + ν leads, after some computations, to
$$\Big[\frac{z h}{1+\alpha} - \frac{h^2}{2}\Big]_{z_c-\nu}^{z_c+\nu} = \frac{1-\alpha}{1+\alpha}\int_{z_c-\nu}^{z_c+\nu} h\, dz.$$
Therefore, letting ν → 0 leads to
$$\frac{z_c}{1+\alpha}\big(h(z_c^+) - h(z_c^-)\big) - \frac{h(z_c^+)^2 - h(z_c^-)^2}{2} = 0,$$
that we factorize as
$$\big(h(z_c^+) - h(z_c^-)\big)\Big(\frac{z_c}{1+\alpha} - \frac{h(z_c^+) + h(z_c^-)}{2}\Big) = 0,$$
which is the Rankine-Hugoniot condition stated in the introduction.

Some properties of solutions of (2.2)
We consider here the problem (2.2), that is the problem described in section 1.1.2. The fact that (z₀, b) ∈ J ∈ {A, B, C, D, E, F} implies that (z, h(z)) ∈ J for all values of z on which the solution h of (2.2) is well defined is a consequence of standard Cauchy theory arguments, since the boundary between two bold sets is either a solution of (2.2), or the set where the equation is ill defined. The solution h of (2.2) with (z₀, b) ∈ A is defined on R and satisfies the asymptotics described below. We leave the study of the behavior when z → −∞ for subsection 2.3.
Suppose that z + ̸ = +∞.There exists C 0 > 0 depending on z 0 and b such that, for z ∈ [z 0 , z + [ we have on [z 0 , z + [.With z + < +∞, we deduce that h and ∂ z h are bounded near z + , which is a contradiction, therefore z + = +∞.We define for z ⩾ z 0 the function u(z) = h(z) − z > 0. It satisfies the equation on [z 0 , +∞[.Now, we compute, using the equation satified by h, that , and we can write equation (2.3) as On ]z − , z 0 ], by similar arguments as previously, we have h > 0 and We also leave the study of the behavior when z → +∞ for subsection 2.3.
We consider now (z₀, b) ∈ F, that is z₀/(1+α) > b > 0, in the problem (2.2). As in the previous subsection, we consider the largest interval on which the solution is defined, that we write ]z₋, z₊[. From the equation and h > 0, we deduce that z₊ = +∞. We also see that z₋ > 0, because the condition z/(1+α) > h(z) > 0 can no longer hold at z = 0. □

The remaining cases
For (z 0 , b) ∈ E, we can deal with the limit for large z as in the case of A, and we can show that z − > 0 as in the case of F. By symmetry, we show similar properties in B, C and D.

Definition and properties of the profile h 0
The goal of this subsection is to show that, given α ∈]0, 1[ and κ₊, κ₋ > 0, there exists a unique value of z_c and a unique viscous solution of (1.6)-(1.7) in the sense of the introduction. We will also study its properties.

A connected implicit problem
We look for an implicit solution of α/(1+α) h + (z/(1+α) − h)∂_z h = 0 of the form z = g(h). Differentiating z = g(h(z)) with respect to z, we have 1 = ∂_z h g′(h), and replacing, we deduce that
$$\alpha h\, g'(h) + g(h) = (1+\alpha)h.$$
The solutions of this equation are of the form g(h) = h + C|h|^{−1/α}. This is why we define, for α ∈]0, 1[ and κ > 0, the function
$$g_\kappa(y) = y + \kappa\, |y|^{-1/\alpha}, \qquad y \neq 0.$$
We are interested in the solutions of the implicit problem z = g_κ(y(z)). First, remark that g_κ(y) → +∞ when y → 0^±, g_κ(y) → ±∞ when y → ±∞ and g′_κ(y) → 1 when y → ±∞. We compute for y ≠ 0 that
$$g_\kappa'(y) = 1 - \frac{\kappa}{\alpha\, y\, |y|^{1/\alpha}}.$$
In particular, g′_κ > 0 on ]−∞, 0[. For y > 0, we have g′_κ(y) = 0 if and only if y = y_κ = (κ/α)^{α/(1+α)}, and g′_κ(y) > 0 on ]y_κ, +∞[. By the implicit function theorem, given κ₊, κ₋ > 0 we construct two particular branches of functions.
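The implicit formulation above can be cross-checked numerically. The sketch below is our own illustration (the values of α, C and the sample points are assumptions made for the test): it solves z = g(h) by bisection on a monotone branch and verifies that the resulting h solves the ε = 0 equation.

```python
# Cross-check (our own illustration): a function h(z) defined implicitly by
#   z = g(h) = h + C * h**(-1/ALPHA),   h > 0,
# solves the epsilon = 0 equation
#   ALPHA/(1+ALPHA) * h + (z/(1+ALPHA) - h) * h'(z) = 0,
# with h'(z) = 1 / g'(h) by implicit differentiation.

ALPHA, C = 0.5, 1.0

def g(h):
    return h + C * h ** (-1.0 / ALPHA)

def gprime(h):
    return 1.0 - (C / ALPHA) * h ** (-1.0 / ALPHA - 1.0)

def solve_h(z, lo, hi):
    # bisection for g(h) = z on a branch where g is increasing
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def residual(z):
    h = solve_h(z, 2.0, z)  # branch with h close to z (g is increasing there)
    hp = 1.0 / gprime(h)
    return ALPHA / (1.0 + ALPHA) * h + (z / (1.0 + ALPHA) - h) * hp

# residual(z) should vanish up to the bisection tolerance.
```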

Lemma 2.3
The functions h_± satisfy, on their domains of definition, the limit equation. Proof We first check that α y g′_κ(y) = (1+α)y − g_κ(y), and since g_{κ±}(y*_±(z)) = z, differentiating with respect to z the equation z = g_κ(y(z)) leads to the claimed equation. Furthermore, for z ⩽ z₀, n ∈ N, there exists C_n > 0 depending on n, z₀, α such that the stated bounds on ∂ⁿ_z h_± hold. Similarly, for (z₀, b) ∈ F, the solution of the problem (2.2) is h₊ defined above for the corresponding value of κ, and the analogous bounds hold for z ⩾ z₀. Proof Given (z₀, b) ∈ A, we look for a value κ such that the branch passes through (z₀, b). This inequality is a consequence of the fact that (z₀, b) ∈ F, which implies that z₀/(1+α) > b > 0, and thus z₀ − b ⩾ αb.
Now, concerning the computations of ∂ⁿ_z h_±, we differentiate the equation and conclude by induction. □ We complete this subsection with a technical lemma on the dependency of h_± on b and z₀.
Lemma 2.5 The function (b, z₀) → h_± is differentiable, and there exists K > 0 depending on b, z₀ such that the stated bounds hold on the domain of definition of h_±.
Proof Take y(z) a solution of the implicit problem defined on some interval [z₀, +∞[. By the remarks above Lemma 2.3 we have y(z) ∼ z when z → +∞. Writing y = z + ȳ(z), we check the corresponding expansion when z → +∞. We then check easily that a similar expansion holds for the derivatives when z → +∞, which implies the result of the lemma for h₊, and a similar proof works for h₋. □

Connection between the jump and the limits at ±∞
We recall the notation from the previous subsections. The map (z_c, a) → (κ₊, κ₋) is a smooth function and a bijection. Proof By Lemma 2.1, the solution of (2.2) with z_c > 0 and 0 < h(z_c) < z_c/(1+α) (that is z_c/(1+α) > a > 0) is well defined for all z ⩾ z_c, and we have its equivalent at +∞. Similarly, the solution taking the corresponding value at z_c^- is well defined for all z ⩽ z_c, and we have its equivalent at −∞. We deduce that the map (z_c, a) → (κ₊, κ₋) is well defined. Let us show that it is a bijection.
Writing a = z_c b with b ∈ ]α/(1+α), 1/(1+α)[, we have that ζ is a smooth function of b on ]α/(1+α), 1/(1+α)[, and we check the claimed monotonicity. This completes the proof of the lemma. □ We now can construct the function h₀: for κ₊, κ₋ > 0, take (z_c(κ₊, κ₋), a(κ₊, κ₋)) given by Lemma 2.6; then h₀ is the solution of Lemma 2.6 for these values. It is almost a solution of (1.3), but it is discontinuous at z_c. It satisfies the Rankine-Hugoniot condition, and by Lemma 2.6 it is the only solution among the ones behaving like κ_±|z|^{−α} at ±∞ doing so.
Our goal in the next subsection is to construct a better approximation h ε , that will be continuous at z c , and be close to h 0 away from it when ε is small.

Shooting from z c and shape of the profile near it
We want to understand the solution to the problem (2.4), for some given parameters z_c, a > 0 with z_c/(1+α) > a > α z_c/(1+α), that for now are independent of ε. In this subsection, we take h to be the solution of Lemma 2.6 associated to the values of z_c and a. The function h is discontinuous at z_c. We want to show that for the right choice of z_c and a, h_ε is close to h far from z_c, and we want to compute the shape of h_ε near z_c.

Estimates in z
Lemma 2.7 There exists w₀ > 0 depending only on α, z_c, a such that h_ε, the solution of (2.4), is well defined on [z_c, z_c + w₀ε ln(1/ε)] and satisfies the estimates below, for some constant C > 0 depending only on α, z_c, a when ε → 0.

This lemma implies that, when we are at distance w₀ε ln(1/ε) to the right of z_c for some constant w₀ > 0 independent of ε, the functions h_ε and h and their derivatives are close. In particular, h′_ε(z_c) = −a²/(2ε) is large when ε is small, but h′_ε(z_c + w₀ε ln(1/ε)) is of size 1. In other words, at z_c + w₀ε ln(1/ε) the jump has ended, and h_ε, h′_ε will from now on be bounded uniformly in ε. The choice of w₀ε ln(1/ε) is not necessarily optimal, it might be improved, but it is enough here. We also compute the first order correction in C¹ between h_ε and h to the right of z_c.

Proof We decompose, for z > z_c, Z = (z − z_c)/ε > 0, the solution of (2.4) as h_ε(z) = h(z) + (F(Z) + a) + εG(Z), and we recall that α/(1+α) h(z) + (z/(1+α) − h(z)) h′(z) = 0. The function h is discontinuous at z = z_c, but we focus here only on z ∈ [z_c, z_c + w₀ε ln(1/ε)]. We choose F as the solution of the layer problem, given by Lemma 2.6. Let us compute the equation satisfied by G: we define the source part S(Z) and the operator O(G) accordingly, and we first estimate S(Z) for Z > 0. Take for now any w₀ > 0, independent of ε. Then, for Z ∈ [0, w₀ ln(1/ε)], we have |S(Z)| ⩽ K for a constant K > 0 depending only on w₀, α, a.
Let us now look at the coefficients in the operator O(G), that we write A₁ and A₂. In particular, A₁ and A₂ are bounded by constants independent of ε if ε < 1. By the estimates on S, for any Z₀ > 0, if ε > 0 is small enough depending on Z₀ (so that the nonlinear term εG(Z)G′(Z) can be neglected), the estimate (2.5) holds for Z ∈ [0, Z₀]. This is because the equation satisfied by G is, except for the term −εGG′, linear with a bounded source term. Without this nonlinear term the solution would then be global, and taking ε > 0 small enough depending on Z₀, since G(0), G′(0), A₁ and A₂ are bounded uniformly in ε, the solution exists at least on [0, Z₀] with a uniform estimate depending on Z₀. Now, remark that A₁(Z) → z_c/(1+α) − h(z_c^+) if Z ⩾ w₀ ln(1/ε) with w₀ large (such that F′(w₀ ln(1/ε)) ⩽ ε² for instance) when ε → 0. We therefore rewrite and simplify the equation on G. We define, to simplify the notations, λ and µ so that the equation on G can be written in terms of them. For ε > 0 small enough we have λ² − 4εµ > 0, and then we can factorize the linear part. Let us show that for C₀ > 0 large enough (independently of ε) and ε small enough, we have (2.7) for Z ∈ [0, w₀ ln(1/ε)]. This is true on [0, Z₀] for some Z₀ > 0 by (2.5). Now, if the result is not true, we denote by Z_c, with w₀ ln(1/ε) ⩾ Z_c ⩾ Z₀, the first value such that this estimate becomes an equality. Then, on [0, Z_c] the estimate holds, and plugging it into (2.6) leads to a bound with constants K, C > 0 independent of ε and C₀. We can easily show a similar estimate on G′(Z_c), up to an increase of K₀. We deduce that if Z₀, C₀ are large enough and ε small enough, then we reach a contradiction.
This completes the proof of (2.7). Going back to h_ε(z) = h(z) + (F(Z) + a) + εG(Z), taking w₀ large enough and using Lemma 2.4, we conclude the proof of this lemma. □

By standard Cauchy theory, at fixed z_c and a, ε → h_ε is a smooth function. We conclude this subsection with some estimates on ∂_ε h_ε.

Lemma 2.8 For α ∈]0, 1[, z_c > 0, z_c/(1+α) > a > α z_c/(1+α), there exists ε₀, C > 0 depending only on α, z_c, w₀ such that, if ε₀ > ε > 0 and h_ε is the solution of (2.4) for these parameters, then the estimates below hold for z ∈ [z_c, z_c + w₀ε ln(1/ε)].

Proof We recall that, with Z = (z − z_c)/ε, we have the decomposition of Lemma 2.7. In the previous lemma, we did not write the dependency of G on ε since we did not differentiate with respect to it, but we do so here. With the explicit formula of F and (2.7), we check that for z ∈ [z_c, z_c + w₀ε ln(1/ε)] we have the stated bound, where K > 0 depends only on α, z_c, w₀. Furthermore, from the proof of Lemma 2.7 we know the equation satisfied by G, and by similar arguments as in the proof of Lemma 2.7, we conclude the bound for some constant C > 0 depending only on α, z_c, w₀.

Hence, by Lemma 2.7, this concludes the proof of this lemma. □
Lemma 2.9 For α ∈]0, 1[, z_c > 0, z_c/(1+α) > a > α z_c/(1+α), there exists w₀ > 0 depending on α, z_c, a such that h_ε, the solution of (2.4), is well defined on [z_c − w₀ε ln(1/ε), z_c] and satisfies the estimates below, for some constant C > 0 depending only on α, z_c, a when ε → 0.

Proof For z < z_c, keeping the notation Z = (z − z_c)/ε < 0, we decompose h_ε, the solution of (2.4), with the same function F as in the proof of Lemma 2.7, but another function G. We recall that the function h is not continuous at z_c, and since we consider here z < z_c, it has a different limit for z − z_c < 0 close to 0. As in the proof of Lemma 2.7, we check the equation satisfied by G. We now define Ḡ(Z) = G(−Z), which satisfies the reflected equation; we therefore consider Z > 0 in the rest of the proof, and we can complete it in a similar fashion as for Lemma 2.7. □
Lemma 2.10 For α ∈]0, 1[, z_c > 0, z_c/(1+α) > a > α z_c/(1+α), there exists ε₀, C > 0 depending only on a, z_c, w₀ such that, if ε₀ > ε > 0 and h_ε is the solution of (2.4) for these parameters, then the estimates below hold. The proof of this result is similar to the proof of Lemma 2.8 and we omit it.
Profile far from z c

Profile on the right of z c
We start with an a priori estimate on solutions to the ODE problem.
Lemma 2.11 For any z_d > 0, there exists K > 0 such that the solution to the problem below satisfies the stated bounds.

Proof We write the equation in terms of u = h′_ε/h_ε. First, we have u(z_d) < 0, and we show that as long as u exists, it stays negative; this implies that h_ε is decreasing. Indeed, at a first point where u would vanish, we would have u′(z) > 0, which is impossible. We deduce that u is bounded, and therefore global. Using the same idea, we can show that u(z) ⩽ −λ/z for some small (but independent of ε if ε is small enough) constant λ > 0. In particular, h_ε(z) z^{λ/2} → 0 when z → +∞. Similarly, we can bound u(z) from below, and therefore, by a comparison principle, on z ⩾ z_d we have two-sided bounds. Using these estimates in the equation h′_ε = u h_ε completes the proof of the lemma. □

We recall that h is a solution of the limit equation and is discontinuous at z_c.

Lemma 2.12 The function h_ε, the solution of (2.4), satisfies the stated estimates for |z| large enough (uniformly in ε).
Remark that this does not imply that lim z^α h_ε(z) = κ₊(z_c, a) when z → +∞, simply that their difference goes to 0 when ε → 0.
Proof We introduce first a generic problem that we will use both to estimate h ε and ∂ ε h ε .
We consider for now the problem (2.9), for given functions J₁, J₂, S of z, with initial values of v at some point z_d, and with J₂ that does not vanish. We introduce the function A defined by A(z_d) = 1 and the stated ODE. Then, writing v = Au, we obtain a transformed equation. Next, we introduce B with B(z_d) = 1, defined similarly. We introduce for γ > 0 a quantity N(z); if it is bounded for z ⩾ C₀, then by comparison there exists K > 0 (depending only on C₀ and γ) such that the stated bound holds. Integrating between z_d and z leads to (2.11).

Step 1. Existence and properties of κ_{ε,+}(z_c, a).
We take z_d = z_c + w₀ε ln(1/ε), and we recall that h_ε satisfies, for ε > 0 small enough, the equation above. We decompose h_ε = h + g with h the solution of (2.8), so that the equation satisfied by g is equation (2.9) with J₁ = α/(1+α) − h′, J₂ = z/(1+α) − h_ε and S_ε = −h″. Remark that for z ⩾ z_d, we have J₂(z) > 0. We then deduce that there exists K > 0 depending on z_d, α such that the bound holds for z ⩾ z_d, and z^α A(z) converges when z → +∞ to a finite constant bounded uniformly in ε. With g = Au, we define N(z), and by (2.10) we deduce the corresponding estimate. Combining these estimates in (2.11) and the integral of (2.11), we deduce the bound for some constant C > 0 independent of ε and for all z ⩾ z_d. More precisely, with the explicit definition of A, by Lemma 2.5 and with (2.12), we conclude the estimates for z ⩾ z_d. This last estimate can be improved (we can remove the ln(1/ε)) but it is not needed here: we will only need ε∂_ε h_ε to be small, and not ∂_ε h_ε itself.
Step 2. Differentiation with respect to ε at fixed z d .
We consider here h ε the solution to the problem < 0, and they are, with z d , independent of ε. We have as previously that for z ⩾ z d , We introduce this new notation since we want to differentiate h ε with respect to ε, but its dependency on ε comes from the ε in the equation as well as from z c and the value of h ε here. For h ε , the dependency on ε comes only from the ε in front of ∂ 2 z h ε in the equation. By standard Cauchy theory, ε → h ε is differentiable, and v = ∂ ε h ε satisfies the problem Following a proof similar to Step 1, we check that, with h ε = Au, u(z) converges to a finite limit u(+∞), with |u(+∞)| ⩽ K ln(1/ε), and that We also check, as in Step 1, that and for some k 0 depending on ε and some K > 0.
Step 3. Differentiation with respect to z d .
We consider here h ε the solution to the problem ε for some K > 0 independent of ε. As previously, estimate (2.13) holds. We want to compute We have −∂ z h ε (z d ) = −b and since which is bounded by K ln(1/ε) with K > 0 independent of ε. As in the previous case, we check that for some k 0 , K > 0 and

Step 4. Differentiation with respect to b.

We consider here h ε the solution to the problem This is similar to the previous steps, and we also check that ∂ b h ε can be estimated similarly.

Step 5. Conclusion.

The function h ε is solution to
satisfying (by Lemmas 2.7 and 2.8) Therefore, ∂ ε h ε can be written as a sum of the functions v of Steps 2 to 4, and since |∂ ε z d | ⩽ w 0 ln(1/ε), this concludes the proof of the lemma. □
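The bound |∂ ε z d | ⩽ w 0 ln(1/ε) invoked here is elementary calculus: with z d = z c + w 0 ε ln(1/ε) as in Step 1 (z c fixed at this stage), one computes

```latex
\partial_\varepsilon z_d
  = w_0\,\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\Big(\varepsilon \ln\frac{1}{\varepsilon}\Big)
  = w_0\Big(\ln\frac{1}{\varepsilon} - 1\Big),
\qquad\text{hence}\qquad
0 \le \partial_\varepsilon z_d \le w_0 \ln\frac{1}{\varepsilon}
\quad\text{for } 0 < \varepsilon \le e^{-1}.
```

Writing ∂ ε h ε as a sum of the functions v of Steps 2 to 4 is then the chain rule applied to the several sources of ε-dependence (the explicit ε in the equation and the dependence through the data at z d ).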

Profile on the left of z c
Lemma 2.13 The function h ε , the solution of (2.4), is well defined on ]−∞, z c − w 0 ε ln(1/ε)] and satisfies for |z| large enough.
The proof is similar to the one of Lemma 2.12 and we omit it.

End of the proof of Proposition 1.3
Take κ + , κ − > 0 and α ∈ ]0, 1[. By Lemma 2.6, we choose z c , a > 0 such that We infer that for ε small enough, we can take z c (ε), a(ε) > 0 such that when ε → 0, that this choice is unique, and that it determines h ε . This is a consequence of the implicit function theorem applied to the function Indeed, by Lemmas 2.6, 2.12 and 2.13, we have K(0, z c , a) = 0 and the Jacobian at ε = 0 is invertible. By Lemmas 2.12 and 2.13 we have the estimates The other properties in Proposition 1.3 are a consequence of Lemmas 2.7, 2.9, 2.12 and 2.13.
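For reference, the implicit function theorem step has the following standard form (the map K is the one defined above; writing (z c 0 , a 0 ) for the values given by Lemma 2.6 — this superscript notation is ours, for the statement only):

```latex
% Standard implicit function theorem, in the form used above:
% K is C^1 near (0, z_c^0, a^0), K(0, z_c^0, a^0) = 0,
% and D_{(z_c, a)} K(0, z_c^0, a^0) \in GL_2(\mathbb{R}).
% Then there exist \varepsilon_0 > 0 and unique C^1 maps
\varepsilon \mapsto \big(z_c(\varepsilon), a(\varepsilon)\big),
\qquad \big(z_c(0), a(0)\big) = (z_c^0, a^0),
\qquad K\big(\varepsilon, z_c(\varepsilon), a(\varepsilon)\big) = 0
\quad\text{for } |\varepsilon| < \varepsilon_0.
```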

Properties of ∂ ε h ε
We recall that the function h ε is the solution constructed in the previous subsection, with a particular choice of z c (ε), a(ε) such that the limits at ±∞ of |z| α h ε (z) are κ + and κ − respectively, quantities independent of ε. The function h ε therefore depends on ε directly as above, but also through z c (ε) and a(ε).
Lemma 2.14 The function ∂ ε h ε satisfies Proof By Lemma 2.12, we check that for z ⩾ z c (ε) + 1, we have and the estimate is a consequence of Lemma 2.8. For z ⩽ z c (ε) the proof is similar. The decay at infinity of , and integrating between −x and x for some large x > 0, Differentiating with respect to ε leads to and going to the limit x → +∞, we check with

Estimates of D 1 and D 2 and choice of W
In this subsection, we choose the values of λ and W so that D 1 and D 2 are strictly positive, and satisfy some good estimates.

Estimates for y > 0
The goal of this subsection is to show that for y > 0 we have for some constant C > 0 independent of ε. We recall that for y > 0, In this region, we choose W = 1. Then, We recall that −h ′ 0 (z c + εy) ⩾ 0, and from (3.4) we have Finally, from (2.7) we have which implies that, for t ⩾ T with T > 0 large enough and |M | small enough (so that c M is close to 0), we have

Estimates for y < 0
The goal of this subsection is to show that for y < 0, for a well-chosen weight W , we have and D 2 (y) ⩾ 1.
For y < 0, we recall that H ε (y) = h ε (z c + εy) = h 0 (z c + εy) + F (y) − a + εG(y), hence Let us estimate the coefficient in front of This also shows that for y ⩽ y 0 − 1, we have We choose γ such that for y ⩽ −γε −1 we have ⩽ 2Cε; we then check that, taking λ large enough and t ⩾ T with T large enough, we have D 2 (y) ⩾ 1.

Summary
With the above choices for W and λ, we have for some K > 0 independent of ε and D 2 ⩾ 1.
Remark that in the case y > 0, we could not have chosen a similar weight W , because D 1 contains a term y ∂ y W/W . For y < 0, since ∂ y W/W < 0, this is a positive quantity, but for y > 0 it would pose an issue.

Estimates on the source terms
We focus here on estimates on x .
This completes the proof of Theorem 1.4.
This section is devoted to the proof of Theorem 1.5, which concerns the equation Doing the same change of variables as in the proof of Theorem 1.4, the only change in equation (3.5) is the additional term E J := t^{2α/(1+α)} J(t^{−α/(1+α)} (H ε + u M + ∂ y g)). We recall that |J(u)| ⩽ K|u| 3 . The scalar product of E J with gW can be controlled by Since H ε (y) = h ε (z c + εy), we check that if α > 1/2 we have , and to treat it as an error term and conclude as in subsection 3.5, we need t^{−2α/(1+α)} ∫ R E J gW to decay in time strictly faster than t^{−1−δ} for δ > 0 small, provided that ∥g∥ 2 H 2 (R) ⩽ Kt^{−δ} . This is the case if (1−4α)/(1+α) < −1, that is, α > 2/3.
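The equivalence between the decay condition and the threshold on α is elementary arithmetic; since α ∈ ]0, 1[ we have 1 + α > 0, so multiplying by 1 + α preserves the inequality:

```latex
\frac{1-4\alpha}{1+\alpha} < -1
\;\Longleftrightarrow\; 1 - 4\alpha < -(1+\alpha)
\;\Longleftrightarrow\; 2 < 3\alpha
\;\Longleftrightarrow\; \alpha > \tfrac{2}{3}.
```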
We can check similarly that the terms ∫ R ∂ y E J ∂ y gW and ∫ R ∂ 2 y E J ∂ 2 y gW can be treated in the same way. For the latter, we use the fact that we also control µ∥∂ 3 y g∥ 2 L 2 (W ) .

Stability of h ε

This section is devoted to the proof of Theorem 1.4. For the viscous Burgers equation ∂ t u − ∂ 2 x u + u∂ x u = 0 and ε(t) = t^{(α−1)/(α+1)} , we introduce now the rescaling y
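As a sanity check on this rescaling parameter (using only α ∈ ]0, 1[ from the introduction): the exponent is negative, so ε(t) is a vanishing scale as t → +∞, consistent with a shrinking boundary layer:

```latex
\varepsilon(t) = t^{\frac{\alpha-1}{\alpha+1}},
\qquad \alpha \in \,]0,1[\ \Longrightarrow\
\frac{\alpha-1}{\alpha+1} \in \,]-1,0[,
\qquad\text{hence } \varepsilon(t) \longrightarrow 0 \text{ as } t \to +\infty.
```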