On Uniqueness and Blowup Properties for a Class of Second Order SDEs

As the first step for approaching the uniqueness and blowup properties of the solutions of the stochastic wave equations with multiplicative noise, we analyze the conditions for the uniqueness and blowup properties of the solution $(X_t,Y_t)$ of the equations $dX_t= Y_tdt$, $dY_t = |X_t|^\alpha dB_t$, $(X_0,Y_0)=(x_0,y_0)$. In particular, we prove that solutions are nonunique if $0<\alpha<1$ and $(x_0,y_0)=(0,0)$ and unique if $1/2<\alpha<1$ and $(x_0,y_0)\neq(0,0)$. We also show that blowup in finite time holds if $\alpha>1$ and $(x_0,y_0)\neq(0,0)$.


Introduction and main results
The basic uniqueness theory for ordinary differential equations (ODE) has been well understood for a long time. If $F(u)$ is a Lipschitz continuous function, then $\dot{u} = F(u)$, $u(0) = u_0$, has a unique solution valid for all time $t \ge 0$. Furthermore, the Lipschitz condition on the coefficients cannot be weakened to Hölder continuity with index less than 1.
The situation for stochastic differential equations (SDE) is very different. The classical Yamada–Watanabe theory of strong uniqueness [16] states that if $f(x)$ is a locally Hölder continuous function of index $1/2$ with at most linear growth, then $dX = f(X)\,dW$, $X_0 = x_0$, has a unique strong solution valid for all time $t \ge 0$. The Hölder continuity condition cannot be weakened to indices below $1/2$. Besides the Hölder $1/2$ condition, another notable difference from the ODE case is that the Yamada–Watanabe uniqueness result for SDE is essentially a one-dimensional result. That is, much less is known for vector-valued SDE, whereas the above statement for ODE remains true for vector-valued solutions.
The basic conditions for uniqueness of partial differential equations (PDE) are the same as for ODE: coefficients must be Lipschitz continuous. But the corresponding results for stochastic partial differential equations (SPDE) have only appeared recently. These results are restricted to the stochastic heat equation
$$\partial_t u = \partial_x^2 u + f(u)\,\dot{W}(t,x). \qquad (1.1)$$
Here $x \in \mathbb{R}$, $\dot{W} = \dot{W}(t,x)$ is two-parameter white noise, and $f$ is Hölder continuous with index $\gamma$. In this case, strong uniqueness holds for $\gamma > 3/4$ [12], but fails for $\gamma < 3/4$ [9]. One can also replace white noise by colored noise, which may allow $x$ to take values in $\mathbb{R}^d$ for $d > 1$, and may change the critical value of $\gamma$.
In fact, the case $\gamma = 1/2$ is the well-studied case of super-Brownian motion, also called the Dawson–Watanabe process; see [4], [13].
Types of SPDE other than the stochastic heat equation remain unexplored with regard to uniqueness, apart from the standard fact that uniqueness holds with Lipschitz coefficients. For example, there is no information about the critical Hölder continuity of $f(u)$ for uniqueness of the stochastic wave equation
$$\partial_t^2 u = \partial_x^2 u + f(u)\,\dot{W}(t,x). \qquad (1.2)$$
Here again $x \in \mathbb{R}$ and $\dot{W} = \dot{W}(t,x)$ is two-parameter white noise.
In order to shed light on uniqueness for the stochastic wave equation, we propose studying the corresponding SDE $\ddot{X} = f(X)\dot{B}$. By making this equation into a system of first order equations, we arrive at
$$dX_t = Y_t\,dt, \qquad dY_t = |X_t|^\alpha\,dB_t, \qquad (X_0, Y_0) = (x_0, y_0). \qquad (1.3)$$
Here $B = B_t$ is a standard Brownian motion, and we use the subscripts $X_t$ or $Y_t$ to indicate dependence on time, rather than $X(t)$ or $Y(t)$. We focus on the coefficient $f(x) = |x|^\alpha$ because this function had special importance in the stochastic heat equation, and it is a prototype of a function which is Hölder continuous of order $\alpha$.
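For intuition, the system (1.3) is straightforward to simulate. The following sketch uses a plain Euler–Maruyama discretization; the parameter values, step count, and seed are illustrative choices of ours, not part of the analysis:

```python
import math
import random

def simulate(alpha, x0, y0, t_final=1.0, n_steps=10_000, seed=0):
    """Euler-Maruyama discretization of dX = Y dt, dY = |X|^alpha dB."""
    rng = random.Random(seed)
    dt = t_final / n_steps
    sqrt_dt = math.sqrt(dt)
    x, y = x0, y0
    for _ in range(n_steps):
        dB = rng.gauss(0.0, 1.0) * sqrt_dt   # Brownian increment over one step
        x, y = x + y * dt, y + abs(x) ** alpha * dB
    return x, y

# one sample of (X_1, Y_1) for a sublinear coefficient (no blowup expected)
x, y = simulate(alpha=0.75, x0=1.0, y0=0.0)
```

Such paths are only a heuristic aid; none of the proofs below rely on simulation.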
Now we are ready to present our main results. In our first theorem, we show that when $\alpha > 1/2$ and the initial condition is nonzero, strong uniqueness holds for the solutions of (1.3) up to the time of hitting the origin or blowup. In the next theorem, we prove that when $\alpha > 1/2$, the unique strong solution of (1.3) from Theorem 1.1 never reaches the origin.

Theorem 1.2. If $\alpha > 1/2$ and $(x_0, y_0) \neq (0, 0)$, then the unique strong solution $(X_t, Y_t)$ to (1.3) never reaches the origin.
In our next result, we prove the nonuniqueness of solutions of (1.3) started at the origin. A few remarks are in order. Note that if $0 < \alpha < 1$, the coefficient $|x|^\alpha$ is locally Lipschitz continuous except in a neighborhood of $x = 0$. If $0 < \alpha \le 1$, then the solutions do not blow up in finite time, thanks to the sublinear growth of the coefficients away from $0$.

Now we turn our attention to the question of blowup in finite time. In the case of the stochastic heat equation (1.1), the critical Hölder continuity index $\gamma$ of $f$ is $3/2$: if $\gamma > 3/2$, then the solution blows up in finite time with positive probability (see [11], [8]), while for $\gamma < 3/2$ the solution does not blow up almost surely [7]. It is still unknown what happens when $\gamma = 3/2$.

Remarks
The blowup property of the stochastic wave equation appears to be more difficult to analyse. It is still not known what conditions on $f$ give finite time blowup of the solution of (1.2) (see [10]). Sufficient conditions for the divergence of the expected $L^2$ norm of the solutions in finite time were derived by Chow in [3]. This result, however, is insufficient to establish the almost sure blowup of the solutions to (1.2).
We study the solution of (1.3) as the first step for approaching the stochastic wave equation.
The finite time blowup of solutions of first order stochastic differential equations can be checked by the Feller test for explosions (see, for example, [5]); however, there is no comparably simple test for higher order equations. It is well known that the solution of (1.3) does not blow up if the coefficients have at most linear growth (that is, $\alpha \le 1$). In the next theorem, we prove that when $\alpha > 1$, the solution of (1.3) blows up in finite time with probability one. Before stating the theorem, we define the stopping time $\sigma_X := \lim_{n \to \infty} \inf\{t \ge 0 : |X_t| \ge n\}$; $\sigma_Y$ can be defined analogously. Then, the following theorem holds.
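Although discretized paths can never certify blowup, one can probe it numerically by recording, along a single simulated path, the first times the sup-norm reaches increasing levels. The sketch below is again an Euler–Maruyama discretization; the step size, horizon, levels, and seed are arbitrary choices of ours:

```python
import math
import random

def hitting_time(alpha, level, x0=1.0, y0=0.0, dt=1e-4, t_max=20.0, seed=1):
    """First time an Euler-Maruyama path of dX = Y dt, dY = |X|^alpha dB
    has |(X_t, Y_t)|_inf >= level (returns t_max if the level is not reached)."""
    rng = random.Random(seed)
    sqrt_dt = math.sqrt(dt)
    x, y, t = x0, y0, 0.0
    while t < t_max:
        if max(abs(x), abs(y)) >= level:
            return t
        dB = rng.gauss(0.0, 1.0) * sqrt_dt
        x, y = x + y * dt, y + abs(x) ** alpha * dB
        t += dt
    return t_max

# same seed => same driving path, so hitting times of nested levels are ordered
times = [hitting_time(alpha=1.5, level=lv) for lv in (10.0, 100.0, 1000.0)]
```

For $\alpha > 1$ one expects these hitting times to remain bounded as the level grows, consistent with finite time blowup, though this is purely suggestive.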
We now give some remarks.

Remarks:
1. The result of Theorem 1.4 is derived by showing that the blowup property of the solutions of (1.3) follows from the transience of a simplified, time-changed system. By proving that the inverse time change maps infinite time to a finite time, we establish the finite time blowup property.
2. From the proof of Theorem 1.4 it follows that $|X_t|$ and $|Y_t|$ will fluctuate up and down as $t \to \sigma_X$ and will not converge to any number in $\mathbb{R} \cup \{\infty\}$. However, due to the correlation between them, $|X_t| \vee |Y_t| \to \infty$ as $t \to \sigma_X$ (see Remark 5.2 in Section 5).
Structure of the paper. The rest of this paper is dedicated to the proofs of Theorems 1.1-1.4. In Sections 2-5, we prove Theorems 1.1-1.4 respectively.

Proof of Theorem 1.1
Since the coefficients of SDE (1.3) are locally Lipschitz, the solutions are strongly unique for α ≥ 1. We now focus on the case 1/2 < α < 1.
Recall that $Y^{i,n}_t$ is constant for $t \ge \tau_n$, so that $X^{i,n}_t$ is an affine function of $t$ for $t \ge \tau_n$. We claim that for each $i = 1, 2$, there is at most one time $t > \tau_n$ at which $X^{i,n}_t = 0$. If $X^{i,n}$ is constant for $t \ge \tau_n$, this constant cannot be $0$, since then we would have $|(X^{i,n}_{\tau_n}, Y^{i,n}_{\tau_n})|_\infty = 0$; in this case, there is no time $t \ge \tau_n$ at which $X^{i,n}_t = 0$. Otherwise, $X^{i,n}_t$ is a nonconstant affine function of $t$ for $t \ge \tau_n$, and so equals $0$ at most once for $t \ge \tau_n$.
We also define stopping times $\sigma^i_1 < \sigma^i_2 < \cdots$ as the successive times $t$ at which $X^{i,n}_t = 0$. We claim that with probability 1, these times do not accumulate. The preceding argument shows that for $i$ fixed, there is at most one value of $k$ for which $\sigma^i_k > \tau_n$. For $t < \tau_n$, since $|(X^{i,n}_t, Y^{i,n}_t)|_\infty > 2^{-n}$, we see that once $X^{i,n}_t = 0$, it cannot hit $0$ again before time $\tau_n$ without first achieving the level $|X^{i,n}_t| = 2^{-n}$. To see this, first assume that when $X^{i,n}_t = 0$, we have $Y^{i,n}_t > 0$; the case $Y^{i,n}_t < 0$ is similar and will be omitted. As long as $t < \tau_n$, we have $|Y^{i,n}_t| < 2^n$, and so the times $\sigma^i_k$ are separated by at least $2 \cdot 2^{-2n} = 2^{-2n+1}$ and are isolated. For simplicity, define $\sigma^i_0 = 0$. Also, if $\{\sigma^i_l\}_{l \ge 1}$ is finite and $\sigma^i_k$ is the last of these stopping times, define $\sigma^i_{k+m} = \sigma^i_k$ for $m > 0$.
We moreover define the corresponding stopping times $\tilde\sigma^i_1 < \tilde\sigma^i_2 < \cdots$ for the truncated system. From (2.2), it follows that in order to prove Theorem 1.1, it is enough to show pathwise uniqueness for the solutions of (2.2) for any $n \ge 1$. We have shown that the sequence of stopping times $\tilde\sigma^i_1 < \tilde\sigma^i_2 < \cdots$ has no accumulation points for $i = 1, 2$; therefore the following lemma is the last ingredient in the proof of Theorem 1.1.
Proof. We prove the lemma for $k = 0$, that is, for $\tilde\sigma^i_1$. Recall that $|x|^\alpha$ is a Lipschitz continuous function except in a neighborhood of $x = 0$. Hence it is enough to prove the uniqueness of the solutions to (2.2) starting at $X^{i,n}_0 = 0$, for $i = 1, 2$ and $t \in [0, \eta]$, up to the first time $\eta$ at which either one of the $|X^{i,n}_t|$ exits a neighborhood of $0$. It also follows that $\eta \le 1$.

Proof of Theorem 1.2
By the Cauchy–Schwarz inequality and Itô's isometry, we obtain the required moment bound.
Since $Y_t$ is given by a one-dimensional stochastic integral, it follows that $Y_t$ is a time-changed Brownian motion. In particular, if we define $T(t) := \int_0^t |X_s|^{2\alpha}\,ds$, then $\tilde{B}_t := Y_{T^{-1}(t)} - y_0$ is a standard Brownian motion as long as $T^{-1}(t)$ is well-defined. We also define $\tilde{X}_t := X_{T^{-1}(t)}$.
Then, by the chain rule and the inverse function differentiation rule, the time-changed processes satisfy a system with the same initial conditions as before. Thus, let $h(x) := \frac{1}{2\alpha+1}|x|^{2\alpha+1}\,\mathrm{sgn}(x)$, and observe that $h'(x) = |x|^{2\alpha}$. Since we are assuming that $\alpha > 0$, it follows that $h'(0) = 0$, and (3.4) holds for $x = 0$. It is easy to check that (3.4) also holds when $x > 0$ and $x < 0$.
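As a quick sanity check of this computation, one can verify $h'(x) = |x|^{2\alpha}$, including $h'(0) = 0$, by central differences; the exponent and test points below are arbitrary choices:

```python
def h(x, alpha):
    """h(x) = sgn(x) |x|^(2*alpha+1) / (2*alpha + 1)."""
    sign = 1.0 if x >= 0 else -1.0
    return sign * abs(x) ** (2 * alpha + 1) / (2 * alpha + 1)

def h_prime_numeric(x, alpha, eps=1e-6):
    """Central-difference approximation of h'(x)."""
    return (h(x + eps, alpha) - h(x - eps, alpha)) / (2 * eps)

alpha = 0.75  # arbitrary exponent > 1/2
# h'(x) should match |x|^(2*alpha) at every test point, including x = 0
vals = [(x, h_prime_numeric(x, alpha), abs(x) ** (2 * alpha))
        for x in (-2.0, -0.5, 0.0, 0.5, 2.0)]
```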
Let $\tilde{V}_t := h(\tilde{X}_t)$, so that $\tilde{V}_t$ is driven by $\tilde{Y}_t$. Motivated by this time change argument, let $\tilde{Y}_t := y_0 + \tilde{B}_t$ and $\tilde{V}_t := h(x_0) + \int_0^t \tilde{Y}_s\,ds$.
We will show that $Z_t := (\tilde{V}_t, \tilde{Y}_t)$ never equals $(0, 0)$. Note that $Z_t$ is a jointly Gaussian random vector. Using (4.4) and a simple calculation, we find the covariance matrix of $(\tilde{V}_t, \tilde{Y}_t)$. Since $(\tilde{V}_t, \tilde{Y}_t)$ is jointly Gaussian, its joint probability density admits the bound (3.8). We define the following events for natural numbers $N$. We wish to prove that $P(A) = 0$, and for this it is enough to prove that $P(A_N) = 0$ for all $N$. From now on, let $N$ be fixed.
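In the centered case ($h(x_0) = y_0 = 0$), the pair $(\tilde{V}_t, \tilde{Y}_t) = \big(\int_0^t \tilde{B}_s\,ds,\ \tilde{B}_t\big)$ has covariance entries $\operatorname{Var}\tilde{V}_t = t^3/3$, $\operatorname{Cov}(\tilde{V}_t, \tilde{Y}_t) = t^2/2$, and $\operatorname{Var}\tilde{Y}_t = t$. A Monte Carlo sketch checking these values at $t = 1$ (sample size, grid, and seed are our choices):

```python
import math
import random

def sample_VY(t=1.0, n=100, rng=random):
    """One sample of (V_t, Y_t) = (integral of B over [0, t], B_t) for a
    standard Brownian motion B, via a left-endpoint Riemann sum."""
    dt = t / n
    sqrt_dt = math.sqrt(dt)
    b, integral = 0.0, 0.0
    for _ in range(n):
        integral += b * dt
        b += rng.gauss(0.0, 1.0) * sqrt_dt
    return integral, b

rng = random.Random(42)
N = 5000
samples = [sample_VY(rng=rng) for _ in range(N)]
var_V = sum(v * v for v, _ in samples) / N    # expect t^3/3 = 1/3
cov_VY = sum(v * y for v, y in samples) / N   # expect t^2/2 = 1/2
var_Y = sum(y * y for _, y in samples) / N    # expect t = 1
```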
Fix $0 < \delta < 1$ and let $k, m, n$ be natural numbers. We define a few more events. As $k$ varies, $k 2^{-2n}$ forms a grid of points which gets denser as $n$ increases.
To deal with $E_{5,n,N}$, recall that Lévy's modulus of continuity for Brownian motion (see Mörters and Peres [6], Theorem 1.14) gives, for $T > 0$ fixed, an almost sure uniform bound on the increments of the Brownian motion. Now we deal with $\tilde{V}_t$. Note that on $E_{1,n,N}$, the velocity of $\tilde{V}_t$ is bounded by $n$ in absolute value. It follows that on $E_{1,n,N}$, all of the events $E_{6,k,n}$ occur, and so on $E_{1,n,N}$, $E_{7,n,N}$ also occurs.

Proof of Theorem 1.3
Since the solution starts at $(x_0, y_0) = (0, 0)$, we see that $(X_t, Y_t) \equiv (0, 0)$ is a solution to (1.3). Our goal is to exhibit another solution, but this will be a weak solution.
To gain information about strong uniqueness, we recall the following lemma of Yamada and Watanabe (see V.17, Theorem 17.1 of Rogers and Williams [14]). $T^{-1}(t)$ is a strictly increasing function and, as we show in Lemma 4.2 below, $T^{-1}(t)$ is almost surely finite and continuous for all $t > 0$. Therefore, there exists a continuous, increasing functional inverse, which we call $T(t)$.
Note that the initial condition of this system is (0, 0) and that such a system is not constant.
It remains to verify that $(X_t, Y_t)$ solves (1.3). By the chain rule, we can differentiate for any $t > 0$ such that $\tilde{V}_{T(t)} \neq 0$. In fact, the resulting formula also holds when $\tilde{V}_{T(t)} = 0$, at which point the derivative is zero.
From this calculation, we can easily check that $Y_t$ is a martingale with quadratic variation $d\langle Y \rangle_t = |X_t|^{2\alpha}\,dt$. Then we can define a Brownian motion accordingly. Using the chain rule, and recalling that by definition $\frac{d}{dt}\tilde{V}_t = \tilde{B}_t$, it follows that the desired equation holds for any $t > 0$ such that $\tilde{V}_{T(t)} \neq 0$. We show in Lemma 4.2 that $T^{-1}$ is strictly increasing. This means that $T(t)$ is also strictly increasing. According to (4.3), this means that the set of times $\{t > 0 : \tilde{V}_{T(t)} = 0\}$ has Lebesgue measure zero with probability one. Consequently, we conclude that the triple $(X_t, Y_t, B_t)$ is a non-constant weak solution to (1.3) with initial condition $(0, 0)$.
It remains to prove the following lemma, which guarantees that the time changes $T(t)$ and $T^{-1}(t)$ are continuous, increasing, and well-defined.

Lemma 4.2. With probability one, $I(t) < +\infty$ for all $t > 0$, and $t \mapsto I(t)$ is strictly increasing and continuous.
Proof of Lemma 4.2. We check that $E I(t) < +\infty$ for all $t > 0$ and for $0 < \beta < 2/3$. Note that $\tilde{V}_t$ is a normal random variable with mean $0$; next we compute its variance. The resulting integral converges provided $3\beta/2 < 1$, which is equivalent to $\beta < 2/3$. Furthermore, we remark that because $I(t) = \int_0^t |\tilde{V}_s|^{-\beta}\,ds$ is an integral with a strictly positive integrand, it is continuous and strictly increasing until its blow-up time. Since we have shown that $E I(t) < +\infty$ for all $t > 0$, $I(t)$ does not blow up, and is strictly increasing and continuous for all $t > 0$.
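The integrability threshold $\beta < 2/3$ can also be seen from scaling: in the centered case $\tilde{V}_s \sim N(0, s^3/3)$, so $E|\tilde{V}_s|^{-\beta} \propto s^{-3\beta/2}$, which is integrable near $0$ exactly when $3\beta/2 < 1$. A small numerical check of this scaling, using the standard moment formula $E|Z|^p = 2^{p/2}\,\Gamma((p+1)/2)/\sqrt{\pi}$ for $p > -1$ (the parameter values below are arbitrary):

```python
import math

def neg_moment_centered_normal(sigma, beta):
    """E|X|^(-beta) for X ~ N(0, sigma^2) and 0 < beta < 1, via the standard
    formula E|Z|^p = 2^(p/2) * Gamma((p+1)/2) / sqrt(pi) with p = -beta."""
    return (sigma ** -beta) * 2 ** (-beta / 2) \
        * math.gamma((1 - beta) / 2) / math.sqrt(math.pi)

beta = 0.5  # any 0 < beta < 2/3 exhibits the same scaling
# Var(V_s) = s^3/3, so E|V_s|^{-beta} is proportional to s^{-3*beta/2}
m_at_1 = neg_moment_centered_normal(math.sqrt(1.0 ** 3 / 3), beta)
m_at_4 = neg_moment_centered_normal(math.sqrt(4.0 ** 3 / 3), beta)
ratio = m_at_1 / m_at_4   # expect 4 ** (3 * beta / 2)
```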

Proof of Theorem 1.4
The proof of Theorem 1.4 contains two main ingredients. Recall that in Section 3, we showed that a solution of system (1.3) with $\alpha > 1/2$ and $(x_0, y_0) \neq (0, 0)$ can be represented as a time change of $(\tilde{V}_t, \tilde{Y}_t)$. In Proposition 5.1, we will prove that $(\tilde{V}_t, \tilde{Y}_t)$ is transient. Subsequently, in Lemma 5.3, we will show that the inverse time change $T^{-1}(t)$ in (4.2) satisfies $P\big(\sup_{t>0} T^{-1}(t) < +\infty\big) = 1$ when $\alpha > 1$ and $(x_0, y_0) \neq (0, 0)$. In other words, the time change $T^{-1}(t)$ maps an infinite time to a finite time almost surely, and this will complete the proof of Theorem 1.4.
Using inequality (3.8), we obtain the first estimate, and the second follows from a comparison principle. A bound on the probability of the event $A^c_{3,n}$ can be computed by a time change and the reflection principle.

Remark 5.2.
From the proof of Proposition 5.1, we can get a lower bound on the growth rate of $(\tilde{V}_t, \tilde{Y}_t)$. Since the time intervals $[n^2, (n+1)^2]$ have length $2n + 1$, the fluctuations of $\tilde{Y}_t$ over such intervals are of order $n^{1/2 + \delta_4} \ll n^{1 - \delta_3}$ for large values of $n$. This assertion holds because $0 < \delta_3 < 1/2$ and $0 < \delta_4 < 1/2 - \delta_3$. So the fluctuations will not bring $\tilde{Y}_t$ to $0$, if it is not already close to $0$.
Note that both $\tilde{V}_t$ and $\tilde{Y}_t$ are recurrent processes which return to $0$ infinitely often. However, considering the pair of processes $(\tilde{V}_t, \tilde{Y}_t)$, when one process takes a small value, the other takes a large value, due to the correlation between them. We will eventually have $|(\tilde{V}_t, \tilde{Y}_t)|_\infty \to \infty$ as $t \to \infty$.
Proof of Theorem 1.4. Suppose that $\alpha > 1$ and the solution $(X_t, Y_t)$ of (1.3) is started from $(x_0, y_0) \neq (0, 0)$. Let $T(t) = \int_0^t |X_s|^{2\alpha}\,ds$ and $h(x) = \frac{1}{2\alpha+1}|x|^{2\alpha+1}\,\mathrm{sgn}(x)$. The time-changed processes then satisfy a system driven by $\tilde{B}_t$, where $\tilde{B}_t$ is a standard one-dimensional Brownian motion. Recall the justification of this time change from the proof of Theorem 1.2.
We will prove the lemma shortly. Taking Lemma 5.3 for granted for now, from (4.2) and (5.4) we can derive the claim. By applying Lemma 5.3 with $\beta = \frac{2\alpha}{2\alpha+1}$, we conclude that (5.5) is satisfied. Recall that $\alpha > 1$, so that $2/3 < \beta < 1$, which satisfies the condition of Lemma 5.3.
For the proof of Lemma 5.3, we first require an alternative representation of the expectation $E|X|^{-\beta}$, where $X \sim N(m, \sigma^2)$ and $0 < \beta < 1$. We write the integral representation, involving a confluent hypergeometric function, in Lemma 5.4. Even though this expression is well known, the authors could not find a good reference for it (see [15] and Ch. 13 of [1]), so we give a direct proof of the lemma as well.
Lemma 5.4. Let $Z$ be a standard $N(0, 1)$ random variable and let $m \in \mathbb{R}$ and $\sigma^2 > 0$. Then for any $0 < \beta < 1$,
$$E|m + \sigma Z|^{-\beta} = \frac{1}{\Gamma(\beta/2)} \int_0^\infty \frac{\lambda^{\beta/2-1}}{\sqrt{1 + 2\lambda\sigma^2}}\, \exp\Big(-\frac{\lambda m^2}{1 + 2\lambda\sigma^2}\Big)\,d\lambda.$$

Proof. First, we prove that if $\xi$ is a nonnegative random variable, then for any $p > 0$ such that the integral converges,
$$E[\xi^{-p}] = \frac{1}{\Gamma(p)} \int_0^\infty \lambda^{p-1}\, E\big[e^{-\lambda\xi}\big]\,d\lambda.$$
This follows by switching the order of integration and the change of variables $t = \lambda\xi$ in $\Gamma(p) = \int_0^\infty t^{p-1} e^{-t}\,dt$. Second, we prove that if $Z \sim N(0, 1)$, then the Laplace transform of $|m + \sigma Z|^2$ is, for any $\lambda > 0$,
$$E\big[e^{-\lambda|m+\sigma Z|^2}\big] = \frac{1}{\sqrt{1 + 2\lambda\sigma^2}}\, \exp\Big(-\frac{\lambda m^2}{1 + 2\lambda\sigma^2}\Big).$$
Applying the first identity with $\xi = |m + \sigma Z|^2$ and $p = \beta/2$ yields the claimed formula.
We make the change of variables $u = \dfrac{2\lambda\sigma^2}{1 + 2\lambda\sigma^2}$.
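As a numerical check of the representation in Lemma 5.4, the sketch below evaluates the $\lambda$-integral directly (via the substitution $\lambda = e^s$ and a trapezoid rule, which are our own numerical choices) and compares it, in the case $m = 0$, with the closed form $E|\sigma Z|^{-\beta} = \sigma^{-\beta}\,2^{-\beta/2}\,\Gamma((1-\beta)/2)/\sqrt{\pi}$:

```python
import math

def exact_neg_moment(sigma, beta):
    """Closed form for m = 0:
    E|sigma Z|^(-beta) = sigma^(-beta) 2^(-beta/2) Gamma((1-beta)/2) / sqrt(pi)."""
    return (sigma ** -beta) * 2 ** (-beta / 2) \
        * math.gamma((1 - beta) / 2) / math.sqrt(math.pi)

def integral_neg_moment(m, sigma, beta, s_lo=-60.0, s_hi=60.0, n=24000):
    """E|m + sigma Z|^(-beta) via the Lemma 5.4 representation:
    Gamma(beta/2)^(-1) * int_0^inf lam^(beta/2-1) (1+2 lam sig^2)^(-1/2)
      * exp(-lam m^2 / (1+2 lam sig^2)) d lam,
    computed with the substitution lam = exp(s) and a trapezoid rule."""
    p = beta / 2
    h = (s_hi - s_lo) / n
    total = 0.0
    for i in range(n + 1):
        lam = math.exp(s_lo + i * h)   # d lam = lam ds absorbs one power of lam
        g = lam ** p * math.exp(-lam * m * m / (1 + 2 * lam * sigma ** 2)) \
            / math.sqrt(1 + 2 * lam * sigma ** 2)
        total += (0.5 if i in (0, n) else 1.0) * g
    return total * h / math.gamma(p)

beta, sigma = 0.5, 1.3   # arbitrary test values with 0 < beta < 1
approx = integral_neg_moment(0.0, sigma, beta)
exact = exact_neg_moment(sigma, beta)
```

Shifting the mean away from $0$ should only decrease the negative moment, which gives a second cheap consistency check.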
We are now ready to prove Lemma 5.3.
So, it is possible to find positive constants $C_3, \ldots, C_6$ such that
$$\exp\{-C_2 u f(t)\} \le C_3 \exp\{-C_4 u t^{-1}\} + C_5 \exp\{-C_6 u t^{-3}\} \qquad (5.9)$$
for all $t > 0$. So, to prove (5.8), we only need to show the convergence of the integrals of the two terms on the right. Let us first consider the first term. Without loss of generality, we may assume that