Article

Asymptotic Distributions for Power Variations of the Solutions to Linearized Kuramoto–Sivashinsky SPDEs in One-to-Three Dimensions

1
School of Economics, Hangzhou Dianzi University, Hangzhou 310018, China
2
Zhiyuan College, Shanghai Jiao Tong University, Shanghai 200240, China
*
Author to whom correspondence should be addressed.
Symmetry 2021, 13(1), 73; https://doi.org/10.3390/sym13010073
Submission received: 22 November 2020 / Revised: 28 December 2020 / Accepted: 29 December 2020 / Published: 3 January 2021

Abstract

We study the realized power variations, in time, of the fourth order linearized Kuramoto–Sivashinsky (LKS) SPDEs and their gradient, driven by the space–time white noise in one-to-three dimensional spaces, and show that they have infinite quadratic variation and dimension-dependent Gaussian asymptotic distributions. This class of equations was introduced, with Brownian-time-type kernel formulations, by Allouba in a series of articles starting in 2006, in which he proved the existence, uniqueness, and sharp spatio-temporal Hölder regularity for the above class of equations in d = 1, 2, 3. We use the relationship between the LKS-SPDEs and the Houdré–Villa bifractional Brownian motion (BBM), yielding temporal central limit theorems for the LKS-SPDEs and their gradient, and we use the underlying explicit kernels and spectral/harmonic analysis to prove our results. On one hand, this work builds on the recent works on the delicate analysis of variations of general Gaussian processes and of the stochastic heat equation driven by the space–time white noise; on the other hand, it builds on and complements Allouba's earlier works on the LKS-SPDEs and their gradient.

1. Introduction

The fourth order linearized Kuramoto–Sivashinsky (LKS) SPDEs are related to the model of pattern formation phenomena accompanying the appearance of turbulence (see [1,2,3,4] for the LKS class and for its connection to many classical and new examples of deterministic and stochastic pattern formation PDEs, and see [5,6] for classical examples of deterministic and stochastic pattern formation PDEs).
The fundamental kernel associated with the deterministic version of this class is built on the Brownian-time process in [3,7,8]. In this article, we give exact dimension-dependent asymptotic distributions of the realized power variations in time for the following important class of stochastic equations:
$$\frac{\partial U}{\partial t} = -\frac{\varepsilon}{8}\big(\Delta + 2\vartheta\big)^{2} U + \frac{\partial^{d+1} W}{\partial t\,\partial x},\quad (t,x)\in\mathbb{R}_{+}\times\mathbb{R}^{d};\qquad U(0,x) = u_{0}(x),\ x\in\mathbb{R}^{d},$$
where $\Delta$ is the d-dimensional Laplacian operator, $(\varepsilon,\vartheta)\in\mathbb{R}_{+}\times\mathbb{R}$ is a pair of parameters, and the noise term $\partial^{d+1}W/\partial t\,\partial x$ is the space–time white noise corresponding to the real-valued Brownian sheet W on $\mathbb{R}_{+}\times\mathbb{R}^{d}$, $d = 1,2,3$. The initial data $u_{0}$ is assumed to be Borel measurable, deterministic, and twice continuously differentiable on $\mathbb{R}^{d}$, with a second derivative that is locally Hölder continuous with some exponent $0<\gamma\le 1$.
Of course, Equation (1) is the formal (and nonrigorous) equation. Its rigorous formulation, which we work with in this paper, is given in mild form as a kernel stochastic integral equation (SIE). This SIE was first introduced and studied in [1,2,3,7,8,9,10]. We give it below in Section 3, along with some relevant details.
The existence and uniqueness as well as the sharp dimension-dependent $L^p$ and Hölder regularity of the linear and nonlinear noise versions of (1) were investigated in [1,2,9,10]. Exact uniform and local moduli of continuity for the LKS-SPDE in the time variable t and the space variable x, separately, were obtained in [4]; in fact, [4] established exact, dimension-dependent, spatio-temporal, uniform and local moduli of continuity for the fourth order LKS-SPDEs and their gradient. It was shown in [11] that the solution to a stochastic heat equation driven by the space–time white noise has, in time, infinite quadratic variation and is not a semimartingale; [11] also investigated temporal central limit theorems for modifications of the quadratic variation of the stochastic heat equation with space–time white noise.
The analysis of the asymptotic behavior of the realized variations is motivated by the study of the exact rates of convergence of some approximation schemes of scalar stochastic differential equations driven by a Brownian motion B (see, e.g., [11,12]), besides, of course, the traditional applications of the realized variations to parameter estimation problems (see, e.g., [13,14,15,16,17,18,19] in which asymptotic distributions for power variations of fractional Brownian motion (FBM) and related Gaussian processes were investigated).
In this paper, we show that the realized power variations of the process U and its gradient, in time, have infinite quadratic variation and dimension-dependent Gaussian asymptotic distributions. Our proof is based on the approach of [11]. We make use of the product-moments of various orders of the normal correlation surface of two variates in [20] to establish the exact convergence rates of the variances of the realized power variations of the process U and its gradient in time. On one hand, this work builds on the recent works on the delicate analysis of variations of general Gaussian processes and of the stochastic heat equation driven by the space–time white noise; on the other hand, it builds on and complements Allouba and Xiao's earlier works on the LKS-SPDEs and their gradient.
The rest of the paper is organized as follows. Some notation and the main results of this paper are stated in Section 2. In Section 3, we discuss the rigorous LKS-SPDE kernel SIE (mild) formulation and estimate the temporal increments of the LKS-SPDEs and their gradient by using the kernel SIE formulation and spectral/harmonic analysis. As a consequence, both the LKS-SPDEs and their gradient have infinite quadratic variation in time. In Section 4, we prove Theorems 1 and 2 by using the product-moments of various orders of the normal correlation surface of two variates in [20] and the approach of [11], respectively. In the final section, the results are summarized and discussed.

2. Statement of Results

2.1. Exact Convergence Rates of Variances and Temporal CLTs for the Realized Power Variations of LKS-SPDEs

In order to establish our main results we first introduce some notation. We consider discrete Riemann sums over a uniformly spaced time partition $t_j = j\,\Delta t$, where $\Delta t = n^{-1}$. Fix $x\in\mathbb{R}^d$. Let $\Delta U_{x;j} = U(t_j,x) - U(t_{j-1},x)$ and $\sigma_{x;j} = (\mathbb{E}[\Delta U_{x;j}^2])^{1/2}$. For any $p\in\mathbb{N}_+$ and $n\in\mathbb{N}_+$, we define
$$\Xi_p^n(U(\cdot,x))_t = \sum_{j=1}^{\lceil nt\rceil} (\Delta U_{x;j})^p.$$
Here and in the sequel, $\lceil a\rceil$ denotes the integer satisfying $\lceil a\rceil - 1 < a \le \lceil a\rceil$ for $a\in\mathbb{R}_+$.
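As a purely illustrative numerical aside (not from the paper), the realized power variation defined above is straightforward to compute from sampled path values. The sketch below evaluates it for a simulated standard Brownian motion, whose quadratic variation on [0, t] is t; the helper name `power_variation` and all parameters are our own choices.

```python
import math
import random

def power_variation(path, p):
    """Realized p-th power variation: sum of signed p-th powers of increments."""
    return sum((b - a) ** p for a, b in zip(path, path[1:]))

# Simulate a standard Brownian motion on [0, 1] with n uniform steps (Delta t = 1/n).
random.seed(0)
n, t = 200_000, 1.0
dt = t / n
path = [0.0]
for _ in range(n):
    path.append(path[-1] + random.gauss(0.0, math.sqrt(dt)))

qv = power_variation(path, 2)  # quadratic variation; concentrates near t = 1
```

For the LKS solution itself the analogous quantity blows up as the mesh is refined, which is the content of the results below.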
Let $\mu_p$ denote the p-th moment of a standard Gaussian random variable following an $N(0,1)$ law; that is, $\mu_{2p-1} = 0$ and $\mu_{2p} = (2p-1)!! = (2p)!/(p!\,2^p)$ for all $p\in\mathbb{N}_+$. For $j\in\mathbb{N}_+$, let $\phi_{d;j} = 2j^{1-d/4} - (j-1)^{1-d/4} - (j+1)^{1-d/4}$. For a real number $r\ge 1$, define $J_{d,r} = \sum_{j=1}^{\infty}\phi_{d;j}^{r}$. It follows from (49) below that $J_{d,r}$ is a positive and finite constant depending only on r. For any $p\in\mathbb{N}_+$, we define $\kappa_{d,p} = K_d^p\,\lambda_{d,p}$, where
$$K_d = \frac{1}{2^d\,(2-d/2)}\,\frac{\pi^{-d/2}}{\Gamma(d/2)}\left(\frac{8}{\varepsilon}\right)^{d/4}\int_0^{\infty} y^{d/2-1}\,e^{-y^2}\,dy,$$
and
$$\lambda_{d,p} = \begin{cases} \mu_{2p}-\mu_p^2 + \dfrac{p!\,p!}{2^{p-1}}\displaystyle\sum_{u=1}^{p/2}\dfrac{2^{2u}\,J_{d,2u}}{[(p/2-u)!]^2\,(2u)!}, & \text{if } p \text{ is even},\\[2ex] \mu_{2p} - \dfrac{p!\,p!}{2^{p-2}}\displaystyle\sum_{u=0}^{\lfloor p/2\rfloor}\dfrac{2^{2u}\,J_{d,2u+1}}{[(\lfloor p/2\rfloor-u)!]^2\,(2u+1)!}, & \text{if } p \text{ is odd}.\end{cases}$$
Here $\Gamma(s) = \int_0^{\infty} u^{s-1}e^{-u}\,du$, $s>0$, is the Gamma function.
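For concreteness, the constants just defined are easy to evaluate numerically. The sketch below is our own illustration (not from the paper): it computes $\mu_p$, $\phi_{d;j}$, and a truncation of $J_{d,r}$, and uses the substitution $u=y^2$ to evaluate the integral appearing in $K_d$ in closed form as $\Gamma(d/4)/2$.

```python
import math

def mu(p):
    """p-th moment of an N(0,1) variable: 0 for odd p, (2k-1)!! = (2k)!/(k! 2^k) for p = 2k."""
    if p % 2 == 1:
        return 0.0
    k = p // 2
    return math.factorial(p) / (math.factorial(k) * 2 ** k)

def phi(d, j):
    """phi_{d;j} = 2 j^(1-d/4) - (j-1)^(1-d/4) - (j+1)^(1-d/4); positive, of order j^(-d/4-1)."""
    g = 1.0 - d / 4.0
    return 2 * j ** g - (j - 1) ** g - (j + 1) ** g

def J(d, r, terms=100_000):
    """Truncated series for J_{d,r} = sum_{j>=1} phi_{d;j}^r."""
    return sum(phi(d, j) ** r for j in range(1, terms + 1))

# The Gaussian-type integral in K_d: substituting u = y^2 gives
# int_0^inf y^(d/2-1) exp(-y^2) dy = Gamma(d/4) / 2.
kd_integral = {d: 0.5 * math.gamma(d / 4) for d in (1, 2, 3)}
```

For r = 1 the series telescopes, so the truncated sums should approach 1; for r ≥ 2 they converge quickly because of the $j^{-d/4-1}$ decay.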
We first establish the exact convergence rate of the variance of the realized power variation of the process U.
Theorem 1. 
Fix $(\varepsilon,\vartheta)\in\mathbb{R}_+\times\mathbb{R}$ and $x\in\mathbb{R}^d$, and assume $d\in\{1,2,3\}$. Assume that $u_0\equiv 0$ and $\vartheta=0$ in (1). Then for each fixed $t>0$ and any $p\in\mathbb{N}_+$,
$$n^{-1+p(1-d/4)}\,\mathrm{Var}\big(\Xi_p^n(U(\cdot,x))_t\big) \to \kappa_{d,p}\,t$$
as n tends to infinity.
By (4), we have the following convergence in probability for the realized power variation of the process U.
Corollary 1. 
Fix $(\varepsilon,\vartheta)\in\mathbb{R}_+\times\mathbb{R}$ and $x\in\mathbb{R}^d$, and assume $d\in\{1,2,3\}$. Assume that $u_0\equiv 0$ and $\vartheta=0$ in (1). Then for each fixed $t>0$ and any $p\in\mathbb{N}_+$,
$$n^{-1+p(1-d/4)/2}\,\Xi_p^n(U(\cdot,x))_t \to K_d^{p/2}\,\mu_p\,t$$
in L 2 and in probability as n tends to infinity.
Remark 1. 
Since $\Xi_{2p}^n(U(\cdot,x))_t$ is monotone in t, (5) implies that $n^{-1+p(1-d/4)}\,\Xi_{2p}^n(U(\cdot,x))_t \to K_d^p\,\mu_{2p}\,t$ uniformly in probability on the time interval $[0,T]$ for any $T>0$. Moreover, (5) implies that, for a fixed point in space, the process $U(\cdot,x)$ has infinite quadratic variation.
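The blow-up of the quadratic variation in Remark 1 can be seen numerically from the covariance alone. The sketch below is our own check (not from the paper): it uses the unnormalized bifractional-type covariance $(t+s)^{1-d/4} - |t-s|^{1-d/4}$ (indices H = 1/2 and K = 1 − d/4, multiplicative constants dropped) and computes the expected quadratic variation exactly; refining the partition by a factor of 16 roughly doubles it when d = 1, consistent with growth of order $n^{d/4}$.

```python
def bbm_cov(t, s, gamma):
    """Unnormalized bifractional-type covariance (t+s)^gamma - |t-s|^gamma (H = 1/2, K = gamma)."""
    return (t + s) ** gamma - abs(t - s) ** gamma

def expected_quadratic_variation(n, gamma, t=1.0):
    """E[Xi_2^n] = sum of increment variances, computed exactly from the covariance."""
    total = 0.0
    for j in range(1, n + 1):
        tj, tp = j * t / n, (j - 1) * t / n
        total += bbm_cov(tj, tj, gamma) - 2 * bbm_cov(tj, tp, gamma) + bbm_cov(tp, tp, gamma)
    return total

d = 1
gamma = 1 - d / 4                 # temporal exponent of the LKS solution
qv_coarse = expected_quadratic_variation(1_000, gamma)
qv_fine = expected_quadratic_variation(16_000, gamma)
ratio = qv_fine / qv_coarse       # about 16^{d/4} = 2 for d = 1
```

The same computation with larger d gives a faster blow-up, matching the dimension dependence of the normalizations in Theorems 1 and 2.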
The temporal central limit theorem (CLT) for the realized power variation of the process U is as follows.
Theorem 2. 
Fix $(\varepsilon,\vartheta)\in\mathbb{R}_+\times\mathbb{R}$ and $x\in\mathbb{R}^d$, and assume $d\in\{1,2,3\}$. Assume that $u_0\equiv 0$ and $\vartheta=0$ in (1). Then for any $p\in\mathbb{N}_+$,
$$\left( U(t,x),\ \frac{1}{\sqrt{n}}\sum_{j=1}^{\lceil nt\rceil}\Big(n^{p(1-d/4)/2}\,(\Delta U_{x;j})^p - K_d^{p/2}\,\mu_p\Big)\right) \xrightarrow{\ \mathcal{L}\ } \Big( U(t,x),\ \kappa_{d,p}^{1/2}\,B(t)\Big)$$
as n tends to infinity, where $B=\{B(t),\ t\in[0,T]\}$ is a Brownian motion independent of the process U, and the convergence is in the space $D([0,T])^2$ equipped with the Skorohod topology.
Remark 2. 
By (2) and (3), both $K_d$ and $\kappa_{d,p}$ in (4)–(6) depend on the spatial dimension but are independent of x.

2.2. Exact Convergence Rates of Variances and Temporal CLTs for the Realized Power Variations of LKS-SPDE Gradient

Fix $x\in\mathbb{R}$. Let $\partial_x\Delta U_{x;j} = \partial_xU(t_j,x) - \partial_xU(t_{j-1},x)$ and $\partial_x\sigma_{x;j} = (\mathbb{E}[(\partial_x\Delta U_{x;j})^2])^{1/2}$. For any $p\in\mathbb{N}_+$ and $n\in\mathbb{N}_+$, we define
$$\partial_x\Xi_p^n(U(\cdot,x))_t = \sum_{j=1}^{\lceil nt\rceil} (\partial_x\Delta U_{x;j})^p.$$
For any $p\in\mathbb{N}_+$, we define $\chi_{d,p} = D_0^p\,\lambda_{d,p}$, where $\lambda_{d,p}$ is given in (2) and
$$D_0 = (2\pi)^{-1}\left(\frac{8}{\varepsilon}\right)^{3/4}\int_0^{\infty} y^{-1/4}\,e^{-y}\,dy.$$
We first establish the exact convergence rate of the variance of the realized power variation of the gradient process $\partial_xU(t,x)$.
Theorem 3. 
Fix $(\varepsilon,\vartheta)\in\mathbb{R}_+\times\mathbb{R}$ and $x\in\mathbb{R}$, and assume $d=1$. Assume that $u_0\equiv 0$ and $\vartheta=0$ in (1). Then for each fixed $t>0$ and any $p\in\mathbb{N}_+$,
$$n^{-1+p(1-d/4)}\,\mathrm{Var}\big(\partial_x\Xi_p^n(U(\cdot,x))_t\big) \to \chi_{d,p}\,t$$
as n tends to infinity.
By (8), we have the following convergence in probability for the realized power variation of the gradient process $\partial_xU(t,x)$.
Corollary 2. 
Fix $(\varepsilon,\vartheta)\in\mathbb{R}_+\times\mathbb{R}$ and $x\in\mathbb{R}$, and assume $d=1$. Assume that $u_0\equiv 0$ and $\vartheta=0$ in (1). Then for each fixed $t>0$ and any $p\in\mathbb{N}_+$,
$$n^{-1+p(1-d/4)/2}\,\partial_x\Xi_p^n(U(\cdot,x))_t \to D_0^{p/2}\,\mu_p\,t$$
in L 2 and in probability as n tends to infinity.
Remark 3. 
Since $\partial_x\Xi_{2p}^n(U(\cdot,x))_t$ is monotone in t, (9) implies that $n^{-1+p(1-d/4)}\,\partial_x\Xi_{2p}^n(U(\cdot,x))_t \to D_0^p\,\mu_{2p}\,t$ uniformly in probability on the time interval $[0,T]$ for any $T>0$. Moreover, (9) implies that, for a fixed point in space, the gradient process $\partial_xU(\cdot,x)$ has infinite quadratic variation.
The temporal central limit theorem for the realized power variation of the gradient process $\partial_xU(t,x)$ is as follows.
Theorem 4. 
Fix $(\varepsilon,\vartheta)\in\mathbb{R}_+\times\mathbb{R}$ and $x\in\mathbb{R}$, and assume $d=1$. Assume that $u_0\equiv 0$ and $\vartheta=0$ in (1). Then for any $p\in\mathbb{N}_+$,
$$\left( \partial_xU(t,x),\ \frac{1}{\sqrt{n}}\sum_{j=1}^{\lceil nt\rceil}\Big(n^{p(1-d/4)/2}\,(\partial_x\Delta U_{x;j})^p - D_0^{p/2}\,\mu_p\Big)\right) \xrightarrow{\ \mathcal{L}\ } \Big( \partial_xU(t,x),\ \chi_{d,p}^{1/2}\,B(t)\Big)$$
as n tends to infinity, where $B=\{B(t),\ t\in[0,T]\}$ is a Brownian motion independent of the process U, and the convergence is in the space $D([0,T])^2$ equipped with the Skorohod topology.
Remark 4. 
It is natural to expect that (6) and (10) hold for $\partial_xU(t,x)$ in $d = 1, 2, 3$. However, substantial extra work is needed to prove these statements. In particular, in order to apply the method in [11], one would have to establish the corresponding properties of the increments of $U(t,\cdot)$. Unfortunately, the method in [11] does not seem to apply anymore, and some new ideas may be needed.
Remark 5. 
By using Lemma 3 below and following the same lines as the proof of Theorem 1, we obtain Theorem 3; similarly, following the same lines as the proof of Theorem 2, we obtain Theorem 4. Therefore, only Theorems 1 and 2 are proved, and the proofs of Theorems 3 and 4 are omitted.

3. Methodology

3.1. Rigorous Kernel Stochastic Integral Equations Formulations

As in [4], for the LKS-SPDE, we use the LKS kernel to define its rigorous mild SIE formulation. This LKS kernel, as shown in [1,2,3], is the fundamental solution to the deterministic version of (12) ($a\equiv 0$ and $b\equiv 0$) below, and is given by:
$$K^{\mathrm{LKS}^{d}_{\varepsilon,\vartheta}}_{t;x,y} = \int_0^{\infty}\frac{e^{i\vartheta s}\,e^{-|x-y|^2/(2is)}}{(2\pi is)^{d/2}}\,K^{\mathrm{BM}}_{\varepsilon t;s}\,ds + \int_0^{\infty}\frac{e^{-i\vartheta s}\,e^{-|x-y|^2/(-2is)}}{(-2\pi is)^{d/2}}\,K^{\mathrm{BM}}_{\varepsilon t;s}\,ds = (2\pi)^{-d}\int_{\mathbb{R}^d} e^{-\frac{\varepsilon t}{8}(2\vartheta+|\xi|^2)^2}\,e^{i\langle\xi,\,x-y\rangle}\,d\xi = (2\pi)^{-d}\int_{\mathbb{R}^d} e^{-\frac{\varepsilon t}{8}(2\vartheta+|\xi|^2)^2}\,\cos\big(\langle\xi,\,x-y\rangle\big)\,d\xi,\quad (\varepsilon,\vartheta)\in\mathbb{R}_+\times\mathbb{R},$$
where $i=\sqrt{-1}$ and $K^{\mathrm{BM}}_{t;s} = \frac{e^{-s^2/(2t)}}{\sqrt{2\pi t}}$. Let $a, b:\mathbb{R}\to\mathbb{R}$ be Borel measurable. The nonlinear drift-diffusion LKS-SPDE is
$$\frac{\partial U}{\partial t} = -\frac{\varepsilon}{8}\big(\Delta+2\vartheta\big)^2 U + b(U) + a(U)\,\frac{\partial^{d+1}W}{\partial t\,\partial x},\quad (t,x)\in\mathbb{R}_+\times\mathbb{R}^d;\qquad U(0,x)=u_0(x),\ x\in\mathbb{R}^d.$$
Then, the rigorous LKS kernel SIE (mild) formulation is the stochastic integral equation
$$U(t,x) = \int_{\mathbb{R}^d} K^{\mathrm{LKS}^d_{\varepsilon,\vartheta}}_{t;x,y}\,u_0(y)\,dy + \int_{\mathbb{R}^d}\int_0^t K^{\mathrm{LKS}^d_{\varepsilon,\vartheta}}_{t-s;x,y}\,\Big[b\big(U(s,y)\big)\,ds\,dy + a\big(U(s,y)\big)\,W(ds\times dy)\Big]$$
(see p. 530 in [5] and Definition 1.1 and Equation (1.11) in [1]). Of course, the mild formulation of (1) is then obtained by setting $a\equiv 1$ and $b\equiv 0$ in (13).
Notation 1. 
Positive and finite constants (independent of x) in Section i are numbered as $c_{i,1}, c_{i,2}, \ldots$
We conclude this section by citing the following spatial Fourier transform of the ( ε , ϑ ) LKS kernels from Lemma 2.1 in [4].
Lemma 1. 
Let $K^{\mathrm{LKS}^d_{\varepsilon,\vartheta}}_{t;x}$ be the $(\varepsilon,\vartheta)$-LKS kernel. The spatial Fourier transform of the $(\varepsilon,\vartheta)$-LKS kernel in (11) is given by
$$\widehat{K}^{\mathrm{LKS}^d_{\varepsilon,\vartheta}}_{t;\xi} = (2\pi)^{-d/2}\,e^{-\frac{\varepsilon t}{8}(2\vartheta+|\xi|^2)^2};\quad (\varepsilon,\vartheta)\in\mathbb{R}_+\times\mathbb{R}.$$
Here, the following symmetric form of the spatial Fourier transform has been used: $\hat{f}(\xi) = (2\pi)^{-d/2}\int_{\mathbb{R}^d} f(u)\,e^{-i\xi\cdot u}\,du$.
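Lemma 1 can be sanity-checked numerically in d = 1 with ϑ = 0 and ε = t = 1. The sketch below is our own check (grid sizes and truncations are arbitrary choices): it evaluates the kernel through its cosine-integral representation and then takes the symmetric-convention Fourier transform at one frequency, recovering $(2\pi)^{-1/2}e^{-\varepsilon t\,\xi^4/8}$ approximately.

```python
import math

EPS, T = 1.0, 1.0

def lks_kernel(x, xi_max=8.0, m=800):
    """K_t(x) = (2*pi)^(-1) * int exp(-EPS*T*xi^4/8) * cos(xi*x) dxi, midpoint rule (d=1, theta=0)."""
    h = 2 * xi_max / m
    total = 0.0
    for k in range(m):
        xi = -xi_max + (k + 0.5) * h
        total += math.exp(-EPS * T * xi ** 4 / 8) * math.cos(xi * x)
    return total * h / (2 * math.pi)

def kernel_fourier_at(xi0, x_max=10.0, m=500):
    """Symmetric-convention spatial FT: (2*pi)^(-1/2) * int K(x) * exp(-i*xi0*x) dx (K is even, so real)."""
    h = 2 * x_max / m
    total = 0.0
    for k in range(m):
        x = -x_max + (k + 0.5) * h
        total += lks_kernel(x) * math.cos(xi0 * x)
    return total * h / math.sqrt(2 * math.pi)

xi0 = 1.0
lhs = kernel_fourier_at(xi0)
rhs = math.exp(-EPS * T * xi0 ** 4 / 8) / math.sqrt(2 * math.pi)
```

By Fourier inversion the two quantities agree exactly; the small residual here is only quadrature and truncation error.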

3.2. Estimates on the Temporal Increments of LKS-SPDEs and Their Gradient

Since U ( · , x ) is a centered Gaussian process, its law is determined by its covariance function, which is given in the following lemma. We also derive some needed estimates on the covariance function and the increment of U ( · , x ) .
Lemma 2. 
Fix $(\varepsilon,\vartheta)\in\mathbb{R}_+\times\mathbb{R}$ and $x\in\mathbb{R}^d$, and assume $d\in\{1,2,3\}$. Assume that $u_0\equiv 0$ and $\vartheta=0$ in (1). For all $s,t\in(0,T]$, we have
$$\mathbb{E}[U(t,x)\,U(s,x)] = K_d\big[(t+s)^{1-d/4} - |t-s|^{1-d/4}\big],$$
$$c_{4,1}\,|t-s|^{1-d/4} \le \mathbb{E}\big[(U(t,x)-U(s,x))^2\big] \le c_{4,2}\,|t-s|^{1-d/4},$$
and
$$\Big|\mathbb{E}\big[(U(t,x)-U(s,x))^2\big] - K_d\,|t-s|^{1-d/4}\Big| \le c_{4,3}\,s^{-(d/4+1)}\,|t-s|^2,$$
where $K_d$ is given in (3).
Proof. 
To show (15), we use Parseval's identity to get
$$\mathbb{E}[U(t,x)\,U(s,x)] = \int_{\mathbb{R}^d}\int_0^s K^{\mathrm{LKS}^d_{\varepsilon,0}}_{t-r;x,y}\,K^{\mathrm{LKS}^d_{\varepsilon,0}}_{s-r;x,y}\,dr\,dy = \int_0^s\int_{\mathbb{R}^d}\widehat{K}^{\mathrm{LKS}^d_{\varepsilon,0}}_{t-r;x,\xi}\,\overline{\widehat{K}^{\mathrm{LKS}^d_{\varepsilon,0}}_{s-r;x,\xi}}\,d\xi\,dr = (2\pi)^{-d}\int_0^s\int_{\mathbb{R}^d} e^{-\frac{\varepsilon(t-r)}{8}|\xi|^4 - \frac{\varepsilon(s-r)}{8}|\xi|^4}\,d\xi\,dr = (2\pi)^{-d}\int_0^s\int_{\mathbb{R}^d} e^{-\frac{\varepsilon(t+s-2r)}{8}|\xi|^4}\,d\xi\,dr.$$
Thus, by using the following integral formula (see the Corollary on page 23 in [21]):
$$\int_{\mathbb{R}^d} f\Big(\sum_{i=1}^d u_i^2\Big)\,du_1\cdots du_d = \frac{\pi^{d/2}}{\Gamma(d/2)}\int_0^{\infty} y^{d/2-1}\,f(y)\,dy,$$
(15) becomes
$$\mathbb{E}[U(t,x)\,U(s,x)] = (2\pi)^{-d}\,\frac{\pi^{d/2}}{\Gamma(d/2)}\int_0^{\infty} y^{d/2-1}\int_0^s e^{-\frac{\varepsilon(t+s-2r)}{8}y^2}\,dr\,dy.$$
This yields (15).
To verify (16): by (15), one has that, up to a constant, the mean-zero Gaussian process $\{U(t,x),\,t\ge 0\}$ is a BBM with indices $H=1/2$ and $K=1-d/4$. Thus, by the covariance function of BBM in [22], (16) holds.
To show (17), we introduce the following auxiliary Gaussian random field $\{G(t,x),\,t\in\mathbb{R}_+,\,x\in\mathbb{R}^d\}$:
$$G(t,x) = \int_{\mathbb{R}^d}\int_{\mathbb{R}}\Big[K^{\mathrm{LKS}^d_{\varepsilon,0}}_{(t-r)_+;x,y} - K^{\mathrm{LKS}^d_{\varepsilon,0}}_{(-r)_+;x,y}\Big]\,W(dr\times dy),$$
where $a_+ = \max\{a,0\}$ for all $a\in\mathbb{R}$. Then the LKS-SPDE solution U may be decomposed as $U(t,x) = G(t,x) - V(t,x)$, where
$$V(t,x) = \int_{\mathbb{R}^d}\int_{\mathbb{R}}\Big[K^{\mathrm{LKS}^d_{\varepsilon,0}}_{(t-r)_+;x,y}\,\mathbb{I}_{\{0>r\}} - K^{\mathrm{LKS}^d_{\varepsilon,0}}_{(-r)_+;x,y}\Big]\,W(dr\times dy).$$
This idea of decomposition originated in [23] in the second order SPDEs setting, and it has been applied in [24,25], also in the second order heat SPDE setting. Fix $x\in\mathbb{R}^d$. By Theorem 3.1 in [4], one has for any $0<s<t$,
$$\mathbb{E}\big[|G(t,x)-G(s,x)|^2\big] = K_d\,|t-s|^{1-d/4}.$$
Fix $x\in\mathbb{R}^d$. We apply Parseval's identity to the integral in y to get that for any $0<s<t$:
$$\mathbb{E}\big[|V(t,x)-V(s,x)|^2\big] = \int_{\mathbb{R}^d}\int_{\mathbb{R}}\Big|K^{\mathrm{LKS}^d_{\varepsilon,0}}_{t-r;x,y}\,\mathbb{I}_{\{0>r\}} - K^{\mathrm{LKS}^d_{\varepsilon,0}}_{s-r;x,y}\,\mathbb{I}_{\{0>r\}}\Big|^2\,dr\,dy = \int_{\mathbb{R}}\int_{\mathbb{R}^d}\Big|\widehat{K}^{\mathrm{LKS}^d_{\varepsilon,0}}_{t-r;x,\xi}\,\mathbb{I}_{\{0>r\}} - \widehat{K}^{\mathrm{LKS}^d_{\varepsilon,0}}_{s-r;x,\xi}\,\mathbb{I}_{\{0>r\}}\Big|^2\,d\xi\,dr.$$
Since
$$\widehat{K}^{\mathrm{LKS}^d_{\varepsilon,0}}_{t-r;x,\xi} = (2\pi)^{-d/2}\,e^{-i\langle x,\xi\rangle - \frac{\varepsilon(t-r)}{8}|\xi|^4},$$
Equation (24) becomes
$$\mathbb{E}\big[|V(t,x)-V(s,x)|^2\big] = (2\pi)^{-d}\int_{\mathbb{R}^d}\int_{\mathbb{R}}\Big|e^{-\frac{\varepsilon(t-r)}{8}|\xi|^4}\,\mathbb{I}_{\{0>r\}} - e^{-\frac{\varepsilon(s-r)}{8}|\xi|^4}\,\mathbb{I}_{\{0>r\}}\Big|^2\,dr\,d\xi.$$
Now, we apply Parseval's identity to the inner integral in r. To this end, let
$$\phi(r,\xi) = e^{-\frac{\varepsilon(t-r)}{8}|\xi|^4}\,\mathbb{I}_{\{0>r\}} - e^{-\frac{\varepsilon(s-r)}{8}|\xi|^4}\,\mathbb{I}_{\{0>r\}}.$$
Its Fourier transform in r is
$$\widehat{\phi}(\tau,\xi) = \frac{1}{i\tau + \frac{\varepsilon}{8}|\xi|^4}\Big(-e^{-\frac{\varepsilon t}{8}|\xi|^4} + e^{-\frac{\varepsilon s}{8}|\xi|^4}\Big).$$
Hence, by Parseval's identity, we see that for each $0<s<t$ Equation (26) becomes
$$\mathbb{E}\big[|V(t,x)-V(s,x)|^2\big] = (2\pi)^{-d}\int_{\mathbb{R}^d}\int_{\mathbb{R}}|\widehat{\phi}(\tau,\xi)|^2\,d\tau\,d\xi = (2\pi)^{-d}\int_{\mathbb{R}^d}\Big|e^{-\frac{\varepsilon t}{8}|\xi|^4} - e^{-\frac{\varepsilon s}{8}|\xi|^4}\Big|^2\int_{\mathbb{R}}\frac{1}{\tau^2 + \frac{\varepsilon^2}{64}|\xi|^8}\,d\tau\,d\xi \le c_{4,4}\int_{\mathbb{R}^d}|\xi|^{-4}\,e^{-\frac{\varepsilon s}{4}|\xi|^4}\,\Big|1 - e^{-\frac{\varepsilon(t-s)}{8}|\xi|^4}\Big|^2\,d\xi.$$
Since $|1-e^{-u}|^2 \le u^2$ for all $u\ge 0$, one has that for each $0<s<t$ Equation (27) becomes
$$\mathbb{E}\big[|V(t,x)-V(s,x)|^2\big] \le c_{4,5}\,(t-s)^2\int_{\mathbb{R}^d}|\xi|^4\,e^{-\frac{\varepsilon s}{4}|\xi|^4}\,d\xi = c_{4,5}\,\frac{\pi^{d/2}}{\Gamma(d/2)}\,(t-s)^2\int_0^{\infty} y^{d/2+1}\,e^{-\frac{\varepsilon s}{4}y^2}\,dy \le c_{4,6}\,s^{-(d/4+1)}\,(t-s)^2\int_0^{\infty} y^{d/2+1}\,e^{-y^2}\,dy.$$
Fix $x\in\mathbb{R}^d$. Since U and V are independent, one has
$$\mathbb{E}\big[|G(t,x)-G(s,x)|^2\big] = \mathbb{E}\big[|U(t,x)-U(s,x)|^2\big] + \mathbb{E}\big[|V(t,x)-V(s,x)|^2\big].$$
This yields (17). The proof of Lemma 2 is completed.  □
Since $\partial_xU(\cdot,x)$ is a centered Gaussian process, its law is determined by its covariance function, which is given in the following lemma. We also derive some needed estimates on the increments of $\partial_xU(\cdot,x)$.
Lemma 3. 
Fix $(\varepsilon,\vartheta)\in\mathbb{R}_+\times\mathbb{R}$ and $x\in\mathbb{R}$, and assume $d=1$. Assume that $u_0\equiv 0$ and $\vartheta=0$ in (1). For all $s,t\in(0,T]$, we have
$$\mathbb{E}[\partial_xU(t,x)\,\partial_xU(s,x)] = D_0\big[(t+s)^{1/4} - |t-s|^{1/4}\big],$$
$$c_{4,7}\,|t-s|^{1/4} \le \mathbb{E}\big[(\partial_xU(t,x)-\partial_xU(s,x))^2\big] \le c_{4,8}\,|t-s|^{1/4},$$
and
$$\Big|\mathbb{E}\big[(\partial_xU(t,x)-\partial_xU(s,x))^2\big] - D_0\,|t-s|^{1/4}\Big| \le c_{4,9}\,s^{-7/4}\,|t-s|^2,$$
where $D_0$ is given in (7).
Proof. 
To show (29), we use Parseval's identity to get
$$\mathbb{E}[\partial_xU(t,x)\,\partial_xU(s,x)] = \int_{\mathbb{R}}\int_0^s \partial_xK^{\mathrm{LKS}^d_{\varepsilon,0}}_{t-r;x,y}\,\partial_xK^{\mathrm{LKS}^d_{\varepsilon,0}}_{s-r;x,y}\,dr\,dy = \int_0^s\int_{\mathbb{R}}\xi^2\,\widehat{K}^{\mathrm{LKS}^d_{\varepsilon,0}}_{t-r;x,\xi}\,\overline{\widehat{K}^{\mathrm{LKS}^d_{\varepsilon,0}}_{s-r;x,\xi}}\,d\xi\,dr = (2\pi)^{-1}\int_0^s\int_{\mathbb{R}}\xi^2\,e^{-\frac{\varepsilon(t-r)}{8}|\xi|^4 - \frac{\varepsilon(s-r)}{8}|\xi|^4}\,d\xi\,dr = (2\pi)^{-1}\int_0^s\int_{\mathbb{R}}\xi^2\,e^{-\frac{\varepsilon(t+s-2r)}{8}|\xi|^4}\,d\xi\,dr.$$
Thus, (32) becomes
$$\mathbb{E}[\partial_xU(t,x)\,\partial_xU(s,x)] = (2\pi)^{-1}\left(\frac{8}{\varepsilon}\right)^{3/4}\big((t+s)^{1/4} - (t-s)^{1/4}\big)\int_0^{\infty} y^{-1/4}\,e^{-y}\,dy.$$
This yields (29).
To verify (30): by (29), one has that, up to a constant, the mean-zero Gaussian process $\{\partial_xU(t,x),\,t\ge 0\}$ is a BBM with indices $H=1/2$ and $K=1/4$. Thus, by the estimates on the increments of BBM in [22], (30) holds.
Fix $x\in\mathbb{R}$. We apply Parseval's identity to the integral in y to get that for any $0<s<t$:
$$\mathbb{E}\big[|\partial_xV(t,x)-\partial_xV(s,x)|^2\big] = \int_{\mathbb{R}}\int_{\mathbb{R}}\Big|\partial_xK^{\mathrm{LKS}^d_{\varepsilon,0}}_{t-r;x,y}\,\mathbb{I}_{\{0>r\}} - \partial_xK^{\mathrm{LKS}^d_{\varepsilon,0}}_{s-r;x,y}\,\mathbb{I}_{\{0>r\}}\Big|^2\,dr\,dy = \int_{\mathbb{R}}\int_{\mathbb{R}}\xi^2\,\Big|\widehat{K}^{\mathrm{LKS}^d_{\varepsilon,0}}_{t-r;x,\xi}\,\mathbb{I}_{\{0>r\}} - \widehat{K}^{\mathrm{LKS}^d_{\varepsilon,0}}_{s-r;x,\xi}\,\mathbb{I}_{\{0>r\}}\Big|^2\,d\xi\,dr.$$
Since
$$\widehat{K}^{\mathrm{LKS}^d_{\varepsilon,0}}_{t-r;x,\xi} = (2\pi)^{-1/2}\,e^{-i\langle x,\xi\rangle - \frac{\varepsilon(t-r)}{8}|\xi|^4},$$
Equation (34) becomes
$$\mathbb{E}\big[|\partial_xV(t,x)-\partial_xV(s,x)|^2\big] = (2\pi)^{-1}\int_{\mathbb{R}}\int_{\mathbb{R}}\xi^2\,\Big|e^{-\frac{\varepsilon(t-r)}{8}|\xi|^4}\,\mathbb{I}_{\{0>r\}} - e^{-\frac{\varepsilon(s-r)}{8}|\xi|^4}\,\mathbb{I}_{\{0>r\}}\Big|^2\,dr\,d\xi.$$
Now, we apply Parseval's identity to the inner integral in r. To this end, let
$$\phi(r,\xi) = e^{-\frac{\varepsilon(t-r)}{8}|\xi|^4}\,\mathbb{I}_{\{0>r\}} - e^{-\frac{\varepsilon(s-r)}{8}|\xi|^4}\,\mathbb{I}_{\{0>r\}}.$$
Its Fourier transform in r is
$$\widehat{\phi}(\tau,\xi) = \frac{1}{i\tau + \frac{\varepsilon}{8}|\xi|^4}\Big(-e^{-\frac{\varepsilon t}{8}|\xi|^4} + e^{-\frac{\varepsilon s}{8}|\xi|^4}\Big).$$
Hence, by Parseval's identity, we see that for each $0<s<t$ Equation (36) becomes
$$\mathbb{E}\big[|\partial_xV(t,x)-\partial_xV(s,x)|^2\big] = (2\pi)^{-1}\int_{\mathbb{R}}\int_{\mathbb{R}}\xi^2\,|\widehat{\phi}(\tau,\xi)|^2\,d\tau\,d\xi = (2\pi)^{-1}\int_{\mathbb{R}}\Big|e^{-\frac{\varepsilon t}{8}|\xi|^4} - e^{-\frac{\varepsilon s}{8}|\xi|^4}\Big|^2\int_{\mathbb{R}}\frac{\xi^2}{\tau^2 + \frac{\varepsilon^2}{64}|\xi|^8}\,d\tau\,d\xi \le c_{4,10}\int_{\mathbb{R}}|\xi|^{-2}\,e^{-\frac{\varepsilon s}{4}|\xi|^4}\,\Big|1 - e^{-\frac{\varepsilon(t-s)}{8}|\xi|^4}\Big|^2\,d\xi.$$
Since $|1-e^{-u}|^2 \le u^2$ for all $u\ge 0$, one has that for each $0<s<t$ Equation (37) becomes
$$\mathbb{E}\big[|\partial_xV(t,x)-\partial_xV(s,x)|^2\big] \le c_{4,11}\,(t-s)^2\int_{\mathbb{R}}|\xi|^6\,e^{-\frac{\varepsilon s}{4}|\xi|^4}\,d\xi \le c_{4,12}\,s^{-7/4}\,(t-s)^2\int_0^{\infty} y^{7/4}\,e^{-y}\,dy.$$
Thus, by using a similar argument to that in the proof of (17), (31) holds. The proof of Lemma 3 is completed.  □

4. Results

4.1. Exact Convergence Rates of Variances for LKS-SPDEs

We need the following product-moments of various orders of the normal correlation surface of two variates, which are Equations (viii) and (ix) in [20].
Lemma 4. 
Suppose that $(X,Y)\sim N\!\left(0,\begin{pmatrix}\sigma_1^2 & \rho\sigma_1\sigma_2\\ \rho\sigma_1\sigma_2 & \sigma_2^2\end{pmatrix}\right)$, where $\rho = (\sigma_1\sigma_2)^{-1}\mathbb{E}[XY]$. Then,
$$\mathbb{E}[X^pY^p] = \begin{cases}\dfrac{p!\,p!}{2^p}\,\sigma_1^p\sigma_2^p\displaystyle\sum_{j=0}^{p/2}\dfrac{(2\rho)^{2j}}{[(p/2-j)!]^2\,(2j)!}, & \text{if } p \text{ is even},\\[2ex] \rho\,\dfrac{p!\,p!}{2^{p-1}}\,\sigma_1^p\sigma_2^p\displaystyle\sum_{j=0}^{\lfloor p/2\rfloor}\dfrac{(2\rho)^{2j}}{[(\lfloor p/2\rfloor-j)!]^2\,(2j+1)!}, & \text{if } p \text{ is odd}.\end{cases}$$
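The product-moment formulas of Lemma 4 can be verified directly for small p: for standard normals, $\mathbb{E}[X^2Y^2] = 1+2\rho^2$ and $\mathbb{E}[X^3Y^3] = 9\rho + 6\rho^3$ are classical. The sketch below is our own check (not from the paper), comparing the formulas against Monte Carlo estimates; the function name and sample sizes are arbitrary.

```python
import math
import random

def product_moment(p, rho, s1=1.0, s2=1.0):
    """E[X^p Y^p] for a centered bivariate normal, via the product-moment formulas of Lemma 4."""
    if p % 2 == 0:
        m = p // 2
        series = sum((2 * rho) ** (2 * j) / (math.factorial(m - j) ** 2 * math.factorial(2 * j))
                     for j in range(m + 1))
        return math.factorial(p) ** 2 / 2 ** p * s1 ** p * s2 ** p * series
    m = p // 2
    series = sum((2 * rho) ** (2 * j) / (math.factorial(m - j) ** 2 * math.factorial(2 * j + 1))
                 for j in range(m + 1))
    return rho * math.factorial(p) ** 2 / 2 ** (p - 1) * s1 ** p * s2 ** p * series

# Monte Carlo cross-check: X = Z1 and Y = rho*Z1 + sqrt(1-rho^2)*Z2 are standard
# normals with correlation rho.
random.seed(1)
rho, n = 0.6, 400_000
acc2 = acc3 = 0.0
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    xy = z1 * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    acc2 += xy ** 2
    acc3 += xy ** 3
mc2, mc3 = acc2 / n, acc3 / n  # estimates of E[(XY)^2] and E[(XY)^3]
```

For ρ = 0.6 the closed forms give 1.72 and 6.696, respectively, and the simulation agrees within Monte Carlo error.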
Proof of Theorem 1. 
It suffices to prove (4) for the even p case, since the odd p case can be proved similarly. For $1\le i<j\le\lceil nt\rceil$, define $\rho_{x;ij} = (\sigma_{x;i}\sigma_{x;j})^{-1}\mathbb{E}[\Delta U_{x;i}\,\Delta U_{x;j}]$. Note that for a random variable X following an $N(0,\sigma^2)$ law,
$$\mathbb{E}[X^p] = \mu_p\,\sigma^p,\quad p\in\mathbb{N}_+.$$
By (39) and (40), one has
$$\begin{aligned}\mathrm{Var}\big(\Xi_p^n(U(\cdot,x))_t\big) &= \mathbb{E}\Big|\sum_{j=1}^{\lceil nt\rceil}\big((\Delta U_{x;j})^p - \mu_p\,\sigma_{x;j}^p\big)\Big|^2\\ &= \sum_{j=1}^{\lceil nt\rceil}\mathbb{E}\Big[\big((\Delta U_{x;j})^p-\mu_p\,\sigma_{x;j}^p\big)^2\Big] + 2\sum_{i=1}^{\lceil nt\rceil}\sum_{j=i+1}^{\lceil nt\rceil}\mathbb{E}\Big[\big((\Delta U_{x;i})^p-\mu_p\,\sigma_{x;i}^p\big)\big((\Delta U_{x;j})^p-\mu_p\,\sigma_{x;j}^p\big)\Big]\\ &= \sum_{j=1}^{\lceil nt\rceil}\Big(\mathbb{E}\big[(\Delta U_{x;j})^{2p}\big]-\mu_p^2\,\sigma_{x;j}^{2p}\Big) + 2\sum_{i=1}^{\lceil nt\rceil}\sum_{j=i+1}^{\lceil nt\rceil}\Big(\mathbb{E}\big[(\Delta U_{x;i})^p(\Delta U_{x;j})^p\big]-\mu_p^2\,\sigma_{x;i}^p\sigma_{x;j}^p\Big)\\ &= (\mu_{2p}-\mu_p^2)\sum_{j=1}^{\lceil nt\rceil}\sigma_{x;j}^{2p} + \frac{p!\,p!}{2^{p-1}}\sum_{u=1}^{p/2}\frac{2^{2u}}{[(p/2-u)!]^2\,(2u)!}\sum_{i=1}^{\lceil nt\rceil}\sum_{j=i+1}^{\lceil nt\rceil}\sigma_{x;i}^p\sigma_{x;j}^p\,\rho_{x;ij}^{2u}.\end{aligned}$$
It follows from (16) that
$$c_{5,1}^{-1}\,n^{-1+d/4} \le \sigma_{x;j}^2 \le c_{5,1}\,n^{-1+d/4}\quad\text{for all } 1\le j\le\lceil nt\rceil.$$
By (17), (42) and the Lagrange mean value theorem, it holds that for any real number $r>0$ and $1<j\le\lceil nt\rceil$,
$$\big|\sigma_{x;j}^r - (K_d\,n^{-1+d/4})^{r/2}\big| \le c_{5,2}\Big(\sigma_{x;j}^{r-2} + (K_d\,n^{-1+d/4})^{(r-2)/2}\Big)\big|\sigma_{x;j}^2 - K_d\,n^{-1+d/4}\big| \le c_{5,3}\,n^{-2+(-1+d/4)(r-2)/2}\,t_{j-1}^{-(d/4+1)}.$$
Note that since $d\in\{1,2,3\}$, one has $1/2\le (d/4+1)/2 < 1$. Thus
$$\frac{1}{n}\sum_{j=2}^{\lceil nt\rceil} t_{j-1}^{-(d/4+1)/2} \le \int_0^t u^{-(d/4+1)/2}\,du = \frac{2}{1-d/4}\,t^{(1-d/4)/2}.$$
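The left-endpoint Riemann-sum bound just used is elementary to confirm numerically: since $u\mapsto u^{-(d/4+1)/2}$ is decreasing, each term $(1/n)\,t_{j-1}^{-(d/4+1)/2}$ is at most the integral over the preceding subinterval. A small self-check of our own, with arbitrary n:

```python
import math

def riemann_sum(n, d, t=1.0):
    """(1/n) * sum over j = 2..ceil(nt) of t_{j-1}^(-(d/4+1)/2), with t_j = j/n."""
    beta = (d / 4 + 1) / 2  # lies in [5/8, 7/8] for d = 1, 2, 3, so u^(-beta) is integrable at 0
    m = math.ceil(n * t)
    return sum(((j - 1) / n) ** (-beta) for j in range(2, m + 1)) / n

def integral_value(d, t=1.0):
    """int_0^t u^(-(d/4+1)/2) du = 2/(1 - d/4) * t^((1-d/4)/2)."""
    return 2.0 / (1 - d / 4) * t ** ((1 - d / 4) / 2)

checks = {d: (riemann_sum(10_000, d), integral_value(d)) for d in (1, 2, 3)}
```

The sum stays below the integral for every d in {1, 2, 3}, as the monotonicity argument predicts.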
It follows from (43) (with $r=2p$) and (44) that
$$n^{-1+p(1-d/4)}\sum_{j=1}^{\lceil nt\rceil}\Big|\sigma_{x;j}^{2p} - (K_d\,n^{-1+d/4})^p\Big| \to 0.$$
Hence
$$n^{-1+p(1-d/4)}\sum_{j=1}^{\lceil nt\rceil}\sigma_{x;j}^{2p} = n^{-1+p(1-d/4)}\sum_{j=1}^{\lceil nt\rceil}\Big(\sigma_{x;j}^{2p} - (K_d\,n^{-1+d/4})^p\Big) + n^{-1+p(1-d/4)}\,(K_d\,n^{-1+d/4})^p\,\lceil nt\rceil \to K_d^p\,t.$$
It follows from (15) that
$$\mathbb{E}[\Delta U_{x;i}\,\Delta U_{x;j}] = K_d\,n^{-1+d/4}\Big((j+i)^{1-d/4} - (j-i)^{1-d/4} - (j+i-1)^{1-d/4} + (j-i+1)^{1-d/4} - (j+i-1)^{1-d/4} + (j-i-1)^{1-d/4} + (j+i-2)^{1-d/4} - (j-i)^{1-d/4}\Big),$$
which simplifies to
$$\mathbb{E}[\Delta U_{x;i}\,\Delta U_{x;j}] = -K_d\Big(n^{-1+d/4}\,\phi_{d;j+i-1} + n^{-1+d/4}\,\phi_{d;j-i}\Big),$$
where $\phi_{d;j} = 2j^{1-d/4} - (j-1)^{1-d/4} - (j+1)^{1-d/4}$. Thus, by the binomial expansion, for every $1\le u\le p/2$ and $1\le i<j\le\lceil nt\rceil$,
$$\begin{aligned}\sigma_{x;i}^p\sigma_{x;j}^p\,\rho_{x;ij}^{2u} &= \sigma_{x;i}^{p-2u}\sigma_{x;j}^{p-2u}\,\big(\mathbb{E}[\Delta U_{x;i}\,\Delta U_{x;j}]\big)^{2u} = K_d^{2u}\,\sigma_{x;i}^{p-2u}\sigma_{x;j}^{p-2u}\,\Big(n^{-1+d/4}\,\phi_{d;j+i-1} + n^{-1+d/4}\,\phi_{d;j-i}\Big)^{2u}\\ &= K_d^{2u}\sum_{v=0}^{2u}\binom{2u}{v}\,\sigma_{x;i}^{p-2u}\sigma_{x;j}^{p-2u}\,\big(n^{-1+d/4}\,\phi_{d;j+i-1}\big)^v\,\big(n^{-1+d/4}\,\phi_{d;j-i}\big)^{2u-v}.\end{aligned}$$
If we write $\phi_{d;k} = g(k-1) - g(k)$, where $g(s) = (s+1)^{1-d/4} - s^{1-d/4}$, then for each $k\ge 2$, the Lagrange mean value theorem gives $\phi_{d;k} = |g'(k-\zeta_1)| = (d/4)(1-d/4)\,(k-\zeta_1+\zeta_2)^{-d/4-1}$ for some $\zeta_1,\zeta_2\in[0,1]$. This yields that for all $k\in\mathbb{N}_+$,
$$0 < \phi_{d;k} \le c_{5,4}\,k^{-(d/4+1)},$$
and hence, for any $r\ge 1$,
$$\sum_{k=1}^{M}\phi_{d;k}^{r} \to J_{d,r}$$
with some $J_{d,r}>0$ as $M\to\infty$.
Note that since $j+i-1\ge (j+i)/2$, one has
$$n^{-1+d/4}\,\phi_{d;j+i-1} \le c_{5,5}\,n^{-2}\,\frac{1}{(t_i+t_j)^{d/4+1}}.$$
Note that (49) gives $n^{-1+d/4}\,\phi_{d;j-i} \le c_{5,6}\,n^{-1+d/4}$ and $n^{-1+d/4}\,\phi_{d;j+i-1} \le c_{5,7}\,n^{-1+d/4}$ for all $1\le i<j\le\lceil nt\rceil$. Thus, by (42) and (51), for every $1\le u\le p/2$ and $1\le v\le 2u$,
$$n^{-1+p(1-d/4)}\sum_{i=1}^{\lceil nt\rceil}\sum_{j=i+1}^{\lceil nt\rceil}\sigma_{x;i}^{p-2u}\sigma_{x;j}^{p-2u}\,\big(n^{-1+d/4}\phi_{d;j+i-1}\big)^v\,\big(n^{-1+d/4}\phi_{d;j-i}\big)^{2u-v} \le c_{5,8}\,n^{-d/4}\sum_{i=1}^{\lceil nt\rceil}\sum_{j=i+1}^{\lceil nt\rceil}\big(n^{-1+d/4}\,\phi_{d;j+i-1}\big) \le c_{5,9}\,n^{-2-d/4}\sum_{i=1}^{\lceil nt\rceil}\sum_{j=i+1}^{\lceil nt\rceil}\frac{1}{(t_i+t_j)^{d/4+1}},$$
which tends to zero as $n\to\infty$ since $\int_0^t\int_0^t (u+v)^{-(d/4+1)}\,du\,dv < \infty$.
We now consider the term $v=0$ in (48). Let $B^H = \{B^H(t),\,t\in\mathbb{R}_+\}$ be an FBM with index $H\in(0,1)$, which is a centered Gaussian process with $\mathbb{E}\big[(B^H(t)-B^H(s))^2\big] = |s-t|^{2H}$ for $s,t\in\mathbb{R}_+$. Then, for $H_0 = (1-d/4)/2$,
$$\mathbb{E}\Big[\Big(B^{H_0}\big(\tfrac{j+1}{n}\big)-B^{H_0}\big(\tfrac{j}{n}\big)\Big)\Big(B^{H_0}\big(\tfrac{i+1}{n}\big)-B^{H_0}\big(\tfrac{i}{n}\big)\Big)\Big] = -\frac{1}{2}\Big[2\Big(\tfrac{j-i}{n}\Big)^{1-d/4} - \Big(\tfrac{j-i-1}{n}\Big)^{1-d/4} - \Big(\tfrac{j-i+1}{n}\Big)^{1-d/4}\Big] = -\frac{1}{2}\,n^{-1+d/4}\,\phi_{d;j-i}.$$
Thus,
$$\begin{aligned} n^{-1+d/4}\sum_{i=1}^{\lceil nt\rceil}\sum_{j=i+1}^{\lceil nt\rceil}\phi_{d;j-i} &= n^{-1+d/4}\sum_{i=1}^{\lceil nt\rceil-1}\sum_{j=i+1}^{\lceil nt\rceil}\phi_{d;j-i} = -2\sum_{i=1}^{\lceil nt\rceil-1}\sum_{j=i+1}^{\lceil nt\rceil}\mathbb{E}\Big[\Big(B^{H_0}\big(\tfrac{j+1}{n}\big)-B^{H_0}\big(\tfrac{j}{n}\big)\Big)\Big(B^{H_0}\big(\tfrac{i+1}{n}\big)-B^{H_0}\big(\tfrac{i}{n}\big)\Big)\Big]\\ &= -2\sum_{i=1}^{\lceil nt\rceil-1}\mathbb{E}\Big[\Big(B^{H_0}\big(\tfrac{\lceil nt\rceil+1}{n}\big)-B^{H_0}\big(\tfrac{i+1}{n}\big)\Big)\Big(B^{H_0}\big(\tfrac{i+1}{n}\big)-B^{H_0}\big(\tfrac{i}{n}\big)\Big)\Big]\\ &= \sum_{i=1}^{\lceil nt\rceil-1}\Big[\Big(\tfrac{\lceil nt\rceil-i}{n}\Big)^{1-d/4} + \Big(\tfrac{1}{n}\Big)^{1-d/4} - \Big(\tfrac{\lceil nt\rceil+1-i}{n}\Big)^{1-d/4}\Big]\\ &= -\Big(\tfrac{\lceil nt\rceil}{n}\Big)^{1-d/4} + \Big(\tfrac{1}{n}\Big)^{1-d/4} + (\lceil nt\rceil-1)\,n^{-1+d/4}.\end{aligned}$$
This yields
$$n^{-d/4}\sum_{i=1}^{\lceil nt\rceil}\sum_{j=i+1}^{\lceil nt\rceil}\big(n^{-1+d/4}\,\phi_{d;j-i}\big) \to t.$$
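The FBM increment-covariance identity used in the computation above, $\mathbb{E}[(B^{H_0}(\frac{j+1}{n})-B^{H_0}(\frac{j}{n}))(B^{H_0}(\frac{i+1}{n})-B^{H_0}(\frac{i}{n}))] = -\frac{1}{2}n^{-1+d/4}\phi_{d;j-i}$, follows from the standard FBM covariance and can be checked to floating-point accuracy. A self-check of our own, with arbitrary d, n, i, j:

```python
def fbm_cov(t, s, H):
    """Standard FBM covariance: E[B^H(t) B^H(s)] = (t^(2H) + s^(2H) - |t-s|^(2H)) / 2."""
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

def fbm_increment_cov(i, j, n, H):
    """E[(B^H((j+1)/n) - B^H(j/n)) * (B^H((i+1)/n) - B^H(i/n))], by bilinearity."""
    a, b, c, e = (j + 1) / n, j / n, (i + 1) / n, i / n
    return fbm_cov(a, c, H) - fbm_cov(a, e, H) - fbm_cov(b, c, H) + fbm_cov(b, e, H)

d, n, i, j = 2, 50, 3, 9
gamma = 1 - d / 4
H0 = gamma / 2
phi_d_ji = 2 * (j - i) ** gamma - (j - i - 1) ** gamma - (j - i + 1) ** gamma
lhs = fbm_increment_cov(i, j, n, H0)
rhs = -0.5 * n ** (-gamma) * phi_d_ji
```

Expanding the four covariance terms, the $t^{2H}$ and $s^{2H}$ parts cancel and only the $|t-s|^{2H}$ terms survive, which is exactly the $\phi_{d;j-i}$ combination.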
By (42) and (49), one has for every $1\le u\le p/2$ and any $M>0$,
$$n^{-1+p(1-d/4)}\sum_{i=1}^{\lceil nt\rceil}\sum_{j=i+M+1}^{\lceil nt\rceil}\sigma_{x;i}^{p-2u}\sigma_{x;j}^{p-2u}\,\big(n^{-1+d/4}\phi_{d;j-i}\big)^{2u} \le c_{5,10}\,M^{-(d/4+1)(2u-1)}\,n^{-d/4}\sum_{i=1}^{\lceil nt\rceil}\sum_{j=i+M+1}^{\lceil nt\rceil}\big(n^{-1+d/4}\phi_{d;j-i}\big) \le c_{5,11}\,M^{-(d/4+1)(2u-1)}\,n^{-d/4}\sum_{i=1}^{\lceil nt\rceil}\sum_{j=i+1}^{\lceil nt\rceil}\big(n^{-1+d/4}\phi_{d;j-i}\big).$$
This, together with (50), yields
$$n^{-1+p(1-d/4)}\sum_{i=1}^{\lceil nt\rceil}\sum_{j=i+M+1}^{\lceil nt\rceil}\sigma_{x;i}^{p-2u}\sigma_{x;j}^{p-2u}\,\big(n^{-1+d/4}\phi_{d;j-i}\big)^{2u} \le c_{5,12}\,M^{-(d/4+1)(2u-1)}\,t,$$
which tends to zero by letting $M\to\infty$.
By (43) (with $r=p-2u$), (42) and (53), one has for every $1\le u\le p/2$,
$$\begin{aligned}&n^{-1+p(1-d/4)}\sum_{i=2}^{\lceil nt\rceil}\sum_{j=i+1}^{\lceil nt\rceil}\Big|\sigma_{x;i}^{p-2u} - (K_d\,n^{-1+d/4})^{(p-2u)/2}\Big|\,\sigma_{x;j}^{p-2u}\,\big(n^{-1+d/4}\phi_{d;j-i}\big)^{2u}\\ &\quad\le c_{5,13}\,n^{-1-d/2}\sum_{i=2}^{\lceil nt\rceil}\frac{1}{t_{i-1}^{d/4+1}}\sum_{j=i+1}^{\lceil nt\rceil}\big(n^{-1+d/4}\phi_{d;j-i}\big)\\ &\quad= 2c_{5,13}\,n^{-1-d/2}\sum_{i=2}^{\lceil nt\rceil}\frac{1}{t_{i-1}^{d/4+1}}\Big[\Big(\tfrac{\lceil nt\rceil-i}{n}\Big)^{1-d/4} + \Big(\tfrac{1}{n}\Big)^{1-d/4} - \Big(\tfrac{\lceil nt\rceil+1-i}{n}\Big)^{1-d/4}\Big]\\ &\quad\le c_{5,14}\,n^{-d/4}\sum_{i=2}^{\lceil nt\rceil}\Big[\Big(\tfrac{\lceil nt\rceil+1-i}{n}\Big)^{1-d/4} - \Big(\tfrac{\lceil nt\rceil-i}{n}\Big)^{1-d/4}\Big] + c_{5,15}\,n^{-2-d/4}\sum_{i=2}^{\lceil nt\rceil}\frac{1}{t_{i-1}^{d/4+1}}\\ &\quad\le c_{5,16}\,n^{-d/4}\Big[\Big(\tfrac{1}{n}\Big)^{1-d/4} + \Big(\tfrac{\lceil nt\rceil-1}{n}\Big)^{1-d/4}\Big] + c_{5,17}\,n^{-3/2-(d/4)/2}\sum_{i=2}^{\lceil nt\rceil} t_{i-1}^{-(d/4+1)/2},\end{aligned}$$
which tends to zero as $n\to\infty$ since $\int_0^t s^{-(d/4+1)/2}\,ds < \infty$. Hence, one has for every $1\le u\le p/2$,
$$n^{-1+p(1-d/4)}\sum_{i=2}^{\lceil nt\rceil}\sum_{j=i+1}^{\lceil nt\rceil}\Big(\sigma_{x;i}^{p-2u} - (K_d\,n^{-1+d/4})^{(p-2u)/2}\Big)\,\sigma_{x;j}^{p-2u}\,\big(n^{-1+d/4}\phi_{d;j-i}\big)^{2u} \to 0.$$
Similarly, one has for every $1\le u\le p/2$,
$$n^{-1+p(1-d/4)}\sum_{i=2}^{\lceil nt\rceil}\sum_{j=i+1}^{\lceil nt\rceil}(K_d\,n^{-1+d/4})^{(p-2u)/2}\,\Big(\sigma_{x;j}^{p-2u} - (K_d\,n^{-1+d/4})^{(p-2u)/2}\Big)\,\big(n^{-1+d/4}\phi_{d;j-i}\big)^{2u} \to 0.$$
For every $1\le u\le p/2$ and any $M>0$,
$$n^{-1+p(1-d/4)}\sum_{i=2}^{\lceil nt\rceil}\sum_{j=i+1}^{i+M}(K_d\,n^{-1+d/4})^{p-2u}\,\big(n^{-1+d/4}\phi_{d;j-i}\big)^{2u} = K_d^{p-2u}\,\frac{\lceil nt\rceil-1}{n}\sum_{j=1}^{M}\phi_{d;j}^{2u} \to K_d^{p-2u}\,J_{d,2u}\,t$$
as $n\to\infty$ and $M\to\infty$.
Note that for every $1\le u\le p/2$ and $1\le i<j\le\lceil nt\rceil$,
$$\sigma_{x;i}^{p-2u}\sigma_{x;j}^{p-2u} = \Big(\sigma_{x;i}^{p-2u} - (K_d\,n^{-1+d/4})^{(p-2u)/2}\Big)\sigma_{x;j}^{p-2u} + (K_d\,n^{-1+d/4})^{(p-2u)/2}\Big(\sigma_{x;j}^{p-2u} - (K_d\,n^{-1+d/4})^{(p-2u)/2}\Big) + (K_d\,n^{-1+d/4})^{p-2u}.$$
Hence, by (59)–(62), one has for every $1\le u\le p/2$,
$$n^{-1+p(1-d/4)}\sum_{i=2}^{\lceil nt\rceil}\sum_{j=i+1}^{i+M}\sigma_{x;i}^{p-2u}\sigma_{x;j}^{p-2u}\,\big(n^{-1+d/4}\phi_{d;j-i}\big)^{2u} \to K_d^{p-2u}\,J_{d,2u}\,t$$
as $n\to\infty$ and $M\to\infty$.
as n and M . It follows from (42) that
n 1 + p ( 1 d / 4 ) j = 2 1 + M σ x ; i p 2 u σ x ; j p 2 u ( n 1 + d / 4 ϕ d ; j 1 ) 2 u 0 .
This, together with (48), (52) and (63), yields for every 1 u p / 2 ,
n 1 + p ( 1 d / 4 ) i = 1 n t j = i + 1 n t σ x ; i p σ x ; j p ρ x ; i j 2 u K d p J d , 2 u t
Therefore, by (41), (46) and (65), one has
$$n^{-1+p(1-d/4)}\,\mathrm{Var}\big(\Xi_p^n(U(\cdot,x))_t\big) \to K_d^p\left[\mu_{2p}-\mu_p^2 + \frac{p!\,p!}{2^{p-1}}\sum_{u=1}^{p/2}\frac{2^{2u}\,J_{d,2u}}{[(p/2-u)!]^2\,(2u)!}\right] t = \kappa_{d,p}\,t.$$
This proves (4). The proof of Theorem 1 is completed.  □
Proof of Corollary 1. 
Write
$$\begin{aligned} n^{-1+p(1-d/4)/2}\,\Xi_p^n(U(\cdot,x))_t - K_d^{p/2}\mu_p\,t &= n^{-1+p(1-d/4)/2}\Big(\Xi_p^n(U(\cdot,x))_t - \mathbb{E}\big[\Xi_p^n(U(\cdot,x))_t\big]\Big)\\ &\quad + \mu_p\,n^{-1+p(1-d/4)/2}\sum_{j=1}^{\lceil nt\rceil}\Big(\sigma_{x;j}^p - (K_d\,n^{-1+d/4})^{p/2}\Big) + K_d^{p/2}\mu_p\,\frac{\lceil nt\rceil - nt}{n}.\end{aligned}$$
Obviously, the third term of (67) tends to zero as $n\to\infty$. It follows from (43) (with $r=p$) and (45) that the second term of (67) tends to zero as $n\to\infty$. Thus, by (4), one has
$$\mathbb{E}\Big[\big|n^{-1+p(1-d/4)/2}\,\Xi_p^n(U(\cdot,x))_t - K_d^{p/2}\mu_p\,t\big|^2\Big] \to 0.$$
This proves (5).  □

4.2. Temporal CLTs for LKS-SPDEs

The following lemma is needed to prove Theorem 2.
Lemma 5. 
Let $X_1,\ldots,X_4$ be mean-zero, jointly normal random variables such that $\mathbb{E}[X_j^2]=1$ and $\rho_{ij} = \mathbb{E}[X_iX_j]$. Put $Z_j = X_j^p - \mathbb{E}[X_j^p]$. Then, for any $p\in\mathbb{N}_+$,
$$\Big|\mathbb{E}\prod_{j=1}^{4} Z_j\Big| \le c_{6,1}\Big(|\rho_{12}\,\rho_{34}| + \frac{1}{1-\rho_{12}^2}\max_{i\le 2<j}|\rho_{ij}|\Big)$$
whenever $|\rho_{12}|<1$. Moreover,
$$\Big|\mathbb{E}\prod_{j=1}^{4} Z_j\Big| \le c_{6,2}\max_{2\le j\le 4}|\rho_{1j}|.$$
Furthermore, there exists $\varepsilon>0$ such that
$$\Big|\mathbb{E}\prod_{j=1}^{4} Z_j\Big| \le c_{6,3}\max_{1\le i\ne j\le 4}\rho_{ij}^2$$
whenever $|\rho_{ij}|<\varepsilon$ for all $1\le i\ne j\le 4$.
Proof. 
Following the same lines as the proof of Lemma 3.3 in [11], with $h_j(X_j) = Z_j$, $1\le j\le 4$, we get Lemma 5 immediately.  □
Proposition 1. 
Fix $(\varepsilon,\vartheta)\in\mathbb{R}_+\times\mathbb{R}$ and $x\in\mathbb{R}^d$, and assume $d\in\{1,2,3\}$. Assume that $u_0\equiv 0$ and $\vartheta=0$ in (1). Fix $r\in\mathbb{N}_+$. Put
$$\Theta_r^n(U(\cdot,x))_t = n^{-1/2+r(1-d/4)/2}\sum_{i=1}^{\lceil nt\rceil}\big((\Delta U_{x;i})^r - \mu_r\,\sigma_{x;i}^r\big).$$
Then, for all $0\le s<t$ and all $n\in\mathbb{N}_+$,
$$\mathbb{E}\Big[\big|\Theta_r^n(U(\cdot,x))_t - \Theta_r^n(U(\cdot,x))_s\big|^4\Big] \le c_{6,4}\left(\frac{\lceil nt\rceil - \lceil ns\rceil}{n}\right)^2.$$
The sequence $\{\Theta_r^n(U(\cdot,x))\}$ is therefore relatively compact in the Skorohod space $D_{\mathbb{R}}[0,\infty)$.
Proof. 
We follow the method of Proposition 3.5 in [11] to prove (71). Let $S = \{j\in\mathbb{N}_+^4 : \lceil ns\rceil + 1 \le j_1\le\cdots\le j_4\le\lceil nt\rceil\}$. For $j\in S$ and $k\in\{1,2,3\}$, define $h_k = j_{k+1} - j_k$ and let $S_k = \{j\in S : h_k = \max\{h_1,h_2,h_3\}\}$. Define $N = \lceil nt\rceil - (\lceil ns\rceil + 1)$ and, for $i\in\{0,1,\ldots,N\}$, let $S_k^i = \{j\in S_k : \max\{h_1,h_2,h_3\} = i\}$. Further define $T_k = T_k^{i,\ell} = \{j\in S_k^i : \min\{h_1,h_2,h_3\} = \ell\}$ and $V_k^v = V_k^{i,\ell,v} = \{j\in T_k : \mathrm{med}\{h_1,h_2,h_3\} = v\}$, where med denotes the median function. For $j\in S$, define
$$\Lambda_{x;j} = \prod_{k=1}^{4}\big((\Delta U_{x;j_k})^r - \mu_r\,\sigma_{x;j_k}^r\big).$$
Observe that
$$\mathbb{E}\Big[\big|\Theta_r^n(U(\cdot,x))_t - \Theta_r^n(U(\cdot,x))_s\big|^4\Big] = n^{-2+2r(1-d/4)}\,\mathbb{E}\Big|\sum_{i=\lceil ns\rceil+1}^{\lceil nt\rceil}\big((\Delta U_{x;i})^r - \mu_r\,\sigma_{x;i}^r\big)\Big|^4 \le 4!\,n^{-2+2r(1-d/4)}\sum_{j\in S}\big|\mathbb{E}[\Lambda_{x;j}]\big| \le 4!\,n^{-2+2r(1-d/4)}\sum_{k=1}^{3}\sum_{j\in S_k}\big|\mathbb{E}[\Lambda_{x;j}]\big|,$$
and that
$$\sum_{j\in S_k}\big|\mathbb{E}[\Lambda_{x;j}]\big| = \sum_{i=0}^{N}\sum_{j\in S_k^i}\big|\mathbb{E}[\Lambda_{x;j}]\big| = \sum_{i=0}^{N}\sum_{\ell=0}^{\lceil i^{d/4}\rceil}\sum_{j\in T_k^{i,\ell}}\big|\mathbb{E}[\Lambda_{x;j}]\big| + \sum_{i=0}^{N}\sum_{\ell=\lceil i^{d/4}\rceil+1}^{i}\sum_{j\in T_k^{i,\ell}}\big|\mathbb{E}[\Lambda_{x;j}]\big| = \sum_{i=0}^{N}\sum_{\ell=0}^{\lceil i^{d/4}\rceil}\sum_{v=\ell}^{i}\sum_{j\in V_k^{i,\ell,v}}\big|\mathbb{E}[\Lambda_{x;j}]\big| + \sum_{i=0}^{N}\sum_{\ell=\lceil i^{d/4}\rceil+1}^{i}\sum_{v=\ell}^{i}\sum_{j\in V_k^{i,\ell,v}}\big|\mathbb{E}[\Lambda_{x;j}]\big|.$$
Let Z x ; k = σ x ; j k 1 Δ U x ; j k and
ξ x ; k = Z x ; k r E [ Z x ; k r ] = σ x ; j k r ( Δ U x ; j k r μ r σ x ; j k r ) .
Then
| E [ Λ x ; j ] | = k = 1 4 σ x ; j k r | E k = 1 4 ξ x ; k | .
By (47) and (49), one has for all k l N + ,
| E [ Δ U x ; k Δ U x ; l ] | c 6 , 5 n 1 + d / 4 | k l | d / 4 + 1 .
It follows from (42) and (75) that
| ρ x ; k l | = | E [ Z x ; k Z x ; l ] | = σ x ; j k 1 σ x ; j l 1 | E [ Δ U x ; j k Δ U x ; j l ] | c 6 , 6 | j k j l | d / 4 + 1 .
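The decay rate $|j_k-j_l|^{-(d/4+1)}$ is the increment-correlation decay of a fractional-Brownian-motion-type process with temporal index $H=(1-d/4)/2$, since $2H-2=-(d/4+1)$. As a hedged numerical illustration of that decay (computed for exact fBm increments, not from the LKS kernel estimates themselves), with an illustrative constant $2$ in place of $c_{6,6}$:

```python
def fbm_increment_cov(k, l, H, n):
    """Exact covariance of the increments B_{k/n} - B_{(k-1)/n} and
    B_{l/n} - B_{(l-1)/n} of fractional Brownian motion with Hurst index H,
    using the covariance R(s, t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    R = lambda s, t: 0.5 * (s**(2*H) + t**(2*H) - abs(t - s)**(2*H))
    s0, s1, t0, t1 = (k - 1)/n, k/n, (l - 1)/n, l/n
    return R(s1, t1) - R(s1, t0) - R(s0, t1) + R(s0, t0)

d = 2                   # spatial dimension, d in {1, 2, 3}
H = (1 - d/4) / 2       # temporal index suggested by the variance law t^{1-d/4}
n = 1000
var1 = fbm_increment_cov(500, 500, H, n)    # one-step variance, equals n^{-2H}
for gap in (2, 4, 8, 16, 32):
    cov = abs(fbm_increment_cov(500, 500 + gap, H, n))
    # polynomial decay |k - l|^{2H-2} = |k - l|^{-(d/4+1)}, as in the bound
    assert cov <= 2 * var1 * gap ** (2*H - 2)
```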
Suppose 0 i d / 4 . Fix v and let j V k v be arbitrary. If k = 1 , then i = max { h 1 , h 2 , h 3 } = h 1 = j 2 j 1 . If k = 3 , then i = max { h 1 , h 2 , h 3 } = h 3 = j 4 j 3 . In either case, by (69), (42), (74) and (76), one has
| E [ Λ x ; j ] | c 6 , 7 n 2 r ( 1 d / 4 ) i d / 4 + 1 c 6 , 7 1 ( v ) d / 4 + 1 + 1 i d / 4 + 1 n 2 r ( 1 d / 4 ) .
If k = 2 , then i = max { h 1 , h 2 , h 3 } = h 2 = j 3 j 2 and v = h 3 h 1 = ( j 4 j 3 ) ( j 2 j 1 ) . Hence, by (68), (42), (74) and (76),
| E [ Λ x ; j ] | c 6 , 8 1 ( v ) d / 4 + 1 + 1 i d / 4 + 1 n 2 r ( 1 d / 4 ) .
Now choose k k such that h k = . With k given, j is determined by j k . Since there are two possibilities for k and N + 1 possibilities for j k , | V k v | 2 ( N + 1 ) . Therefore,
= 0 i d / 4 v = i j V k v | E [ Λ x ; j ] | c 6 , 9 ( N + 1 ) = 0 i d / 4 v = i 1 ( v ) d / 4 + 1 + 1 i d / 4 + 1 n 2 r ( 1 d / 4 ) c 6 , 10 ( N + 1 ) = 0 i d / 4 1 d / 4 + 1 + 1 i d / 4 n 2 r ( 1 d / 4 ) c 6 , 11 ( N + 1 ) n 2 r ( 1 d / 4 ) .
For the second summation, suppose i d / 4 + 1 i . In this case, if j T k , then = min { h 1 , h 2 , h 3 } , so that by (42), (70), (74) and (76),
| E [ Λ x ; j ] | c 6 , 12 n 2 r ( 1 d / 4 ) 2 ( d / 4 + 1 ) .
Since v = i | V k v | 2 ( N + 1 ) i and 1 / 2 d / 4 < 1 , one has
= i d / 4 + 1 i v = i j V k v | E [ Λ x ; j ] | c 6 , 13 ( N + 1 ) i i d / 4 + 1 i n 2 r ( 1 d / 4 ) 2 ( d / 4 + 1 ) c 6 , 14 ( N + 1 ) i i d / 4 1 u 2 ( d / 4 + 1 ) d u n 2 r ( 1 d / 4 ) c 6 , 15 ( N + 1 ) n 2 r ( 1 d / 4 ) .
Thus, using (72), (73), (77) and (78), one has
n 2 + 2 r ( 1 d / 4 ) E | j = n s + 1 n t ( Δ U x ; j r μ r σ x ; j r ) | 4 c 6 , 16 i = 0 N ( N + 1 ) n 2 = c 6 , 16 n t n s n 2 ,
which is (71).
To show that a sequence of cadlag processes { F n } is relatively compact, it suffices to show that for each T > 1 , there exist constants β > 0 , C > 0 , and q > 1 such that
R F n ( t , h ) = E [ | F n ( t + h ) F n ( t ) | β | F n ( t ) F n ( t h ) | β ] C h q
for all n N , all t [ 0 , T ] and all h [ 0 , t ] . (See, e.g., Theorem 3.8.8 in [26].) Taking β = 2 and using (71) together with Hölder inequality gives
R Θ r n ( U ( · , x ) ) ( t , h ) c 6 , 17 n t + n h n t n n t n t n h n .
If n h < 1 / 2 , then the right-hand side of this inequality is zero. Assume n h 1 / 2 . Then
n t + n h n t n n h + 1 n 3 h .
The other factor is similarly bounded, so that R Θ r n ( U ( · , x ) ) ( t , h ) c 6 , 18 h 2 .  □
Proposition 2. 
Fix $(\varepsilon,\vartheta)\in\mathbb{R}^+\times\mathbb{R}$ and $x\in\mathbb{R}^d$, and assume $d\in\{1,2,3\}$. Assume that $u_0\equiv 0$ and $\vartheta=0$ in (1). Then, for any $0\le s<t$ and $r\in\mathbb{N}^+$,
$$\Theta_r^n(U(\cdot,x))_t-\Theta_r^n(U(\cdot,x))_s\;\xrightarrow{\mathcal{L}}\;\kappa_{d,r}^{1/2}\,|t-s|^{1/2}\,N$$
as $n\to\infty$, where $N$ is a standard normal random variable.
Proof. 
Let { n ( j ) } j = 1 be any sequence of natural numbers. We will prove that there exists a subsequence { n ( j m ) } such that Θ r n ( j m ) ( U ( · , x ) ) t Θ r n ( j m ) ( U ( · , x ) ) s converges in law to the given random variable.
For each m N + , choose n ( j m ) { n ( j ) } such that n ( j m ) > n ( j m 1 ) and n ( j m ) m 2 / d / 4 ( t s ) 1 . Let b = b ( m ) = n ( j m ) ( t s ) / m . For 0 k m , define u k = n ( j m ) s + k b , so that
Θ r n ( j m ) ( U ( · , x ) ) t Θ r n ( j m ) ( U ( · , x ) ) s = n ( j m ) 1 / 2 + r ( 1 d / 4 ) / 2 i = n ( j m ) s + 1 n ( j m ) t ( Δ U x ; i r μ r σ x ; i r ) = n ( j m ) 1 / 2 + r ( 1 d / 4 ) / 2 k = 1 m i = u k 1 + 1 u k ( Δ U x ; i r μ r σ x ; i r ) .
Let us now introduce the filtration
F t = σ { W ( A ) : A [ 0 , t ] × R d , λ ( A ) < } ,
where λ denotes Lebesgue measure on R d + 1 . Let τ k = n ( j m ) 1 u k 1 . For each pair ( i , k ) such that u k 1 < i u k , define
ξ x ; i , k = Δ U x ; i E [ Δ U x ; i | F τ k ] .
Note that ξ x ; i , k is F τ k + 1 -measurable and independent of F τ k . Recall that
U ( t , x ) = 0 t R d K t s ; x , y LKS ε , ϑ d W ( d s × d y ) .
Moreover, given constants 0 τ s t , one has
E [ U ( t , x ) | F τ ] = 0 τ R d K t s ; x , y LKS ε , ϑ d W ( d s × d y ) .
It follows from (80) and (81) that
U ( t + τ k , x ) E [ U ( t + τ k , x ) | F τ k ] = τ k t + τ k R d K t + τ k s ; x , y LKS ε , ϑ d W ( d s × d y ) .
This yields that { ξ x ; i , k } has the same law as { Δ U x ; i u k 1 } .
Now define σ x ; i , k 2 = E [ ξ x ; i , k 2 ] = σ x ; i u k 1 2 and
ζ x ; m , k = i = u k 1 + 1 u k ( ξ x ; i , k r μ r σ x ; i , k r ) ,
so that ζ x ; m , k , 1 k m , are independent and
Θ r n ( j m ) ( U ( · , x ) ) t Θ r n ( j m ) ( U ( · , x ) ) s = n ( j m ) 1 / 2 + r ( 1 d / 4 ) / 2 k = 1 m ζ x ; m , k + ϵ x ; m ,
where
ϵ x ; m = n ( j m ) 1 / 2 + r ( 1 d / 4 ) / 2 k = 1 m i = u k 1 + 1 u k ( ( Δ U x ; i r μ r σ x ; i r ) ( ξ x ; i , k r μ r σ x ; i , k r ) )
Since ξ x ; i , k and Δ U x ; i ξ x ; i , k = E [ Δ U x ; i | F τ k ] are independent, one has
σ x ; i 2 = E [ Δ U x ; i 2 ] = E [ ξ x ; i , k 2 ] + E [ | Δ U x ; i ξ x ; i , k | 2 ] = σ x ; i u k 1 2 + E [ | Δ U x ; i ξ x ; i , k | 2 ] .
This, together with (17), gives
E [ | Δ U x ; i ξ x ; i , k | 2 ] = σ x ; i 2 σ x ; i u k 1 2 c 6 , 19 n ( j m ) 1 + d / 4 ( i u k 1 ) d / 4 + 1 .
Thus, since Δ U x ; i ξ x ; i , k is Gaussian, by (40) and (84), one has
E [ | Δ U x ; i ξ x ; i , k | 4 ] c 6 , 20 n ( j m ) 2 + d / 2 ( i u k 1 ) d / 2 + 2 .
Note that (40) and (42) give E [ | Δ U x ; i | 4 r 4 ] c 6 , 21 σ x ; i 4 r 4 c 6 , 22 n ( j m ) ( 1 + d / 4 ) ( 2 r 2 ) and E [ | ξ x ; i , k | 4 r 4 ] c 6 , 23 σ x ; i u k 1 4 r 4 c 6 , 24 n ( j m ) ( 1 + d / 4 ) ( 2 r 2 ) . By Lagrange mean value theorem,
| Δ U x ; i r ξ x ; i , k r | c 6 , 25 ( | Δ U x ; i | r 1 + | ξ x ; i , k | r 1 ) | Δ U x ; i ξ x ; i , k | .
Thus, by (85) and Hölder inequality,
E [ | Δ U x ; i r ξ x ; i , k r | 2 ] c 6 , 26 ( E [ | Δ U x ; i | 4 r 4 ] + E [ | ξ x ; i , k | 4 r 4 ] ) 1 / 2 ( E [ | Δ U x ; i ξ x ; i , k | 4 ] ) 1 / 2 c 6 , 27 n ( j m ) r ( 1 d / 4 ) ( i u k 1 ) d / 4 + 1 .
Similarly, by (84) and Lagrange mean value theorem,
| σ x ; i r σ x ; i , k r | c 6 , 28 ( | σ x ; i | r 2 + | σ x ; i , k | r 2 ) | σ x ; i 2 σ x ; i , k 2 | c 6 , 29 n ( j m ) r ( 1 d / 4 ) / 2 ( i u k 1 ) d / 4 + 1 .
Therefore, by (86), (87) and Hölder inequality,
E [ | ϵ x ; m | ] n ( j m ) 1 / 2 + r ( 1 d / 4 ) / 2 k = 1 m j = u k 1 + 1 u k ( ( E [ | Δ U x ; i r ξ x ; i , k r | 2 ] ) 1 / 2 + μ r | σ x ; j r σ x ; j , k r | ) c 6 , 30 n ( j m ) 1 / 2 k = 1 m i = u k 1 + 1 u k ( i u k 1 ) ( d / 4 + 1 ) / 2 = c 6 , 31 n ( j m ) 1 / 2 k = 1 m i = 1 u k u k 1 i ( d / 4 + 1 ) / 2 .
Since u k u k 1 b , this gives
E [ | ϵ x ; m | ] c 6 , 32 n ( j m ) 1 / 2 m b ( 1 d / 4 ) / 2 = c 6 , 32 m ( d / 4 + 1 ) / 2 n ( j m ) d / 4 / 2 ( t s ) ( 1 d / 4 ) / 2 .
But since n ( j m ) was chosen so that n ( j m ) m 2 / d / 4 ( t s ) 1 , one has E [ | ϵ x ; m | ] c 6 , 33 m ( 1 d / 4 ) / 2 | t s | 1 / 2 and ϵ x ; m 0 in L 1 and in probability. Therefore, by (82), we need only to show that
n ( j m ) 1 / 2 + r ( 1 d / 4 ) / 2 k = 1 m ζ x ; m , k L κ d , r 1 / 2 | t s | 1 / 2 N
in order to complete the proof.
For this, we will use the Lindeberg–Feller theorem (see, e.g., Theorem 2.4.5 in [27]), which states the following: for each $m$, let $\zeta_{x;m,k}$, $1\le k\le m$, be independent random variables with $E[\zeta_{x;m,k}]=0$. Suppose:
(a)
$n(j_m)^{-1+r(1-d/4)}\sum_{k=1}^{m}E[\zeta_{x;m,k}^2]\to\nu^2$, and
(b)
for all $\delta>0$, $\lim_{m\to\infty}n(j_m)^{-1+r(1-d/4)}\sum_{k=1}^{m}E\big[|\zeta_{x;m,k}|^2\,\mathbf{1}\{n(j_m)^{-1/2+r(1-d/4)/2}|\zeta_{x;m,k}|>\delta\}\big]=0$.
Then $n(j_m)^{-1/2+r(1-d/4)/2}\sum_{k=1}^{m}\zeta_{x;m,k}\xrightarrow{\mathcal{L}}\nu N$ as $m\to\infty$.
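Conditions (a) and (b) are the standard variance-convergence and negligible-tails hypotheses for triangular arrays. As a quick, self-contained illustration of the theorem itself (with a toy array of centered uniforms, not the $\zeta_{x;m,k}$ of the proof), one may check the Gaussian limit by simulation:

```python
import numpy as np

# Toy triangular array: row m has m independent centered uniforms, so the
# Lindeberg condition holds trivially (bounded summands) and the row sums,
# normalized by sqrt(m), should be close in law to N(0, 1/12).
rng = np.random.default_rng(0)
m, reps = 400, 20000
rows = rng.uniform(-0.5, 0.5, size=(reps, m))   # mean 0, variance 1/12
sums = rows.sum(axis=1) / np.sqrt(m)            # condition (a): variance 1/12

assert abs(sums.mean()) < 0.02
assert abs(sums.var() - 1/12) < 0.01
# symmetry/Gaussianity check: empirical P(S <= 0) close to 1/2
assert abs((sums <= 0).mean() - 0.5) < 0.02
```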
To verify these conditions, recall that { ξ x ; i , k } and { Δ U x ; i u k 1 } have the same law, so that
E [ | ζ x ; m , k | 4 ] = n ( j m ) 2 + 2 r ( 1 d / 4 ) E | i = 1 u k u k 1 ( Δ U x ; i r μ r σ x ; i r ) | 4 .
Hence, by (71),
n ( j m ) 2 + 2 r ( 1 d / 4 ) E [ | ζ x ; m , k | 4 ] c 6 , 34 ( u k u k 1 ) 2 n ( j m ) 2 .
The Jensen inequality now gives n ( j m ) 1 + r ( 1 d / 4 ) k = 1 m E [ | ζ x ; m , k | 2 ] c 6 , 35 m b n ( j m ) 1 = c 6 , 35 ( t s ) , so that by passing to a subsequence, we may assume that (a) holds for some ν ≥ 0 .
For (b), let δ > 0 be arbitrary. Then
n ( j m ) 1 + r ( 1 d / 4 ) k = 1 m E [ | ζ x ; m , k | 2 I { n ( j m ) 1 / 2 + r ( 1 d / 4 ) / 2 | ζ x ; m , k | > δ } ] δ 2 n ( j m ) 2 + 2 r ( 1 d / 4 ) k = 1 m E [ | ζ x ; m , k | 4 ] c 6 , 36 δ 2 m b 2 n ( j m ) 2 = c 6 , 36 δ 2 m 1 ( t s ) 2 ,
which tends to zero as m .
It therefore follows that n ( j m ) 1 / 2 + r ( 1 d / 4 ) / 2 k = 1 m ζ x ; m , k L ν N as m → ∞ , and it remains only to show that ν = κ d , r 1 / 2 | t s | 1 / 2 . For this, observe that the continuous mapping theorem implies that | Θ r n ( j m ) ( U ( · , x ) ) t Θ r n ( j m ) ( U ( · , x ) ) s | 2 L ν 2 N 2 . By the Skorohod representation theorem, we may assume that the convergence is a.s. By Proposition 1, the family | Θ r n ( j m ) ( U ( · , x ) ) t Θ r n ( j m ) ( U ( · , x ) ) s | 2 is uniformly integrable. Hence, | Θ r n ( j m ) ( U ( · , x ) ) t Θ r n ( j m ) ( U ( · , x ) ) s | 2 ν 2 N 2 in L 1 , which implies E [ | Θ r n ( j m ) ( U ( · , x ) ) t Θ r n ( j m ) ( U ( · , x ) ) s | 2 ] ν 2 . But by Theorem 1, E [ | Θ r n ( j m ) ( U ( · , x ) ) t Θ r n ( j m ) ( U ( · , x ) ) s | 2 ] κ d , r | t s | , so ν = κ d , r 1 / 2 | t s | 1 / 2 and the proof is complete.  □
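For intuition about the limiting variance $\kappa_{d,r}|t-s|$, recall the classical analogue for standard Brownian motion with $r=2$: $\sqrt{n}\big(\sum_{i\le nt}(\Delta B_i)^2 - t\big)$ converges in law to $N(0,2t^2)$ (with $t=1$ below). The sketch is a Monte Carlo check of that well-known special case only, not of the LKS solution itself:

```python
import numpy as np

# Quadratic-variation CLT for standard Brownian motion on [0, t]:
# sqrt(n) * (sum of squared increments - t) is approximately N(0, 2 t^2).
rng = np.random.default_rng(1)
n, t, reps = 256, 1.0, 20000
incs = rng.normal(0.0, np.sqrt(t / n), size=(reps, n))   # BM increments
stat = np.sqrt(n) * ((incs**2).sum(axis=1) - t)

assert abs(stat.mean()) < 0.06          # limit is centered
assert abs(stat.var() - 2 * t**2) < 0.15  # limiting variance 2 t^2
```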
Proof of Theorem 2. 
It is sufficient to prove (6) for the even p case since the odd p case can be proved similarly. Let { n ( j ) } j = 1 be any sequence of natural numbers. By Proposition 1, the sequence { ( U ( · , x ) , Θ p n ( j ) ( U ( · , x ) ) ) } is relatively compact. Therefore, there exists a subsequence { n ( j k ) } and a cadlag process Y such that ( U ( · , x ) , Θ p n ( j k ) ( U ( · , x ) ) ) L ( U , Y ) . Fix 0 < s 1 < s 2 < < s < s < t . With notation as in Proposition 2, let
ζ x ; n ( j k ) = n ( j k ) 1 / 2 + p ( 1 d / 4 ) / 2 i = n ( j k ) s + 2 n ( j k ) t ( ξ x ; i , k p μ p σ x ; i , k p ) ,
and define
η x ; n ( j k ) = Θ p n ( j k ) ( U ( · , x ) ) t Θ p n ( j k ) ( U ( · , x ) ) s ζ x ; n ( j k ) .
As in the proof of Proposition 2, η x ; n ( j k ) 0 in probability. It therefore follows that
( Θ p n ( j k ) ( U ( · , x ) ) s 1 , , Θ p n ( j k ) ( U ( · , x ) ) s , ζ x ; n ( j k ) ) L ( Y ( s 1 ) , , Y ( s ) , Y ( t ) Y ( s ) ) .
Note that F ( n ( j k ) s + 1 ) n ( j k ) 1 and ζ x ; n ( j k ) are independent. Hence, ( Θ p n ( j k ) ( U ( · , x ) ) s 1 , , Θ p n ( j k ) ( U ( · , x ) ) s ) and ζ x ; n ( j k ) are independent, which implies Y ( t ) Y ( s ) and ( Y ( s 1 ) , , Y ( s ) ) are independent. This yields that the process Y has independent increments.
By Proposition 2, the increment Y ( t ) Y ( s ) is normally distributed with mean zero and variance κ d , p | t s | . Moreover, U ( 0 , x ) = 0 since Θ p n ( U ( · , x ) ) 0 = 0 for all n. Hence, Y is equal in law to κ d , p 1 / 2 B , where B is a standard Brownian motion. It remains only to show that U and B are independent.
Fix 0 < s 1 < s 2 < < s T and x R d . Let Z x = ( U ( s 1 , x ) , . . . , U ( s , x ) ) T and Σ x = E [ Z x Z x T ] . It is easy to see that Σ x is invertible. Hence, we may define the vectors v x ; j R by v x ; j = E [ Z x Δ U x ; j ] , and w x ; j = Σ x 1 v x ; j . Let ξ x ; j = Δ U x ; j w x ; j T Z x , so that ξ x ; j and Z x are independent.
Define
Θ ˜ p n ( U ( · , x ) ) t = n 1 / 2 + p ( 1 d / 4 ) / 2 j = 1 n t ( ξ x ; j p μ p σ x ; j p ) .
Then
| Θ p n ( U ( · , x ) ) t Θ ˜ p n ( U ( · , x ) ) t | n 1 / 2 + p ( 1 d / 4 ) / 2 | j = 1 n t ( Δ U x ; j p ξ x ; j p ) | .
By (40), binomial expansion and Hölder inequality,
E sup 0 t T | Θ p n ( U ( · , x ) ) t Θ ˜ p n ( U ( · , x ) ) t | c 6 , 37 n 1 / 2 + p ( 1 d / 4 ) / 2 ν = 1 p j = 1 n T ( E [ Δ U x ; j 2 p 2 ν ] ) 1 / 2 ( E [ ( w x ; j T Z x ) 2 ν ] ) 1 / 2 c 6 , 38 ν = 1 p n 1 / 2 + ν ( 1 d / 4 ) / 2 j = 1 n T ( E [ ( w x ; j T Z x ) 2 ν ] ) 1 / 2 c 6 , 39 max 1 i ν = 1 p n 1 / 2 + ν ( 1 d / 4 ) / 2 j = 1 n T | E [ U ( s i , x ) Δ U x ; j ] | ν .
Note that by (42) and Hölder inequality, one has | E [ U ( s i , x ) Δ U x ; j ] | c 6 , 40 σ x ; j c 6 , 41 n ( 1 d / 4 ) / 2 for all 1 i and 1 j n t , and that by (15) and Lagrange mean value theorem, for any 1 i and 1 j n t ,
E [ U ( s i , x ) Δ U x ; j ] = K d ( ( s i + t j ) 1 d / 4 ( s i + t j 1 ) 1 d / 4 ( s i t j ) 1 d / 4 + ( s i t j 1 ) 1 d / 4 ) = K d ( 1 d / 4 ) n ( ( s i + ( j ζ 1 ) / n ) d / 4 + ( s i ( j ζ 2 ) / n ) d / 4 ) 2 K d ( 1 d / 4 ) n ( s i ( j ζ 2 ) / n ) d / 4 ,
where ζ 1 , ζ 2 ( 0 , 1 ) . Then, for any 1 i and 1 ν 2 p ,
n 1 / 2 + ν ( 1 d / 4 ) / 2 j = 1 n T | E [ U ( s i , x ) Δ U x ; j ] | ν c 6 , 42 n 1 / 2 ν ( 1 + d / 4 ) / 2 1 n j = 1 n T ( s i ( j ζ 2 ) / n ) d / 4 ,
which tends to zero as n → ∞ since ∫ 0 T ( s i u ) d / 4 d u < ∞ . Thus, ( Z x , Θ ˜ p n ( U ( · , x ) ) s 1 , , Θ ˜ p n ( U ( · , x ) ) s d ) L ( Z x , κ d , p 1 / 2 B ( s 1 ) , , κ d , p 1 / 2 B ( s ) ) . Since Z x and Θ ˜ p n ( U ( · , x ) ) are independent, this gives that U and B are independent.
We now can complete the proof. Note that by (43) and (44),
max 0 t T | 1 n j = 1 n t ( n p ( 1 d / 4 ) / 2 Δ U x ; j p K d p / 2 μ p ) Θ p n ( U ( · , x ) ) t | μ p n 1 / 2 + p ( 1 d / 4 ) / 2 j = 1 n T | σ x ; j p ( K d n 1 + d / 4 ) p / 2 | 0 .
This finishes the proof.  □

5. Conclusions

In this paper, we have shown that the realized power variations in time of the fourth order LKS-SPDEs and their gradient, driven by space–time white noise in one-to-three dimensional spaces, have infinite quadratic variation and dimension-dependent Gaussian asymptotic distributions. We are concerned with the fluctuation behavior of the sample paths of this class of equations and their gradient, via a delicate analysis of the realized variations, and our results complement Allouba's earlier works on the spatio-temporal Hölder regularity of LKS-SPDEs and their gradient. These asymptotic distributions are expressed in terms of the parameters of the problem, and may be used to analyze how the fluctuation behavior depends on those parameters.

Author Contributions

Conceptualization, W.W.; methodology and formal analysis, all authors; writing—original draft preparation, W.W.; writing—review and editing, all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (11671115) and Natural Science Foundation of Zhejiang Province of China under grant No. LY20A010020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to express their deep gratitude to a referee for his/her valuable comments on an earlier version, which improved the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
SPDE: Stochastic partial differential equation
LKS: Linearized Kuramoto–Sivashinsky
SIE: Stochastic integral equation
FBM: Fractional Brownian motion
BBM: Bifractional Brownian motion

References

  1. Allouba, H. L-Kuramoto–Sivashinsky SPDEs in one-to-three dimensions: L-KS kernel, sharp Hölder regularity, and Swift-Hohenberg law equivalence. J. Differ. Equ. 2015, 259, 6851–6884. [Google Scholar] [CrossRef]
  2. Allouba, H. A Brownian-time excursion into fourth-order PDEs, linearized Kuramoto–Sivashinsky, and BTPSPDEs on R+×Rd. Stoch. Dyn. 2006, 6, 521–534. [Google Scholar] [CrossRef]
  3. Allouba, H. A linearized Kuramoto–Sivashinsky PDE via an imaginary-Brownian-time-Brownian-angle process. C. R. Math. Acad. Sci. Paris 2003, 336, 309–314. [Google Scholar] [CrossRef] [Green Version]
  4. Allouba, H.; Xiao, Y. L-Kuramoto–Sivashinsky SPDEs vs. time-fractional SPIDEs: Exact continuity and gradient moduli, 1/2-derivative criticality, and laws. J. Differ. Equ. 2017, 263, 1552–1610. [Google Scholar] [CrossRef] [Green Version]
  5. Duan, J.; Wei, W. Effective Dynamics of Stochastic Partial Differential Equations; Elsevier: Amsterdam, The Netherlands, 2014. [Google Scholar]
  6. Temam, R. Infinite-Dimensional Dynamical Systems in Mechanics and Physics, 2nd ed.; Springer: New York, NY, USA, 1997. [Google Scholar]
  7. Allouba, H. Brownian-time processes: The PDE connection II and the corresponding Feynman-Kac formula. Trans. Am. Math. Soc. 2002, 354, 4627–4637. [Google Scholar] [CrossRef] [Green Version]
  8. Allouba, H.; Zheng, W. Brownian-time processes: The PDE connection and the half-derivative generator. Ann. Probab. 2001, 29, 1780–1795. [Google Scholar]
  9. Allouba, H. Time-fractional and memoryful Δ2k SIEs on R+ × Rd: How far can we push white noise? Ill. J. Math. 2013, 57, 919–963. [Google Scholar] [CrossRef]
  10. Allouba, H. Brownian-time Brownian motion SIEs on R+ × Rd: Ultra regular direct and lattice-limits solutions and fourth order SPDEs links. Discret. Contin. Dyn. Syst. 2013, 33, 413–463. [Google Scholar] [CrossRef]
  11. Swanson, J. Variations of the solution to a stochastic heat equation. Ann. Probab. 2007, 35, 2122–2159. [Google Scholar] [CrossRef] [Green Version]
  12. Tudor, C.A. Analysis of Variations for Self-Similar Processes-A Stochastic Calculus Approach; Springer: Cham, Switzerland, 2013. [Google Scholar]
  13. Nourdin, I. Asymptotic behavior of weighted quardratic cubic variation of fractional Brownian motion. Ann. Probab. 2008, 36, 2159–2175. [Google Scholar] [CrossRef] [Green Version]
  14. Dobrushin, R.L.; Major, P. Non-central limit theorems for nonlinear functionals of Gaussian fields. Z. Wahrsch. Verw. Geb. 1979, 50, 27–52. [Google Scholar] [CrossRef]
  15. Taqqu, M.S. Convergence of integrated processes of arbitrary Hermite rank. Z. Wahrsch. Verw. Geb. 1979, 50, 53–83. [Google Scholar] [CrossRef]
  16. Breuer, P.; Major, P. Central limit theorems for nonlinear functionals of Gaussian fields. J. Multivar. Anal. 1983, 13, 425–441. [Google Scholar] [CrossRef] [Green Version]
  17. Giraitis, L.; Surgailis, D. CLT and other limit theorems for functionals of Gaussian processes. Z. Wahrsch. Verw. Geb. 1985, 70, 191–212. [Google Scholar] [CrossRef]
  18. Corcuera, J.M.; Nualart, D.; Woerner, J.H.C. Power variation of some integral fractional processes. Bernoulli 2006, 12, 713–735. [Google Scholar] [CrossRef]
  19. Tudor, C.A.; Xiao, Y. Sample path properties of the solution to the fractional-colored stochastic heat equation. Stoch. Dyn. 2017, 17, 1750004. [Google Scholar] [CrossRef] [Green Version]
  20. Pearson, K.; Young, A.W. On the product-moments of various orders of the normal correlation surface of two variates. Biometrika 1918, 12, 86–92. [Google Scholar] [CrossRef]
  21. Fang, K.T.; Kotz, S.; Ng, K.W. Symmetric Multivariate and Related Distribution; Chapman and Hall Ltd.: London, UK, 1990. [Google Scholar]
  22. Houdré, C.; Villa, J. An example of infinite dimensional quasi-helix. In Stochastic Models; Mexico City, Mexico; Providence, RI, USA, 2003; Volume 336, pp. 195–201. Available online: https://www.researchgate.net/profile/Jose_Morales14/publication/279400918_An_Example_of_Inflnite_Dimensional_QuasiHelix/links/543d11ca0cf2c432f7424726/An-Example-of-Inflnite-Dimensional-QuasiHelix.pdf (accessed on 8 October 2020).
  23. Mueller, C.; Tribe, R. Hitting probabilities of a random string. Electron. J. Probab. 2002, 7, 10–29. [Google Scholar]
  24. Wu, D.; Xiao, Y. Fractal Properties of Random String Processes; IMS Lecture Notes Monograph Series High Dimens, Probability; Institute of Mathematical Statistics: Beachwood, OH, USA, 2006; Volume 51, pp. 128–147. [Google Scholar]
  25. Mueller, C.; Wu, Z. Erratum: A connection between the stochastic heat equation and fractional Brownian motion and a simple proof of a result of Talagrand. Electron. Commun. Probab. 2012, 17, 10. [Google Scholar] [CrossRef]
  26. Ethier, S.N.; Kurtz, T.G. Markov Processes; Wiley: New York, NY, USA, 1986. [Google Scholar]
  27. Durrett, R. Probability: Theory and Examples, 2nd ed.; Duxbury Press: Belmont, CA, USA, 1996. [Google Scholar]
Share and Cite

MDPI and ACS Style

Wang, W.; Wang, D. Asymptotic Distributions for Power Variations of the Solutions to Linearized Kuramoto–Sivashinsky SPDEs in One-to-Three Dimensions. Symmetry 2021, 13, 73. https://doi.org/10.3390/sym13010073