Perturbation of an alpha-stable type stochastic process by a pseudo-gradient

We consider the Markov process defined by a pseudo-differential operator of order $1<\alpha<2$ as its generator. Using a pseudo-gradient operator, that is, the operator defined by the symbol $i\lambda|\lambda|^{\beta-1}$ with some $0<\beta<\alpha$, we construct a perturbation of the Markov process by this pseudo-gradient with a multiplier that is integrable to a sufficiently large power. Such a perturbation defines a family of evolution operators, whose properties are investigated.


Introduction
Let d be some fixed positive integer. By R^d we denote the real d-dimensional Euclidean space. As usual, we denote by (·, ·) the inner product and by |·| the norm in R^d (the latter notation is also used for the absolute value of a real number and the modulus of a complex number).
Let us consider a family of pseudo-differential operators (A(t, x))_{t≥0, x∈R^d} defined by the symbols (a(t, x, λ))_{λ∈R^d} for every t ≥ 0, x ∈ R^d. That is,

A(t, x)f(x) = (2π)^{−d} ∫_{R^d} e^{i(λ,x)} a(t, x, λ) Ff(λ) dλ,

where F is the Fourier transform of the function f: Ff(λ) = ∫_{R^d} f(x)e^{−i(λ,x)} dx, λ ∈ R^d. Note that the function a might be complex-valued.
We assume the conditions from [5, Ch. 4] on the function a and the operator A (we denote by Lf(r, ·)(x) the result of applying the operator L to f(r, x) as a function of x):
A_1) The function a is homogeneous of degree α ∈ (1, 2) with respect to the variable λ, and Re a(t, x, λ) ≥ a_0 > 0 for all t ≥ 0 and all x, λ ∈ R^d with |λ| = 1.
A_3) In the representation of the symbol a, the function Ω(t, x, ·) is even and non-negative.
Theorem 4.3 from [5] (see also Theorem 4.1 there) states that there exists a bounded non-terminating strict Markov process (x(t))_{t≥0} without discontinuities of the second kind such that the fundamental solution (g(s, x, t, y))_{0≤s<t, x,y∈R^d} of the corresponding equation is its transition probability density. The function g can be constructed by the parametrix method (see [5, Sec. 4.1.4]).
If the function a is defined by the equality a(t, x, λ) = c|λ|^α, where c > 0 is some constant, the corresponding Markov process is an isotropic α-stable process. In this case the function g can be represented by the equality

g(s, x, t, y) = (2π)^{−d} ∫_{R^d} e^{i(λ, y−x)} e^{−c(t−s)|λ|^α} dλ.

The operator A (which does not depend on t and x) is the generator of this process. This is the simplest example of the processes considered here; for this reason, we call the process (x(t))_{t≥0} an α-stable type process.
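As a numerical illustration (not part of the original argument), the one-dimensional isotropic α-stable density can be evaluated by quadrature of the inverse Fourier integral of exp(−ct|λ|^α); for α = 2 this must reduce to the Gaussian kernel. All function names and parameter values below are our own.

```python
import numpy as np

def trapz(vals, h):
    # trapezoidal rule on a uniform grid (kept local for NumPy-version safety)
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

def stable_density_1d(x, t, alpha, c=1.0, lam_max=60.0, n=20_001):
    """d = 1 isotropic alpha-stable transition density: the inverse Fourier
    transform of exp(-c * t * |lam|**alpha), computed by quadrature."""
    lam = np.linspace(0.0, lam_max, n)
    f = np.cos(lam * x) * np.exp(-c * t * lam**alpha)
    return trapz(f, lam[1] - lam[0]) / np.pi

t, c = 0.7, 1.0
# alpha = 2 must give the Gaussian kernel (4*pi*c*t)**(-1/2) * exp(-x^2/(4*c*t))
x = 0.3
num = stable_density_1d(x, t, 2.0, c)
exact = np.exp(-x**2 / (4 * c * t)) / np.sqrt(4 * np.pi * c * t)
assert abs(num - exact) < 1e-5

# a genuinely stable case: the alpha = 1.5 density is positive and unimodal
p0, p1, p4 = (stable_density_1d(xi, t, 1.5, c) for xi in (0.0, 1.0, 4.0))
assert p0 > p1 > p4 > 0.0
```

For α < 2 no closed form is available in general, which is why the parametrix construction of g is needed in the non-constant-coefficient case.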
Choosing some 0 < β ≤ 1, let us denote the β-order "pseudo-gradient" by ∇^β, i.e., the operator defined by the symbol (iλ|λ|^{β−1})_{λ∈R^d}. Note that ∇^β is a vector-valued operator. In the case β = 1 it is the ordinary gradient. We restrict ourselves to the case 0 < β < 1 in this paper.
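Since ∇^β is defined through its symbol, it can be applied numerically via the Fourier transform. The following sketch (all names and grid parameters are our own, for illustration only) applies the one-dimensional symbol iλ|λ|^{β−1} with the FFT and checks that β = 1 reproduces the ordinary gradient of a Gaussian.

```python
import numpy as np

def pseudo_gradient_1d(f_vals, dx, beta):
    """Apply the operator with symbol i*lam*|lam|**(beta - 1) via FFT (d = 1)."""
    n = f_vals.size
    lam = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    sym = np.zeros(n, dtype=complex)
    nz = lam != 0.0
    # the symbol vanishes at lam = 0; set it explicitly to avoid 0**negative
    sym[nz] = 1j * lam[nz] * np.abs(lam[nz]) ** (beta - 1.0)
    return np.real(np.fft.ifft(sym * np.fft.fft(f_vals)))

x = np.linspace(-20.0, 20.0, 4096, endpoint=False)
f = np.exp(-x**2 / 2)

# beta = 1 is the ordinary gradient: (d/dx) exp(-x^2/2) = -x exp(-x^2/2)
grad = pseudo_gradient_1d(f, x[1] - x[0], 1.0)
assert np.max(np.abs(grad + x * f)) < 1e-6

# for 0 < beta < 1 the operator maps even functions to odd ones
half = pseudo_gradient_1d(f, x[1] - x[0], 0.5)
assert abs(half[2048]) < 1e-6   # value at x = 0 vanishes by oddness
```

The same spectral recipe works in any dimension, with the vector symbol iλ|λ|^{β−1} applied componentwise.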
Our goal is to construct a perturbation of the considered process using the operator (b, ∇^β), where b is some measurable R^d-valued function. By a perturbation of the Markov process (x(t))_{t≥0} we understand the construction of a two-parameter family of operators (T_st)_{0≤s<t}, defined on the set C_b(R^d) of real-valued continuous bounded functions on R^d, such that for every ϕ ∈ C_b(R^d) and t > 0 the function u(s, x, t) = T_st ϕ(x) satisfies, in some sense, the corresponding Cauchy problem (it is written down explicitly in the last section).

The problem of perturbation of Markov processes was and remains in the focus of attention of many researchers. The diffusion case, that is, α = 2, was considered by M. I. Portenko (see [13, 14] and the references therein). The perturbation operator there is of the form (b, ∇), where the function b belongs to some L_p-space of functions or is a generalized function of delta-function type. In [18], a parabolic equation of the type ∇(A∇u) + B∇u − u_t = 0 was considered. The case α < 2 was considered in the works of K. Bogdan and T. Jakubowski [2, 6], who studied such a perturbation with the function b from a Kato class. S. I. Podolynny and M. I. Portenko [12] and M. I. Portenko [15, 16, 17] investigated this perturbation with the function b from L_p. In [7], an α-stable process was perturbed by the gradient operator with a multiplier b satisfying a certain integral space-time condition. Y. Maekawa and H. Miura were concerned with non-local parabolic equations in the presence of a divergence-free drift term in [9]. J.-U. Loebus and M. I. Portenko [8] perturbed the infinitesimal operator of a one-dimensional symmetric α-stable process using the operator (qδ_0, ∂^{α−1}), where ∂^{α−1} is some pseudo-differential operator of order α − 1.
Perturbation of an α-stable process by an operator of fractional-Laplacian type was considered in [4]. Results for α ∈ (1, 2) and perturbation operators of the type (b, ∇^{α−1}) can be found in [10, 11]. We studied the case of an α-stable process and perturbation operators (b, ∇^β) with an R^d-valued time-independent function b from L_p(R^d) and 0 < β < α in [3].
This article is structured as follows. The next section is devoted to some auxiliary facts. The perturbation equation is solved in Section 3. In Section 4, we study some properties of the corresponding two-parameter evolution family of operators. The last section is devoted to constructing a solution to the corresponding Cauchy problem.
We will not use different notations for constants when the particular value is not important; we simply write C. If we need to emphasize the dependence of a constant C on a parameter π, we write C_π.

Auxiliaries
First of all, let us note that for 0 < β < 1 the operator ∇^β can be represented in the following integral form:

∇^β f(x) = c_{d,β} ∫_{R^d} (f(x + z) − f(x)) z|z|^{−d−β−1} dz,   (1)

where c_{d,β} > 0 is a normalizing constant. Moreover, computing the symbol of the right-hand side is the way to prove (1).
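In dimension one, the symbol computation behind representation (1) can be sketched as follows (the display is our reconstruction, not quoted from the paper; c_β denotes the normalizing constant):

```latex
\begin{aligned}
\int_{\mathbb{R}}\bigl(e^{i\lambda z}-1\bigr)\,\frac{\operatorname{sgn} z}{|z|^{1+\beta}}\,dz
  &= 2i\int_{0}^{\infty}\frac{\sin(\lambda z)}{z^{1+\beta}}\,dz
   = 2i\,\operatorname{sgn}(\lambda)\,|\lambda|^{\beta}\int_{0}^{\infty}\frac{\sin u}{u^{1+\beta}}\,du
   = c_{\beta}^{-1}\, i\lambda|\lambda|^{\beta-1},\\
c_{\beta}^{-1} &= 2\int_{0}^{\infty}\frac{\sin u}{u^{1+\beta}}\,du \in (0,\infty)
  \qquad (0<\beta<1).
\end{aligned}
```

The real part of e^{iλz} − 1 is even in z and cancels against the odd kernel; the remaining integral converges at 0 because sin(λz) ~ λz, and at infinity because 1 + β > 1. The d-dimensional case is analogous with the kernel z|z|^{−d−β−1}.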
Next, we will need some auxiliary statements. The following lemma was proved in [5] (see Lemma 1.11 there); it will be used frequently in this article.
The statement of the following lemma can be obtained from the results of [5, Ch. 4].
Lemma 2. If assumptions A_1)–A_3) hold, then the transition probability density of the process (x(t))_{t≥0} has the following properties (0 ≤ s < t ≤ T, x, y ∈ R^d): the function g is continuously differentiable with respect to x ∈ R^d, and the corresponding estimates hold, where ∂^k denotes a derivative of order k. Here the positive constants N_{k,T} and N_{β,T} may depend on T.
Proof. Let us note that the function g can be constructed by the parametrix method (see [5, Th. 4.1]). That is, it can be represented by the equality g(s, x, t, y) = g_0(s, x, t, y) + h(s, x, t, y), where h is expressed through g_0 and a function Φ determined by some integral equation. Theorem 4.1 cited above states that the corresponding inequalities hold. The function g_0 satisfies the inequality (see [5, eq. (4.1.25)]) in which ∂^k denotes a derivative of integer order 0 ≤ k ≤ N − 2d − 1 (the constant N is defined in assumption A_2)), and N_k > 0 are some constants. Note that k can be at least 0, 1, or 2.
Using (2), we can obtain the following inequality, valid for k = 0 or 1, where C_k > 0 are some constants. Therefore (using (7)) the required bound holds for all 0 ≤ s < t ≤ T, x, y ∈ R^d and every T > 0. Here the constants C_{k,T} > 0 depend on T.
To prove inequality (4), we use representation (1) of the operator ∇^β (recall that 0 < β < 1). So, since a(t, x, ·) is a homogeneous function, the corresponding bound is valid for all 0 ≤ s < t, x, y ∈ R^d with some positive constant C_β. Inequalities (7) allow us to state that the integral converges for fixed 0 ≤ s < t and x, y ∈ R^d (here ε > 0 is a small enough constant, and by B_ε(0) we denote the ball of radius ε centered at the point 0). The right-hand sides of these inequalities are integrable as functions of (τ, z, u) on the corresponding domains for every fixed 0 ≤ s < t and x, y ∈ R^d. Using Fubini's theorem, we can write down the required equality. Now, using inequalities (6), (8) and Lemma 1, we obtain a bound with some positive constant C. Combining this inequality with (8), we obtain (4), where it is taken into account that 0 ≤ s < t ≤ T. The lemma is proved.

Perturbation equation solving
Let us consider the perturbation equation (0 ≤ s < t, x, y ∈ R^d)

G(s, x, t, y) = g(s, x, t, y) + ∫_s^t dτ ∫_{R^d} g(s, x, τ, z)(b(τ, z), ∇^β G(τ, ·, t, y)(z)) dz.   (9)

Formally applying the operator ∇^β to both sides of equation (9) with respect to the variable x, we can search for its solution in the form

G(s, x, t, y) = g(s, x, t, y) + ∫_s^t dτ ∫_{R^d} g(s, x, τ, z)(b(τ, z), v(τ, z, t, y)) dz,   (10)

where the function (v(s, x, t, y))_{0≤s<t, x,y∈R^d} is a solution to the following equation

v(s, x, t, y) = v_0(s, x, t, y) + ∫_s^t dτ ∫_{R^d} ∇^β g(s, ·, τ, z)(x)(b(τ, z), v(τ, z, t, y)) dz,   (11)

in which v_0(s, x, t, y) = ∇^β g(s, ·, t, y)(x).   (12)

Theorem 1. Let assumptions A_1)–A_3) hold with 0 < β < 1, and let b be a measurable R^d-valued function belonging to L_p([0, T] × R^d) for every T > 0, with p large enough that θ = 1 − ((d + α)/p + β)/α > 0. Then the following statements are true: there exists a unique solution to equation (11) in the class of functions satisfying estimate (13); the function (10) is a solution to equation (9) and satisfies estimate (14) for all 0 ≤ s < t ≤ T, x, y ∈ R^d and every T > 0, where the positive constant C_T may depend on T.
Before proving this theorem, let us prove the following auxiliary statement.
Lemma 3. If the assumptions of Theorem 1 hold and an R^d-valued function (v(s, x, t, y))_{0≤s<t, x,y∈R^d} satisfies inequality (13), then equality (15) is fulfilled.
Proof. Note that the integral on the right-hand side of (15) converges (it suffices to use inequalities (4), (13) and Lemma 1). To prove the existence of the left-hand side of (15), we use representation (1) of the operator ∇^β. For fixed 0 ≤ s < t ≤ T and y ∈ R^d, the function under consideration is bounded and Lipschitz continuous. Indeed, it suffices to prove the boundedness of the corresponding integral as a function of x ∈ R^d for fixed 0 ≤ s < t ≤ T and y ∈ R^d in the cases ∇^0 = I (the identity operator) and ∇^1 = ∇ (the gradient).
Using inequalities (3), (13) and Lemma 1, for all x ∈ R^d, we can write the corresponding bounds (k = 0, 1). Together with the operator ∇^β, we consider a family of truncated operators {∇_β^{(ε)} : ε > 0}, obtained by restricting the integral in (1) to {|z| > ε}, for which ∇_β^{(ε)} f → ∇^β f as ε → 0+ for every bounded and Lipschitz continuous function f.
Using inequalities (3), (13), we can state that the corresponding inequality holds. Using Hölder's inequality, we can write a further bound, which holds for all 0 ≤ s < t ≤ T, x, y, u ∈ R^d. Here the number q ≥ 1 is defined by the relation 1/p + 1/q = 1 (if p = +∞ then q = 1), and the norm of the function b is taken in the space L_p([0, T] × R^d). It is easy to see that the integrals on the right-hand side satisfy the conditions of Lemma 1; therefore these integrals are finite for all fixed 0 ≤ s < t ≤ T, x, y, u ∈ R^d. As a consequence, the corresponding function is integrable for small enough ε > 0 and every fixed 0 ≤ s < t ≤ T, x, y ∈ R^d. Thus, using Fubini's theorem, we obtain that equality (16) holds for each ε > 0. So it is enough to pass to the limit as ε → 0+ in equality (16) to complete the proof of the lemma.
Proof of Theorem 1. Let us solve equation (11), for all fixed 0 ≤ s < t ≤ T, x, y ∈ R^d, by the method of successive approximations. Namely, consider the sequence of functions (v_k(s, x, t, y))_{0≤s<t≤T, x,y∈R^d}, k = 0, 1, 2, ..., given by the recurrence relation, where v_0 is defined by (12).
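The method of successive approximations used here is the classical Picard iteration for Volterra-type integral equations. A minimal numerical sketch with a toy scalar kernel (not the kernel of equation (11); all names are illustrative):

```python
import numpy as np

# Successive approximations for the toy Volterra equation
#   u(t) = 1 + \int_0^t u(tau) d tau   (exact solution: u(t) = e^t),
# discretized with the trapezoidal rule on a uniform grid.
t = np.linspace(0.0, 1.0, 1001)
h = t[1] - t[0]

def volterra_int(u):
    # cumulative trapezoidal integral of u from 0 to each grid point
    inc = 0.5 * h * (u[1:] + u[:-1])
    return np.concatenate(([0.0], np.cumsum(inc)))

u = np.ones_like(t)            # u_0
for _ in range(30):            # u_{k+1} = u_0 + \int_0^t u_k
    u = 1.0 + volterra_int(u)

assert np.max(np.abs(u - np.exp(t))) < 1e-6
```

For this toy kernel the k-th iterate is the k-th Taylor partial sum of e^t, mirroring the factorial-type decay of the iterates that drives convergence in the proof.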
Since lim_{k→∞} B(1 + kθq, θq) = lim_{k→∞} B((k + 1)θq, 1) = 0, the series Σ_{k=0}^∞ v_k(s, x, t, y) converges uniformly with respect to x, y ∈ R^d and locally uniformly with respect to 0 ≤ s < t. Let (v(s, x, t, y))_{0≤s<t, x,y∈R^d} be the sum of this series. The function v is a solution to equation (11), and the required estimate holds for all 0 ≤ s < t ≤ T, x, y ∈ R^d and every T > 0 with some constant C_T > 0 depending on T.
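The convergence driver is the decay of these Beta factors. A quick numerical check (the value of θq below is hypothetical) that B(1 + kθq, θq) decreases toward 0 and that B(a, 1) = 1/a:

```python
from math import lgamma, exp

def beta_fn(a, b):
    """Euler Beta function B(a, b) via log-gamma, stable for large arguments."""
    return exp(lgamma(a) + lgamma(b) - lgamma(a + b))

theta_q = 0.4   # hypothetical value of the product theta * q from the text
vals = [beta_fn(1 + k * theta_q, theta_q) for k in range(0, 201, 20)]

# B(1 + k*theta_q, theta_q) is strictly decreasing in k and tends to 0,
# which is what makes the series of successive approximations converge.
assert all(a > b for a, b in zip(vals, vals[1:]))

# B(a, 1) = 1/a, so B((k + 1)*theta_q, 1) -> 0 as well:
assert abs(beta_fn(10.0, 1.0) - 0.1) < 1e-12
```

Working with log-gamma avoids the overflow that a direct Γ(a)Γ(b)/Γ(a + b) evaluation would hit for large k.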
The uniqueness of this solution follows from the fact that the difference v* of any two such solutions satisfies the homogeneous equation. Therefore, since v* satisfies estimate (13), using inequality (2), one can obtain the corresponding bound (0 ≤ s < t ≤ T, x, y ∈ R^d) for all k ∈ N, T > 0, where R_{k,T} is defined by (19). Since lim_{k→∞} R_{k,T} T^{kθ} = 0 for all T > 0, this means that v*(s, x, t, y) ≡ 0.
Let us define the function G by equality (10) with the function v just constructed. Using (2), (3) and Hölder's inequality, we can write a chain of inequalities showing that G satisfies (14), where C_T (in the last expression) is some positive constant which may depend on T.
Let us prove that ∇^β G(s, ·, t, y)(x) ≡ v(s, x, t, y). Using the statement of Lemma 3, we obtain the required equality. Consequently, the function G is a solution to equation (9) and it satisfies estimate (14). The theorem is proved.

Evolution family of operators
Let us define the two-parameter family of operators {T_st : 0 ≤ s < t} on the space C_b(R^d) of continuous bounded functions by the equality

T_st ϕ(x) = ∫_{R^d} G(s, x, t, y)ϕ(y) dy, ϕ ∈ C_b(R^d).   (20)

Similarly to the proof of Theorem 1, one can prove the following statement.
for all 0 ≤ s < t ≤ T, x ∈ R^d and every T > 0. Here v_0 is defined by (12) and w_0(s, x, t, ϕ) = ∫_{R^d} v_0(s, x, t, y)ϕ(y) dy.

Proof. Equation (21) is obtained from equation (11) by multiplying it by the function ϕ and integrating with the use of Fubini's theorem. The justification of the use of Fubini's theorem is based on estimates (4), (13) (see also (12)). Indeed, for each T > 0 and every 0 ≤ s < t ≤ T, x ∈ R^d, the required bound holds with some constant C_T > 0. Here we used the well-known formula

∫_s^t (t − τ)^{a−1}(τ − s)^{κ−1} dτ = B(a, κ)(t − s)^{a+κ−1},

which is valid for all a > 0 and κ > 0.
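The Beta-integral formula invoked above, ∫_s^t (t − τ)^{a−1}(τ − s)^{κ−1} dτ = B(a, κ)(t − s)^{a+κ−1} for a, κ > 0, can be checked numerically; the values of s, t, a, κ below are arbitrary, and the midpoint rule is used because the integrand may be singular at both endpoints:

```python
import numpy as np
from math import lgamma, exp

def beta_fn(a, b):
    # Euler Beta function via log-gamma
    return exp(lgamma(a) + lgamma(b) - lgamma(a + b))

s, t, a, kappa = 0.3, 1.7, 0.7, 0.6   # illustrative values; a, kappa > 0
n = 400_000
# midpoint rule tolerates the integrable endpoint singularities
tau = s + (t - s) * (np.arange(n) + 0.5) / n
integrand = (t - tau) ** (a - 1) * (tau - s) ** (kappa - 1)
lhs = integrand.sum() * (t - s) / n
rhs = beta_fn(a, kappa) * (t - s) ** (a + kappa - 1)
assert abs(lhs - rhs) / rhs < 1e-3
```

The substitution τ = s + (t − s)σ reduces the integral to (t − s)^{a+κ−1} ∫_0^1 (1 − σ)^{a−1}σ^{κ−1} dσ, which is the standard Beta integral.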

The next theorem describes the properties of the family of operators (20).
Theorem 2. Let the assumptions of Theorem 1 hold. Then the following statements are true:
• the operators T_st, 0 ≤ s < t, are linear and bounded on C_b(R^d);
• the family of operators {T_st : 0 ≤ s < t} has the evolution property, that is, T_sτ T_τt = T_st for all 0 ≤ s < τ < t;
• w-lim_{s↑t} T_st = I, where I is the identity operator.
Proof. The linearity of the operator T_st is evident. Let us prove its boundedness. If ϕ ∈ C_b(R^d), then using inequality (14) we can write, with ‖ϕ‖ = max_{x∈R^d} |ϕ(x)|, the bound |T_st ϕ(x)| ≤ C_T ‖ϕ‖ for all 0 ≤ s < t ≤ T and each T > 0. Therefore the operators T_st are bounded. Next, if ϕ(x) ≡ 1, then, by (10), T_st ϕ(x) splits into ∫_{R^d} g(s, x, t, y) dy and a term involving w. The function w(s, x, t, 1) = ∫_{R^d} v(s, x, t, y) dy, 0 ≤ s < t, x ∈ R^d, is a solution to equation (21) with w_0(s, x, t, 1) ≡ 0. So w(s, x, t, 1) ≡ 0 and T_st ϕ(x) = ∫_{R^d} g(s, x, t, y) dy ≡ 1.
Although the evolution property can be proved in a standard way (see, for example, [10, 11, 13, 15, 16]), we provide the proof here. For this, let us choose arbitrary 0 ≤ s < u < t, x ∈ R^d and ϕ ∈ C_b(R^d). Using (21), one can obtain (by changing the order of integration) that w(s, x, u, T_ut ϕ) satisfies the same equation as w(s, x, t, ϕ). Here we used the well-known relation (the Chapman–Kolmogorov equation)

g(s, x, t, y) = ∫_{R^d} g(s, x, u, z)g(u, z, t, y) dz,

which is true for all 0 ≤ s < u < t, x, y ∈ R^d. Since equation (21) has a unique solution, we conclude that w(s, x, t, ϕ) = w(s, x, u, T_ut ϕ), and the evolution property is proved.
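The Chapman–Kolmogorov relation is easy to verify numerically in the Gaussian case α = 2, where g is known in closed form (a sketch with illustrative parameters):

```python
import numpy as np

def g(s, x, t, y, c=1.0):
    """Gaussian transition density: the alpha = 2 case, variance 2*c*(t - s)."""
    v = 2.0 * c * (t - s)
    return np.exp(-(y - x) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

s, u, t = 0.0, 0.4, 1.0
x, y = -0.3, 0.8
z = np.linspace(-15.0, 15.0, 60_001)
dz = z[1] - z[0]

# Chapman-Kolmogorov: integrate over the intermediate point z at time u
lhs = np.sum(g(s, x, u, z) * g(u, z, t, y)) * dz
assert abs(lhs - g(s, x, t, y)) < 1e-8
```

The same identity holds for the α-stable kernels, since the symbols add in the exponent under composition of time intervals.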
The last statement of this theorem follows from two facts: first, using (22), we have the equality w-lim_{s↑t} T^0_st = I, and second, the remaining term vanishes as s ↑ t (note that 1 ≤ q < (d + α)/(d + 1)). This completes the proof of the theorem.
Remark 2. We cannot state that the operators T_st preserve the cone of non-negative functions; that is, the function T_st ϕ(x) can take negative values even if ϕ(x) ≥ 0, x ∈ R^d. The example of an α-stable process with b(t, x) ≡ b ∈ R^d confirms this fact. Exactly as it is done in [1] for the case β = α − 1, we can obtain the analogous equality in our case. The function G here takes negative values, being the Fourier transform of a function that is not positive definite.
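For the isotropic α-stable symbol c|λ|^α and a constant vector b, the perturbed kernel can be written down directly in Fourier variables; the following display is our sketch of the equality alluded to above (conventions may differ from those of [1] by signs and constants):

```latex
G(s,x,t,y) = \frac{1}{(2\pi)^d}\int_{\mathbb{R}^d}
  \exp\Bigl( i(\lambda, y - x)
   - (t - s)\bigl( c|\lambda|^{\alpha} - i(b,\lambda)|\lambda|^{\beta-1} \bigr) \Bigr)\, d\lambda .
```

For β = 1 the drift term i(b, λ) merely shifts the kernel, but for β < 1 the exponent is no longer the logarithm of a characteristic function of a probability law, which is consistent with G taking negative values.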
Thus, the family of evolution operators {T_st : 0 ≤ s < t} does not define a Markov process but only a pseudo-process possessing a "Markov property".

Cauchy problem
In this section, we fix some T > 0 and prove that the function u(s, x, t) = T_st ϕ(x), 0 ≤ s < t ≤ T, x ∈ R^d, is a solution (in some sense) to the Cauchy problem (23), (24). The following lemma estimates the difference of the functions Ĝ and G constructed in Theorem 1 from two coefficients b̂ and b; its estimate (25) is valid for all 0 ≤ s < t ≤ T, x, y ∈ R^d and each T > 0 with some positive constant C_T.
Proof. Using (10), we can obtain a representation of the difference Ĝ(s, x, t, y) − G(s, x, t, y), valid for all 0 ≤ s < t, x, y ∈ R^d, in terms of W(s, x, t, y) = (b̂(s, x), v̂(s, x, t, y)) − (b(s, x), v(s, x, t, y)), in which v̂ and v are solutions to the equations obtained from (11) by replacing the function b with the functions b̂ and b, respectively. Relation (11) leads us to equation (27), where W_0(s, x, t, y) = (b̂(s, x) − b(s, x), v_0(s, x, t, y)). Considering (4), we obtain an inequality valid for all 0 ≤ s < t ≤ T, x, y ∈ R^d and each T > 0. Moreover, for each T > 0, using inequality (2), we obtain the corresponding estimates, where, as above, θ = 1 − ((d + α)/p + β)/α, q = p/(p − 1), and C is the maximum of the positive constants derived from inequality (2) in the four cases considered. Then we can write down the resulting equation. This equation can be solved by the method of successive approximations, i.e., its solution can be found in the form W(s, x, t, y) = Σ_{k=0}^∞ W*_k(s, x, t, y), where the terms of this series satisfy the corresponding recurrence relation. To justify this, note the first-step bound; moreover, by mathematical induction one can prove the general estimate, in which C_T is some positive constant depending on T, and a refined bound for k ≥ 2. Therefore the series Σ_{k=0}^∞ W*_k(s, x, t, y) converges uniformly with respect to x, y ∈ R^d and locally uniformly with respect to 0 ≤ s < t. So its sum W is a solution to equation (27). In addition, we obtain an estimate valid for all 0 ≤ s < t ≤ T, x, y ∈ R^d and each T > 0, where C_T is some positive constant depending on T. Using (26), (28) and (2) together with the Hölder inequality, we obtain (25).
Lemma 6. Let the function w be defined as in Lemma 4. Then for every 0 ≤ s < t ≤ T and x, y ∈ R^d the following inequality holds with some positive constant C_T depending on T.
In the next theorem, we construct a generalized solution to the Cauchy problem formulated at the beginning of the section. We denote by v_n, w_n and G_n the objects that are defined as v, w and G, respectively, with b_n used instead of b. Recall that ∇^β G_n(s, ·, t, y)(x) = v_n(s, x, t, y).

Lemma 4. The function w(s, x, t, ϕ) = ∫_{R^d} v(s, x, t, y)ϕ(y) dy, 0 ≤ s < t, x ∈ R^d, ϕ ∈ C_b(R^d) (v is defined in Theorem 1), is the unique solution, in the class of functions satisfying the inequality |w(s, x, t, ϕ)| ≤ C_T (t − s)^{−β/α}, to equation (21).

Theorem 3.
Let the assumptions of Theorem 1 hold and let the function G be constructed there. Then the function u(s, x, t) = ∫_{R^d} G(s, x, t, y)ϕ(y) dy, 0 ≤ s < t ≤ T, x ∈ R^d, is a generalized solution to the Cauchy problem (23), (24) for each ϕ ∈ C_b(R^d).
Proof. Let us consider a sequence {b_n : n ∈ N} of R^d-valued functions which are infinitely continuously differentiable, have compact supports, and belong to L_p([0, T] × R^d), where p is defined in Theorem 1. Assume that L_p-lim_{n→∞} b_n = b, where b is the function from Theorem 1.
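Such an approximating sequence {b_n} can be produced by mollification: convolve b with a rescaled C_c^∞ bump. A one-dimensional numerical sketch (the particular b, the exponent p and all parameters are illustrative, not taken from the paper):

```python
import numpy as np

p = 3.0
x = np.linspace(-4.0, 4.0, 16001)
dx = x[1] - x[0]
# a discontinuous coefficient b in L^p (sign function cut off to [-1, 1])
b = np.where(np.abs(x) <= 1.0, np.sign(x), 0.0)

def bump(y, eps):
    """C_c^infinity bump supported in [-eps, eps], normalized to unit mass."""
    out = np.zeros_like(y)
    inside = np.abs(y) < eps
    out[inside] = np.exp(-1.0 / (1.0 - (y[inside] / eps) ** 2))
    return out / (out.sum() * dx)

def lp_err(eps):
    rho = bump(x, eps)
    b_eps = np.convolve(b, rho, mode="same") * dx   # b * rho_eps, smooth
    return (np.sum(np.abs(b_eps - b) ** p) * dx) ** (1.0 / p)

# the L^p distance shrinks as the mollification width decreases
errs = [lp_err(e) for e in (0.4, 0.2, 0.1, 0.05)]
assert all(e1 > e2 for e1, e2 in zip(errs, errs[1:]))
assert errs[-1] < 0.5
```

The mollified functions inherit smoothness and compact support from the bump, matching the requirements imposed on b_n above.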
Lemma 5 allows us to state that the sequence of corresponding functions G_n, constructed in Theorem 1 using the functions b_n instead of b, converges to the function G uniformly with respect to y ∈ R^d for each fixed 0 ≤ s < t ≤ T and x ∈ R^d. Let us consider the function f_n(s, x, t) = ∫_{R^d} (b_n(s, x), ∇^β G_n(s, ·, t, y)(x))ϕ(y) dy, 0 ≤ s < t ≤ T, x ∈ R^d.