Lagrange problem for fractional ordinary elliptic system via Dubovitskii–Milyutin method

In this paper, we investigate a weak maximum principle for a Lagrange problem described by a fractional ordinary elliptic system with Dirichlet boundary conditions. The Dubovitskii-Milyutin approach is used to find the necessary conditions. The fractional Laplacian is understood in the sense of the Stone-von Neumann operator calculus.

In recent years, one can observe a growing interest in the subject of fractional Laplacians. It stems from their numerous applications, for example, in mathematical finance (infinitesimal generators of Lévy processes), elastostatics (the Signorini obstacle problem) and hydrodynamics (the fractional Navier-Stokes equation and models of flow in porous media); see [2-4] and the references therein. System (1) can be viewed as a generalization of the Poisson equation. To the best of our knowledge, it was investigated for the first time in paper [2], where optimal control for problem (1)-(2) was examined in the context of the existence of solutions. More precisely, in the case of β ∈ (0, 1), m = 1, with the interval (0, π) replaced by a bounded Lipschitz domain Ω ⊂ R^n (n ≥ 3) and with exterior homogeneous Dirichlet boundary conditions, results concerning the continuous dependence of the sets of solutions on controls (with respect to the strong and weak convergences of controls), as well as a theorem on the existence of a solution to problem (1)-(2), have been obtained. The variational "min" structure of (2) is used in [2]. Similar results are presented in papers [4] and [3] for control systems possessing the "minimax" and "mountain pass" structures, respectively. For some results concerning the existence, uniqueness, stability and sensitivity of solutions to problem (1), we refer to [2-4, 14].

© 2020 Authors. Published by Vilnius University Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Our aim is to derive necessary conditions for problem (1)-(2). To the best of our knowledge, this issue has not been investigated up to now. In our paper, we derive such conditions using the Dubovitskii-Milyutin approach. We are especially interested in the fractional case, but we do not exclude from our study the classical case of β = 1. In optimal control theory, three main approaches to necessary conditions are known: two general (abstract) approaches and one of a direct character. The first general approach is based on the smooth-convex extremum principle of Ioffe and Tikhomirov (cf. [16]), and the second on the Dubovitskii-Milyutin theorem (cf. [11]). The third approach is based on the method of variation of controls. Such a "variational" approach in the fractional case seems to be an open problem. Its difficulty lies in the global character of (−∆)^β x: in the case of systems containing the classical derivative, the local (pointwise) character of this derivative is used in an essential way. We use the Dubovitskii-Milyutin method because it allows us to avoid convexity-type assumptions on f and f_0 (an assumption of this type is necessary in the case of the smooth-convex extremum principle). The only assumptions on f, f_0 and their gradients (apart from the assumptions of Proposition 7 on the surjectivity of the differential F′(x*, u*)) ensure the differentiability of the mappings used in the Dubovitskii-Milyutin approach and are quite standard from the point of view of differential calculus in function spaces.
Our result is analogous to the corresponding Pontryagin maximum principle for classical ordinary systems: the necessary conditions take the form of an adjoint system and a minimum condition. Since in this condition the gradients f_u and (f_0)_u appear instead of f and f_0, we call our principle a weak maximum principle. It is worth pointing out that if f and f_0 are smooth and convex in u, then the minimum condition with f_u, (f_0)_u and the minimum condition with f, f_0 are equivalent.
The idea of the Dubovitskii-Milyutin method was presented in papers [9] and [10], but without proofs of the main results. A systematic exposition of this approach, with proofs, is contained in the book [11]. Some generalizations of the method have been derived by the second author in [26] and by Ledzewicz in [19-22]. In the monograph [18] of 2015, applications of the Dubovitskii-Milyutin method and of its generalizations in set-valued optimization are presented (cf. also [17]). Existence of optimal solutions and stability results concerning the case of β = 1 can be found in [5, 6, 23]. Necessary first-order optimality conditions for β = 1 can be deduced in some cases from the results obtained in [12] and [13]. More precisely, in [12], a scalar control system with the boundary conditions x(0) = x(π) = 0 is investigated, together with a cost functional, under additional equality and inequality constraints. Using the direct method (McShane variations of controls), the authors derived a maximum principle of Pontryagin type (it is important there that a does not depend on u). In [13], a control system (not necessarily scalar) with x(0) = x(π) = 0 and cost functional (3) is studied via the Ioffe-Tikhomirov approach. A Pontryagin-type maximum principle has been obtained under a convexity assumption on (D_x F, D_u F, f_0).

The paper consists of two main parts. In the first part, we give some basics from the area of the Dubovitskii-Milyutin method and the Stone-von Neumann operator calculus, with an application to the one-dimensional Dirichlet-Laplace operator. In the second part, we derive necessary optimality conditions in the form of a minimum condition and an adjoint system.

Preliminaries
In this section, we give some basics concerning the Dubovitskii-Milyutin method, the Stone-von Neumann operator calculus and the one-dimensional Dirichlet-Laplace operator of fractional order.

Dubovitskii-Milyutin method
For the results of this section, we refer to [11]. Let X be a linear topological space with the dual space (the space of linear continuous functionals on X) denoted by X*. If K is a cone in X with vertex at the point 0 (by a cone we mean a set K such that tK = K for any t > 0), then the conjugate cone K* is defined by

K* = {f ∈ X* : f(x) ≥ 0 for all x ∈ K}.

Of course, the conjugate cone is a convex cone with vertex at 0.

Let F : X → R be a functional. We say that a vector h ∈ X is a direction of decrease of the functional F at a point x_0 if there exist a neighborhood U of h, ε_0 > 0 and α < 0 such that

F(x_0 + εu) ≤ F(x_0) + εα for all ε ∈ (0, ε_0), u ∈ U.

We say that a vector h ∈ X is a feasible direction for the set Q ⊂ X at a point x_0 ∈ Q if there exist a neighborhood U of h and ε_0 > 0 such that

x_0 + εu ∈ Q for all ε ∈ (0, ε_0), u ∈ U.

We say that a vector h ∈ X is a tangent direction to a set Q at a point x_0 ∈ Q̄ (Q̄ denotes the closure of the set Q) if there exist ε_0 > 0 and a mapping r : (0, ε_0) → X such that

x_0 + εh + r(ε) ∈ Q for ε ∈ (0, ε_0), where r(ε)/ε → 0 as ε → 0+.

One proves that the set of directions of decrease of the functional F at a point x_0 and the set of feasible directions for the set Q at a point x_0 are open cones; the set of tangent directions to a set Q at a point x_0 is a cone.

Now, assume that X is locally convex and consider the problem

F(x) → min, x ∈ Q_1 ∩ ... ∩ Q_{n+1}, (4)

where F : X → R, Q_i ⊂ X, i = 1, ..., n+1. We say that a point x_0 ∈ Q_1 ∩ ... ∩ Q_{n+1} is a local minimum point for problem (4) if there exists a neighborhood V of x_0 such that F(x_0) ≤ F(x) for all x ∈ V ∩ Q_1 ∩ ... ∩ Q_{n+1}. The main role in the remaining part of the paper is played by the following theorem (cf. [11, Thm. 6.1 and Remark 3 following the theorem]).

Theorem 1. Let x_0 be a local minimum point for problem (4), let the cone K_0 of directions of decrease of the functional F at x_0 be nonempty and convex, let the cones K_i, i = 1, ..., n, of feasible directions for the sets Q_i at x_0 be nonempty and convex, and let the cone K_{n+1} of tangent directions to the set Q_{n+1} at x_0 be nonempty and convex. Then there exist functionals f_i ∈ K_i*, i = 0, 1, ..., n+1, not all identically zero, such that

f_0 + f_1 + ... + f_{n+1} = 0.

Now, we shall give characterizations of the defined cones in some typical situations. These results can be found in [11, Thms. 7.5, 8.2, 9.1, resp.].
Proposition 2. If Q is a convex set in a linear topological space E, then the cone K_f of feasible directions for the set Q at a point x_0 ∈ Q is convex and has the form

K_f = {h ∈ E : h = λ(z − x_0), λ > 0, z ∈ Int Q}.

We also have the following characterizations of the conjugate cones (cf. [11, Thms. 10.2, 10.5, resp.]).

Proposition 3. Let F : E → R be differentiable at x_0 with F′(x_0) ≠ 0, and let the cone of directions of decrease of F at x_0 be K_d = {h ∈ E : F′(x_0)h < 0}. Then

K_d* = {−λF′(x_0) : λ ≥ 0}.

Proposition 4. Let Q be a nonempty convex closed set in a linear topological space E with Int Q ≠ ∅. Then

K_f* = {f ∈ E* : f(z) ≥ f(x_0) for all z ∈ Q},

where K_f is the cone of feasible directions for Q at x_0.
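As a simple finite-dimensional illustration (ours, not taken from [11]), let E = R², F(x) = x_1 + x_2, Q = {x ∈ R² : x_1 ≥ 0} and x_0 = 0. Then the cone of decrease, the cone of feasible directions and their conjugates are:

```latex
% Illustration in E = \mathbb{R}^2: F(x) = x_1 + x_2,\ Q = \{x : x_1 \ge 0\},\ x_0 = 0.
K_d = \{\, h \in \mathbb{R}^2 : h_1 + h_2 < 0 \,\}, \qquad
K_d^{*} = \{\, -\lambda\,(1,1) : \lambda \ge 0 \,\},
\\[4pt]
K_f = \{\, h \in \mathbb{R}^2 : h_1 > 0 \,\}, \qquad
K_f^{*} = \{\, (a, 0) : a \ge 0 \,\}.
```

In both cases, the conjugate cone consists exactly of the linear functionals that are nonnegative on the corresponding open cone, in accordance with the definition of K*.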

Basics of Stone-von Neumann operator calculus
The basics of the Stone-von Neumann operator calculus presented in this section come from [1, 24, 25] (cf. also [15] for a more comprehensive coverage of the topic). We give them here for the convenience of the reader. Let H be a real Hilbert space with a scalar product ⟨·, ·⟩ : H × H → R, and let Π(H) be the set of all projections of H onto closed linear subspaces of H. By a spectral measure in R we mean a set function E : B → Π(H), where B is the σ-algebra of Borel subsets of R, that satisfies the following conditions: E(R) = id_H, and E(⋃_n σ_n)x = Σ_n E(σ_n)x for any x ∈ H and any sequence (σ_n) of pairwise disjoint sets from B. By the support of a spectral measure E we mean the complement of the union of all open subsets of R with zero spectral measure. For a fixed x ∈ H, the set function

B ∋ σ ↦ E(σ)x ∈ H (5)

is a countably additive vector measure.
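As a concrete example (a standard one, stated here for orientation, since it is the case used later in the paper), the spectral measure of the one-dimensional Dirichlet Laplacian on (0, π) is purely atomic:

```latex
% Spectral measure of A = -\Delta on (0,\pi) with Dirichlet boundary conditions.
% Eigenpairs: \lambda_k = k^2,\ e_k(t) = \sqrt{2/\pi}\,\sin(kt),\ k \in \mathbb{N}.
E(\sigma)x \;=\; \sum_{k \,:\, k^2 \in \sigma} \langle x, e_k \rangle_{L^2}\, e_k
\quad (\sigma \in \mathcal{B}),
\qquad
\operatorname{supp} E \;=\; \{\, k^2 : k \in \mathbb{N} \,\}.
```

Each E(σ) is the orthogonal projection onto the closed span of the eigenfunctions whose eigenvalues lie in σ, so the defining conditions of a spectral measure are immediate.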
If b : R → R is a bounded Borel measurable function, defined E-a.e., then the integral ∫_{−∞}^{∞} b(λ) E(dλ)x (with respect to the vector measure (5)) is defined in a standard way, with the aid of a sequence of simple functions converging E(dλ)x-a.e. to b (see [1]).
If b : R → R is an unbounded Borel measurable function, defined E-a.e., then, for every x ∈ H satisfying

∫_{−∞}^{∞} b²(λ) d‖E(λ)x‖² < ∞ (6)

(the above integral is taken with respect to the nonnegative measure B ∋ P ↦ ‖E(P)x‖² ∈ R_0^+), there exists the limit of the integrals ∫_{−n}^{n} b(λ) E(dλ)x, n ∈ N, as n → ∞. Let us denote the set of all points x with property (6) by D. One proves that D is a dense linear subspace of H, and by

∫_{−∞}^{∞} b(λ) E(dλ)x, x ∈ D,

one denotes the above limit. The main property of the integral is its self-adjointness, i.e.,

⟨∫_{−∞}^{∞} b(λ) E(dλ)x, y⟩ = ⟨x, ∫_{−∞}^{∞} b(λ) E(dλ)y⟩ for x, y ∈ D.

If σ ∈ B, then by the integral ∫_σ b(λ) E(dλ)x we mean ∫_{−∞}^{∞} χ_σ(λ) b(λ) E(dλ)x, where χ_σ is the characteristic function of the set σ.
The following theorem plays a fundamental role in the spectral theory of self-adjoint operators (below, σ(A) denotes the spectrum of an operator A): for every self-adjoint operator A : D(A) ⊂ H → H, there exists exactly one spectral measure E such that

Ax = ∫_{−∞}^{∞} λ E(dλ)x, x ∈ D(A),

and the support of E coincides with σ(A).
The basic notion in the Stone-von Neumann operator calculus is a function of a self-adjoint operator. Namely, if A : D(A) ⊂ H → H is self-adjoint and E is the spectral measure determined according to the above theorem, then, for any Borel measurable function b : R → R, one defines the operator

b(A)x = ∫_{−∞}^{∞} b(λ) E(dλ)x, x ∈ D(b(A)).

It is known that the spectrum σ(b(A)) of b(A) is given by

σ(b(A)) = cl b(σ(A)),

provided that b is continuous (it is sufficient to assume the continuity of b on σ(A)).
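When A has a purely discrete spectrum, as does the Dirichlet Laplacian used below, the integral reduces to a series. This specialization (a standard computation, included here for orientation) reads:

```latex
% For A with an orthonormal eigenbasis (e_k) and eigenvalues (\lambda_k):
b(A)x \;=\; \sum_{k=1}^{\infty} b(\lambda_k)\,\langle x, e_k\rangle\, e_k,
\qquad
D(b(A)) \;=\; \Bigl\{\, x \in H : \sum_{k=1}^{\infty} b(\lambda_k)^2\,\langle x, e_k\rangle^2 < \infty \,\Bigr\}.
```

Taking b(λ) = λ^β and A = −∆ with Dirichlet conditions on (0, π) (so that λ_k = k² and e_k(t) = √(2/π) sin(kt)) yields the fractional Laplacian (−∆)^β studied in the sequel.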

Remark 1.
To give a meaning to the integral ∫_{−∞}^{∞} b(λ) E(dλ) in the case of a Borel measurable function b : B → R, where B is a Borel set containing the support of the measure E, it is sufficient to extend b to an arbitrary Borel measurable function on R (putting, for example, b(λ) = 0 for λ ∉ B).
One can show that σ((−∆)^β) = {k^{2β} : k ∈ N}, and the corresponding eigenspaces for (−∆) and (−∆)^β are the same. The function (−∆)^β x will be called the Dirichlet-Laplacian of order β of x. We have the following lemmas: the formula ⟨x, y⟩_{∼β} = ⟨(−∆)^β x, (−∆)^β y⟩_{L²} defines a scalar product on D((−∆)^β), and the norms generated by these products are equivalent.
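To make the spectral definition concrete, here is a minimal numerical sketch (ours, not part of the paper): the operator acts diagonally on the sine-series coefficients of x. The function names and the coefficient-dictionary representation are our own choices.

```python
import math

def frac_laplacian_coeffs(coeffs, beta):
    """Spectral fractional Dirichlet Laplacian on (0, pi).

    `coeffs` maps k -> a_k in the expansion x(t) = sum_k a_k sin(k t);
    since sin(k t) is an eigenfunction of -Laplacian with eigenvalue k^2,
    the operator (-Laplacian)^beta multiplies each a_k by k^(2*beta).
    """
    return {k: k ** (2 * beta) * a_k for k, a_k in coeffs.items()}

def evaluate(coeffs, t):
    """Evaluate the (finite) sine series at a point t in (0, pi)."""
    return sum(a_k * math.sin(k * t) for k, a_k in coeffs.items())
```

For β = 1 this reproduces −x″ on trigonometric polynomials; for β = 1/2, the eigenfunction sin(2t) is mapped to 2 sin(2t).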

Maximum principle
Let us consider the following problem:

(−∆)^β x(t) = f(t, x(t), u(t)), t ∈ (0, π) a.e., (1)

J(x, u) = ∫_0^π f_0(t, x(t), u(t)) dt → min, (2)

where x ∈ D((−∆)^β), the controls u belong to the set U = {u ∈ L^∞ : u(t) ∈ M a.e. on (0, π)}, and M ⊂ R^r is a fixed set.

Assumptions
In paper [14], the following proposition has been derived.
If f and its gradients f_x, f_u satisfy growth conditions whose bounds hold for (t, x, u) ∈ (0, π) × R^m × R^r, where a, b ∈ L² and γ, δ : R_0^+ → R_0^+ are continuous functions, then F is of class C¹, and the differential F′(x, u) : D((−∆)^β) × L^∞ → L² is linear and continuous. In a similar way, we obtain a differentiability property of F_0.
Proof. Indeed, it is sufficient to observe that the mapping

(F_0)′(x(·), u(·)) : (h(·), v(·)) ↦ ∫_0^π [(f_0)_x(t, x(t), u(t)) h(t) + (f_0)_u(t, x(t), u(t)) v(t)] dt

is the Gâteaux differential of F_0 at any fixed point (x(·), u(·)) ∈ D((−∆)^β) × L^∞ and that the mapping (x(·), u(·)) ↦ (F_0)′(x(·), u(·)) is continuous. Linearity of (F_0)′(x(·), u(·)) is obvious. Its continuity follows from the estimate (cf. (7)) holding for t ∈ (0, π) a.e. and h(·) ∈ D((−∆)^β). So, to prove that (F_0)′(x(·), u(·)) is the Gâteaux differential of F_0, it is sufficient to check that (F_0)′(x(·), u(·))(h(·), v(·)) is the derivative at the point λ = 0 of the real function of one (real) variable

ψ(λ) = F_0(x(·) + λh(·), u(·) + λv(·)).

In turn, the differentiability of ψ at λ = 0 means the differentiability of an integral with respect to a parameter and follows from the classical theorem on such differentiability. From the fact that convergence in D((−∆)^β) implies almost uniform convergence, and from the growth conditions (10), it follows that the right-hand side of the above estimate tends to 0 as n → ∞.
We have the following proposition.

Conjugate cones
Let (x*, u*) ∈ D((−∆)^β) × L^∞ be fixed. From Propositions 1 and 6 it follows that the cone K_d of directions of decrease of the functional F_0 at the point (x*, u*) has the form

K_d = {(h, v) ∈ D((−∆)^β) × L^∞ : (F_0)′(x*, u*)(h, v) < 0}.

This set is convex and nonempty, provided that (F_0)′(x*, u*) ≠ 0 (vanishing of the differential is equivalent to the equality K_d = ∅). In such a case, in view of Proposition 3, the conjugate cone is the following:

K_d* = {−λ_0 (F_0)′(x*, u*) : λ_0 ≥ 0}.

Using Theorem 2, we assert that if one of the conditions of Proposition 7 is fulfilled for Λ(t) = f_x(t, x*(t), u*(t)), then the cone K_t of tangent directions to the set {(x, u) ∈ D((−∆)^β) × L^∞ : F(x, u) = 0} at the point (x*, u*) is

K_t = ker F′(x*, u*).

It is clear that K_t ≠ ∅. Of course, K_t is a subspace (consequently, a convex cone). So,

K_t* = (ker F′(x*, u*))^⊥,

where (ker F′(x*, u*))^⊥ is the set of linear continuous functionals on D((−∆)^β) × L^∞ vanishing on the subspace ker F′(x*, u*); the set (ker F′(x*, u*))^⊥ is called the annihilator of ker F′(x*, u*). From the lemma on the annihilator, it follows that

K_t* = {(F′(x*, u*))* ℓ : ℓ ∈ (L²)*},

where (F′(x*, u*))* denotes the adjoint operator to F′(x*, u*) : D((−∆)^β) × L^∞ → L². Finally, writing the constraint u ∈ U in the form (x, u) ∈ Q = D((−∆)^β) × U and using the fact that Int Q ≠ ∅ (because Int U ≠ ∅), we assert (cf. Proposition 2) that the cone K_f of feasible directions for the set Q at the point (x*, u*) ∈ Q is the following:

K_f = {(h, v) : (h, v) = λ((z, w) − (x*, u*)), λ > 0, (z, w) ∈ Int Q}.

Consequently, it is nonempty and convex. From Proposition 4 (closedness of M implies closedness of Q) it follows that the conjugate cone K_f* has the form

K_f* = {ℓ : ℓ(z, w) ≥ ℓ(x*, u*) for all (z, w) ∈ Q}.
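For the reader's convenience, the lemma on the annihilator invoked above can be stated as follows (a standard form; the precise formulation used by the authors may differ in details):

```latex
% Annihilator lemma (standard form, a consequence of the closed range theorem).
% If \Lambda : X \to Y is a linear continuous surjection of Banach spaces, then
(\ker \Lambda)^{\perp} \;=\; \operatorname{Im} \Lambda^{*}
\;=\; \{\, \Lambda^{*} y^{*} : y^{*} \in Y^{*} \,\}.
```

Applied with Λ = F′(x*, u*), this converts the conjugate tangent cone K_t* into the range of the adjoint operator, which is precisely what produces the adjoint system in the necessary conditions.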