A sharp error analysis for the DG method of optimal control problems

In this paper, we are concerned with a nonlinear optimal control problem of ordinary differential equations. We consider a discretization of the problem by the discontinuous Galerkin method of arbitrary order r ∈ N ∪ {0}. Under suitable regularity assumptions on the cost functional and on solutions of the state equations, we first show the existence of a local solution to the discretized problem. We then provide sharp estimates for the L²-error of the approximate solutions.


Introduction
In the present work, we discuss discontinuous Galerkin (DG) approximations to a nonlinear optimal control problem (OCP) of ordinary differential equations (ODEs). More precisely, we consider the following optimal control problem: Minimize J(u, x) := ∫₀ᵀ g(t, x(t), u(t)) dt, (1.1) subject to the state equation x′(t) = f(t, x(t), u(t)) for t ∈ (0, T], x(0) = x₀. (1.2) Here u(t) ∈ R^m is the control, and x(t) ∈ R^d is the state of the system at time t ∈ [0, T]. Further, g : [0, T] × R^d × R^m → R and f : [0, T] × R^d × R^m → R^d are given, and the set of admissible controls U_ad ⊂ U := L^∞(0, T; R^m) is given by U_ad := {u ∈ U : u_ℓ ≤ u(t) ≤ u_u a.e.} for some u_ℓ, u_u ∈ R^m. Here the inequality is understood in the component-wise sense.
There has been a large body of work on the numerical computation of the above problem. The numerical schemes require a discretization of the ODEs; for example, the Euler discretization for OCPs of ODEs is well studied for sufficiently smooth optimal controls based on strong second-order optimality conditions [2,13,14]. For optimal control problems in which the control appears linearly, the optimal control may be discontinuous (for instance, of bang-bang type), and such conditions may not be satisfied. In that respect, there have been many studies developing new second-order optimality conditions for optimal control problems with the control appearing linearly [3,21,31,32]. Second-order Runge-Kutta approximations for OCPs were studied in [15]. Recently, the works [16,17] developed a novel stability technique to obtain new error estimates for the Euler discretization of OCPs.
The pseudo-spectral method is also popular for the discretization due to its high-order accuracy for smooth solutions of OCPs [20,33]. However, this high-order accuracy is often lost for bang-bang OCPs, where the solutions may not be smooth enough. To handle this issue, Henriques et al. [24] proposed a mesh refinement method based on a high-order DG method for OCPs of ODEs. The DG method divides the time interval into small subintervals, on which a weak formulation is employed. The test functions are usually taken as piecewise polynomials which may be discontinuous at the boundaries of the subintervals; see Section 2 for a more detailed discussion. We refer to [7,19,34] and the references therein for DG methods for ODEs. It is also worth referring to works on the analysis of the discretization of optimal control problems of PDEs, for example, elliptic problems [1,23,35] and parabolic problems [9,12,25-29]. In addition, the recent works [22,30] studied the discretization of optimal control for fractional diffusion problems.
In this paper, we provide a rigorous analysis of the DG discretization applied to the nonlinear OCP (1.1)-(1.2) with arbitrary order r ∈ N ∪ {0} for general functions f and g with suitable smoothness. Motivated by a recent work of Neitzel and Vexler [29], we impose the non-degeneracy condition (2.4) on an optimal control ū of (1.1)-(1.2). We obtain existence and convergence results for both the semi-discretized and the fully discretized cases. The convergence rates depend on the regularity of the optimal solution ū and its adjoint state, together with the degree of the piecewise polynomials mentioned above; see Section 2 for details.
It is worth noticing that the control is not required to enter the state Eq (1.2) linearly, and the control space U_ad allows for discontinuous controls. The constraints on the controls are defined by lower and upper bounds. Moreover, the cost functional is also given in a general form, not restricted to quadratic ones. We mention that the DG discretization of zeroth order was used in [29] for the optimal control problem of a semi-linear parabolic equation where the control enters the system linearly.
For notational simplicity, we write I := (0, T), X := L²(I; R^d), and (v, w)_I = (v, w)_{L²(I;R^d)}. We also use simplified notations for the L^p norms, 1 ≤ p ≤ ∞. Throughout this paper, for any compact set K ⊂ R^m, we assume that f and g, together with their derivatives in x and u up to second order, are continuous and bounded on [0, T] × R^d × K by some M > 0. We next introduce the control-to-state mapping G : U → X ∩ L^∞(I; R^d), G(u) = x, with x solving (1.2). It induces the reduced cost functional j : U → R₊, u ↦ J(u, G(u)). This makes the optimal control problem (1.1)-(1.2) equivalent to: Minimize j(u) subject to u ∈ U_ad. (1.4) In the proofs of the existence and convergence results, the main task is to show that the strong convexity of j induced by the second-order optimality condition (2.4) is preserved near the optimal control ū, and likewise for its DG-discretized version j_h. This is achieved via the second-order analysis in Section 4. As a preliminary, we also justify that j and j_h are twice differentiable, by establishing the differentiability of the control-to-state mapping G and of its discretized version G_h in the appendix.
In Section 2, we explain the DG discretization of the ODEs and the OCP; we then present the main results for the semi-discretized case and provide some preliminary results. In Section 3, the adjoint problems are studied. Section 4 is devoted to the second-order analysis of the cost functionals j and j_h. In Section 5, we prove the existence of the local solution and obtain the convergence rate for the semi-discretized case. Section 6 establishes the existence and convergence results for the fully discretized case. Finally, in Section 7, we perform several numerical experiments for linear and nonlinear OCPs. In Appendix A, we derive the first- and second-order derivatives of the control-to-state mapping G. Appendix B proves a Grönwall-type inequality for the discretization of the ODE (1.2) involving the control variable; it is used in Appendix C to establish the differentiability of the discrete control-to-state mapping G_h and to obtain its derivatives. In Appendix D, we prove Lemmas 3.3 and 3.5, which express the first derivatives of the cost functionals in terms of the adjoint states. In Appendix E, we derive formulas for the second-order derivatives of the cost functionals.

DG formulation
In this section, we describe the approximation of the OCP (1.1)-(1.2) by the DG method, and then we state the main results for the semi-discrete case. First, we illustrate the discretization of the ordinary differential equation x′(t) = f(t, x(t)) for t ∈ (0, T], x(0) = x₀, (2.1) where f is Lipschitz continuous with respect to x, i.e., |f(t, x) − f(t, y)| ≤ L|x − y| with a constant L > 0. By the Cauchy-Lipschitz theorem, we have the existence and uniqueness of a classical solution x of (2.1). Given an integer N ∈ N, we consider a partition of I into N intervals {I_n}_{n=1}^N given by I_n = (t_{n−1}, t_n) with nodes 0 =: t₀ < t₁ < · · · < t_{N−1} < t_N := T. Let h_n be the length of I_n, i.e., h_n = t_n − t_{n−1}, and set h := max_{1≤n≤N} h_n. For a piecewise continuous function ϕ : [0, T] → R^d, we also define the one-sided limits ϕ(t_n^±) := lim_{s→0⁺} ϕ(t_n ± s). The jump across the node t_n is denoted by [ϕ]_n := ϕ(t_n^+) − ϕ(t_n^−). The DG space is X_h^r := {ϕ : ϕ|_{I_n} ∈ P_r(I_n), 1 ≤ n ≤ N}, where P_r(I_n) represents the set of all polynomials of t up to order r defined on I_n with coefficients in R^d. Then the DG approximate solution x_h ∈ X_h^r of (2.1) is given as the solution of Σ_{n=1}^N ∫_{I_n} (x_h′(t) − f(t, x_h(t))) · ϕ(t) dt + Σ_{n=2}^N ([x_h]_{n−1}, ϕ(t_{n−1}^+)) + (x_h(0^+), ϕ(0^+)) = (x₀, ϕ(0^+)) for all ϕ ∈ X_h^r. Here (·, ·) denotes the inner product in R^d. The associated a priori error estimate (Theorem 2.1) holds with a constant C > 0 determined by L, T, and r.
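To make the scheme concrete in the simplest case, the following minimal sketch (not code from the paper) applies DG with r = 0 to the illustrative test equation x′ = −x: the DG solution is piecewise constant, exact integration of the right-hand side reduces the jump relation to an implicit-Euler-type update, and halving h halves the endpoint error, consistent with order r + 1 = 1.

```python
import math

def dg0_solve(x0, T, N):
    """DG(0) time stepping for the test equation x' = -x: on each I_n the DG
    solution is a constant x_n fixed by the jump relation
    x_n - x_{n-1} = int_{I_n} (-x_n) dt = -h * x_n,
    which solves in closed form as x_n = x_{n-1} / (1 + h)."""
    h = T / N
    x = x0
    for _ in range(N):
        x = x / (1.0 + h)
    return x

def endpoint_error(N):
    # error at T = 1 against the exact solution e^{-T}
    return abs(dg0_solve(1.0, 1.0, N) - math.exp(-1.0))

ratio = endpoint_error(200) / endpoint_error(400)
print(ratio)  # close to 2: first-order convergence for r = 0
```

For r ≥ 1 the local solve becomes a small (r + 1)-dimensional nonlinear system per subinterval; the piecewise-constant case above is only meant to exhibit the jump-relation mechanics.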
Now, for a given u ∈ U, we consider the approximate solution x_h ∈ X_h^r of the control problem (1.2), satisfying the DG formulation with right-hand side f(t, x_h, u), (2.3) for all ϕ ∈ X_h^r. Throughout the paper, we will consider local solutions ū of (1.4) satisfying the following non-degeneracy condition. Assumption 1. Let ū ∈ U_ad be a local solution of (1.1). We assume that it satisfies j″(ū)(v, v) ≥ γ ‖v‖²_{L²(I)} for all v ∈ U, (2.4) for some γ > 0.
The differentiability of the cost functional j(u) = J(u, G(u)) with respect to u ∈ U follows from the differentiability of the solution mapping G justified in Appendix A (see also the proofs of Lemmas 3.3 and E.1). Note that the above second-order optimality condition holds under suitable regularity assumptions on the functions f and g and on the solutions; see Remark E.2 for a detailed discussion. We refer to [4,5] for further discussion of second-order conditions, and to [8,10,11] for optimal control problems of PDEs.
In addition, we assume that ū ∈ U_ad has bounded total variation, i.e., V(ū) ≤ R/2 for a fixed value R > 0. Here the total variation V(f) of f ∈ L^∞(0, T) is defined as V(f) := sup_P Σ_i |f(t_{i+1}) − f(t_i)|, where the supremum is taken over all partitions P : 0 = t_0 < t_1 < · · · < t_M = T. Considering the discrete control-to-state mapping G_h : U → X_h^r, G_h(u) = x_h, where x_h is the solution of (2.3), we introduce the discrete cost functional j_h : U → R₊, u ↦ J(u, G_h(u)). Let us consider the following discretized version of (1.1): Minimize j_h(u) subject to u ∈ U_ad ∩ V_R. (2.5) We now define the local solution to (2.5) as follows.
In the first main result, we prove the existence of the local solution to the approximate problem (2.5).
The second main result is the following convergence estimate of the approximate solutions.
Theorem 2.4. Let ū ∈ U_ad ∩ V_{R/2} be a local solution of (1.4) satisfying Assumption 1, let ū_h be the approximate solution found in Theorem 2.3, and let λ(ū) be the adjoint state defined in Definition 3.1 below. Assume that the state x̄ = G(ū) belongs to W^{k₁,∞}(I; R^d) and the adjoint state λ(ū) belongs to W^{k₂,∞}(I; R^d) for some k₁, k₂ ≥ 1. Then we have ‖ū_h − ū‖_{L²(I)} ≤ C h^{min{k₁,k₂,r+1}}. The required regularity of the solutions x̄ and λ(ū) can be obtained under suitable smoothness assumptions on f, g, and ū; see Remark 3.2 below. The above result establishes the error estimate for the discretization of the ODEs in the OCP. We will give the proofs of Theorems 2.3 and 2.4 in Section 5. On the other hand, to implement a numerical computation for the OCP (1.4), one also needs to consider an approximation of the control space by a finite-dimensional space. In Section 6, we will see that the proof of Theorem 2.4 can be extended to an error analysis incorporating the discretization of the control space.
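As an aside on the bounded-variation constraint V(ū) ≤ R/2 used above: for a piecewise-constant (e.g. bang-bang) control, the total variation is just the sum of its jump magnitudes. The sketch below (with hypothetical sample values) computes it and tests membership in the ball V_R.

```python
def total_variation(values):
    """Total variation of a piecewise-constant function given by its values
    on consecutive subintervals: the sum of the jump magnitudes."""
    return sum(abs(b - a) for a, b in zip(values, values[1:]))

def in_VR(values, R):
    # membership in the ball V_R = {u : V(u) <= R}
    return total_variation(values) <= R

bang_bang = [0.0, 1.0, 0.0, 1.0]   # hypothetical control with three unit jumps
print(total_variation(bang_bang))  # 3.0
print(in_VR(bang_bang, R=4.0))     # True
```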

Adjoint states
This section is devoted to studying the adjoint states for the OCP (1.1) and its discretized version (2.5). We introduce a bilinear form b(·, ·) for x ∈ W^{1,∞}(0, T) and ϕ ∈ X. Then, for a fixed control u ∈ U and initial data x₀ ∈ R^d, a weak formulation of (1.2) can be written as b(x, ϕ) = (f(·, x, u), ϕ)_I for all ϕ ∈ X, with x(0) = x₀.
Definition 3.1. For a control u ∈ U, we define the adjoint state λ = λ(u) ∈ W^{1,∞}(0, T) as the solution of the terminal value problem −λ′(t) = ∂_x f(t, x(t), u(t))^⊤ λ(t) + ∂_x g(t, x(t), u(t)), λ(T) = 0, where x = G(u). For u, v ∈ U, the derivative of j at u in the direction v is defined by j′(u)v := lim_{s→0} (j(u + sv) − j(u))/s. It is well known that the derivative of the cost functional can be calculated with the adjoint state, as described below.
Proof. For the completeness of the paper, we give the proof in Appendix D.
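To make the adjoint representation of j′(u)v concrete, the following sketch uses an illustrative scalar problem that is not from the paper: f(t, x, u) = −x + u and g = (x² + u²)/2. It solves the state forward, the adjoint backward, evaluates j′(u)v = ∫₀ᵀ (∂_u g + λ ∂_u f) v dt = ∫₀ᵀ (u + λ) v dt, and checks it against a finite-difference quotient of the cost.

```python
def solve_state(u, x0, h):
    # forward Euler for the toy dynamics x' = -x + u
    x = [x0]
    for un in u:
        x.append(x[-1] + h * (-x[-1] + un))
    return x

def solve_adjoint(x, h):
    # implicit backward steps for -lam' = d_x g + (d_x f) lam = x - lam, lam(T) = 0
    N = len(x) - 1
    lam = [0.0] * (N + 1)
    for n in range(N - 1, -1, -1):
        lam[n] = (lam[n + 1] + h * x[n]) / (1.0 + h)
    return lam

def cost(u, x0, h):
    # j(u) = int_0^T (x^2 + u^2)/2 dt, rectangle rule
    x = solve_state(u, x0, h)
    return sum(h * 0.5 * (x[n] ** 2 + u[n] ** 2) for n in range(len(u)))

def gradient_dot(u, v, x0, h):
    # adjoint representation: j'(u)v = int (u + lam) v dt for this toy problem
    x = solve_state(u, x0, h)
    lam = solve_adjoint(x, h)
    return sum(h * (u[n] + lam[n]) * v[n] for n in range(len(u)))

N, T, x0 = 2000, 1.0, 1.0
h = T / N
u = [0.5] * N
v = [1.0] * N
eps = 1e-4
fd = (cost([a + eps * b for a, b in zip(u, v)], x0, h)
      - cost([a - eps * b for a, b in zip(u, v)], x0, h)) / (2 * eps)
adj = gradient_dot(u, v, x0, h)
print(abs(fd - adj))  # small (O(h)): the two gradient evaluations agree
```

The continuous adjoint is discretized here rather than the exact discrete adjoint, so agreement is only up to O(h); the check is a consistency test, not a derivation.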
Next we describe the adjoint problem for the approximate problem (2.5). For x, ϕ ∈ X_h^r, we define the discrete bilinear form B(·, ·). For the approximate solution x_h = G_h(u) ∈ X_h^r, the Eq (2.3) with control u ∈ U can then be written as B(x_h, ϕ) = (f(·, x_h, u), ϕ)_I for all ϕ ∈ X_h^r. Now we define the adjoint equation for the approximate problem (2.5).
Definition 3.4. The adjoint state λ_h = λ_h(u) ∈ X_h^r is defined as the solution of the discrete adjoint equation (3.8). In Appendix D, we briefly explain how the adjoint Eq (3.8) can be derived from the Lagrangian associated with (2.5). We also have an analogous result to Lemma 3.3.
Proof. The proof is given in Appendix D.
In order to prove the main results in Section 2, we shall use the following lemma.
Proof. We recall the identities from (3.4) and (3.8). As an auxiliary function, we consider ζ_h ∈ X_h^r solving the DG discretization of (3.11) in a backward way (see Lemma 3.7 below). Combining the resulting estimates with (3.12) and (3.14) gives a bound in terms of the function R : I → R^d, which satisfies R(t) = O(h^{min{k₁,r+1}}). This, together with Lemma B.4, yields the desired bound; combining it with (3.15) completes the proof.
Abusing notation for simplicity, let us define J as the interval I equipped with the partition 0 = s₀ < s₁ < · · · < s_{N−1} < s_N = T, where s_j = t_{N−j}. We also write X_{h,J}^r for the DG space X_h^r with this new partition. Then we have the following lemma.
By integration by parts and rearranging, we obtain the desired equation B(W, ψ) = (F(t, W), ψ)_I. The proof is finished.

Second order analysis
In this section, we analyze the second-order conditions for the functionals j and j_h, which are essential for the existence and convergence estimates in the following sections.

Second order condition for j
We defined the solution mapping G : U → X ∩ L^∞(I; R^d) in the previous section. Here we present Lipschitz estimates for the solution mapping G, for its derivative G′, and for the solution of the adjoint Eq (3.4).
Lemma 4.1. There exists C > 0 such that the stated Lipschitz estimates hold for all u, û ∈ U_ad and v ∈ U. Proof. Let us denote x = G(u) and x̂ = G(û). It follows from (3.2) that the difference x − x̂ is controlled by u − û. Using this estimate and applying the Grönwall inequality in (4.1), we get the first inequality. For the second one, if we set y = G′(u)v and ŷ = G′(û)v, then it follows from Lemma A.1 that y − ŷ solves a linearized equation; this, together with the first assertion, yields the second estimate. For notational simplicity, we denote λ = λ(u) and λ̂ = λ(û). Then λ − λ̂ satisfies the adjoint-type equation with (λ − λ̂)(T) = 0. By applying the Grönwall inequality in a backward way, and using the uniform bound on λ, we obtain the third estimate. This completes the proof.
We now show that the second-order condition for j holds near the optimal local solution ū ∈ U_ad.
Lemma 4.2. Suppose that ū ∈ U_ad satisfies Assumption 1. Then there exists ε > 0 such that the second-order condition is preserved for all u ∈ U_ad with ‖u − ū‖_{L²(I)} ≤ 2ε. Indeed, it follows from Lemma 4.1 that j″ is Lipschitz continuous near ū; combining this with Assumption 1 and choosing ε = γ/(4C) > 0, we obtain the desired result.
As a consequence of this lemma, we have the following result.
Theorem 4.3. Let ū ∈ U_ad satisfy the first-order optimality condition and Assumption 1. Then there exists a constant ε > 0 such that the quadratic growth estimate holds for any u ∈ U_ad with ‖u − ū‖_{L²(I)} ≤ 2ε.
Proof. Choose ε > 0 as in Lemma 4.2. By Taylor's theorem, we expand j(u) around ū. On the other hand, the first-order optimality condition implies j′(ū)(u − ū) ≥ 0. Moreover, for ū_s := ū + s(u − ū) with s ∈ [0, 1], we also find ‖ū − ū_s‖_{L²(I)} ≤ s ‖u − ū‖_{L²(I)} ≤ 2ε.
Using these observations and Lemma 4.2, we conclude the desired estimate. The proof is finished.

Second order condition for j h
In this part, we investigate the second-order condition for the discrete cost functional j_h. As in the previous subsection, we first provide Lipschitz estimates for G_h and the discrete adjoint state.
Lemma 4.4. Let u, û ∈ U_ad and v ∈ U be given. Then there exists C > 0, independent of h ∈ (0, 1), such that the analogous Lipschitz estimates hold. Proof. The first and the third assertions are proved in Lemma B.5. The second estimate is proved in Lemma C.2. Proof (of Lemma 4.5). Define ỹ : [0, T] → R^d as the solution of the auxiliary linearized equation (4.4). Recall from Lemma A.1 that y satisfies the linearized state equation. Combining these two equations and using the Grönwall inequality together with (4.2) and (3.13), we find that ‖y − ỹ‖ ≤ Ch ‖v‖_{L²(I)}. (4.5) On the other hand, y_h satisfies the DG discretization of (4.4) in a backward way, in view of Lemma 3.7. Thus, we may use Theorem 2.1 to obtain an error estimate for ỹ − y_h; this, together with (4.5), gives the estimate of the lemma. The proof is finished.
Proof. We first claim (4.6). Let y = G′(u)v and y_h = G_h′(u)v. It follows from Lemmas E.1 and E.3 that the second derivatives of j and j_h admit integral representations. In order to show (4.6), by a similar argument to the proof of Lemma 4.2, it suffices to show that there exists C > 0, independent of h, such that the estimates (4.7) and (4.8) hold, including ∫₀ᵀ |y²(t) − y_h²(t)| dt ≤ Ch ‖v‖²_{L²(I)}.
The first and second inequalities in (4.7) hold due to Theorem 2.1 and Lemma 4.5, and the third one in (4.7) is proved in (C.2). By Lemma 3.6, the second inequality in (4.8) holds; the first inequality in (4.8) follows similarly. Finally, we obtain the bound Ch ‖v‖²_{L²(I)} due to (4.7). All of the above estimates prove the claim (4.6). This, together with Lemma 4.2, yields the second-order condition for j_h for 0 < h < h₀ := γ/(4C). The proof is finished.

Existence and convergence results for the semi-discrete case
We first prove the existence of the local solution to the approximate problem (2.5).
Proof of Theorem 2.3. Choose ε > 0 as in Theorem 4.3. We consider the set B_{2ε}(ū) = {u ∈ U_ad : ‖u − ū‖_{L²(I)} ≤ 2ε}, and recall from Section 2 the space V_R = {u ∈ U : V(u) ≤ R}. We will find a minimizer v̄ of j_h in the set W_{ε,R} := B_{2ε}(ū) ∩ V_R, and then show that ‖v̄ − ū‖_{L²(I)} < ε. This will imply that v̄ is a local solution to (2.5).
Since j_h is bounded from below on W_{ε,R}, there exists a minimizing sequence {v_k}. Moreover, since W_{ε,R} is compactly embedded in L^p(I) for any p ∈ [1, ∞), up to a subsequence, there exists a function v̄ ∈ W_{ε,R} such that {v_k} converges to v̄ in L²(I) and a.e. in I. By definition, the function z_k := G_h(v_k) satisfies (2.3) with control v_k for all ϕ ∈ X_h^r. Note that {z_k}_{k∈N} is a bounded set in the finite-dimensional space X_h^r by Theorem 2.4 (see also Lemma B.4). Therefore we can find a subsequence such that z_k converges uniformly to a function z̄ ∈ X_h^r. We claim that z̄ = G_h(v̄). Indeed, since v_k(t) converges a.e. to v̄(t) for t ∈ I and f is Lipschitz continuous, we may pass to the limit k → ∞ in (5.2) for all ϕ ∈ X_h^r. This yields z̄ = G_h(v̄), which enables us to pass to the limit in the cost as well. Together with (5.1), this implies that v̄ ∈ W_{ε,R} is a minimizer of j_h over W_{ε,R}. It remains to show that the minimizer v̄ is attained in the interior set B_ε(ū) = {u ∈ U_ad : ‖u − ū‖_{L²(I)} < ε}. To show this, note that since ‖G(u)‖_{W^{1,∞}(I)} ≤ C for all u ∈ U_ad, we see from Theorem 2.1 that G_h(u) is uniformly close to G(u), where C > 0 is independent of h. Combining this with the Lipschitz continuity of G yields that j_h is uniformly close to j on U_ad. Taking h₀ = γε²/(8C) and using this estimate, we conclude that the minimizer v̄ is attained in B_ε(ū). It follows that j_h(u) ≥ j_h(v̄) for all u ∈ V_R with ‖u − v̄‖_{L²} ≤ ε.
We now provide the details of the convergence estimate of the approximate solutions.
Proof of Theorem 2.4. Analogous to (4.3), we write the discrete first-order necessary optimality condition for ū_h. Inserting u = ū there and summing it with (4.3), we get a variational inequality for ū_h − ū. Now, applying the mean value theorem with a value t ∈ (0, 1), we obtain a quadratic lower bound, where we used Lemma 4.6 in the first inequality and (5.4) in the second inequality. It only remains to estimate the right-hand side, which we express in terms of the adjoint states. From (3.5), we have (5.6), and it follows from (3.9) that (5.7). Here we recall that x̄_h ∈ X_h^r denotes the solution to (2.3) with control ū and initial data x₀. Combining (5.6) and (5.7), applying Hölder's inequality, and using (1.3), we deduce (5.8). Now we apply (3.10) and (3.13) to get (5.9). Combining this with (5.5), we finally obtain ‖ū_h − ū‖_{L²(I)} ≤ C h^{min{k₁,k₂,r+1}}.
This completes the proof.

Existence and convergence results for the fully discrete case
This section is devoted to the existence and convergence results for the fully discrete case. We consider a finite-dimensional space U_h which discretizes the control space U_ad, for example, the space of step functions or the higher-order DG space U_h = X_h^r ∩ U_ad with r ∈ N. We say that ū_h ∈ U_h is a local solution to the fully discrete problem (6.1) in the analogous sense. The existence of a local solution is provided in the following theorem.
Proof. By compactness and continuity, j_h has a minimizer ū_h in B_{2ε}(ū) ∩ U_h, since U_h is finite-dimensional. Next we aim to show that the minimizer ū_h is in fact a local solution. To show this, we recall from (5.3) that there is a value h₀ > 0 such that for h ∈ (0, h₀) the functional j_h is uniformly close to j. Combining this with the minimality of ū_h for j_h in B_{2ε}(ū), we find that ‖ū_h − ū‖_{L²(I)} ≤ ε. Thus ū_h is a local solution of (6.1).
We establish the convergence result in the following theorem.
Theorem 6.2. Assume the same hypotheses on ū ∈ U_ad and λ(ū) as in Theorem 2.4. In addition, suppose that there exist a projection operator P_h : U → U_h and a value a > 0 such that the projection error ‖P_h ū − ū‖_{L²(I)} is of order h^a. Let ū_h ∈ U_h be a local solution to (6.1) constructed in Theorem 6.1. Then an error estimate involving h^a and h^{min{k₁,k₂,r+1}} holds; if we further assume that j′(ū) = 0, the estimate can be improved as follows. Proof. By the first-order optimality conditions on ū and ū_h, we have two variational inequalities, the latter of which can be written with the remainder R_h := j_h′(ū_h)(P_h ū − ū). Summing the two inequalities provides a bound for the error. By the assumption of the theorem, the projection term contributes O(h^a). On the other hand, by applying the mean value theorem and Lemma 4.6, we obtain a quadratic lower bound. Combining this with (6.2) and applying the estimate (5.9) of the previous proof, we arrive, together with (6.3), at the first desired estimate. When we further assume j′(ū) = 0, the remainder R_h improves: using the estimates in (5.8), we find R_h ≤ C h^a (‖ū_h − ū‖_{L²(I)} + h^{min{k₁,k₂,r+1}}).
Inserting this into (6.4) yields the improved estimate. The proof is done.

Numerical experiments
In this section, we present several numerical experiments which validate our theoretical results. The forward-backward DG method [18] is employed to solve the example OCPs.

Linear problem
Let us consider the following simple one-dimensional OCP, which has been used as an example in [36], consisting of maximizing a functional subject to a state equation, with U = L²([0, 1]). Using a similar idea as in Section 3 based on the maximum principle, we can derive the adjoint equation for this optimal control problem. Furthermore, we also find that the optimal solutions ū = −λ̄ and x̄ satisfy (7.2). Thus we obtain the exact solution x̄. Table 1. Discrete L² errors: ‖x̄ − x̄_h‖_{L²(I)} and ‖ū − ū_h‖_{L²(I)}. For fixed r ∈ N, we use X_h^r as the approximation space for U. In Table 1, we report the discrete L² errors between the optimal solutions and their approximations for the above optimal control problem. Here r + 1 is the number of grid points on each time interval I_n, and we used equidistant points for our numerical computations. The numerical results confirm that the error is of order h^{r+1}, as proved in Theorem 2.4.
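To illustrate how a forward-backward solver can exploit an optimality relation of the form ū = −λ̄, the following sketch runs a relaxed fixed-point sweep on a generic linear-quadratic instance. The dynamics and cost below are illustrative choices, not the exact example of [36], and the update rule is a plain relaxation, not the DG scheme of [18].

```python
def sweep_solve(N=400, T=1.0, x0=1.0, iters=200, relax=0.5, tol=1e-10):
    """Forward-backward sweep for the illustrative LQ problem:
    minimize 0.5 * int_0^T (x^2 + u^2) dt subject to x' = -x + u, x(0) = x0.
    Pontryagin stationarity gives the fixed-point relation u = -lambda."""
    h = T / N
    u = [0.0] * (N + 1)
    diff = float("inf")
    for _ in range(iters):
        # forward state sweep (explicit Euler)
        x = [x0]
        for n in range(N):
            x.append(x[-1] + h * (-x[-1] + u[n]))
        # backward adjoint sweep: -lam' = x - lam, lam(T) = 0 (implicit steps)
        lam = [0.0] * (N + 1)
        for n in range(N - 1, -1, -1):
            lam[n] = (lam[n + 1] + h * x[n]) / (1.0 + h)
        # relaxed update toward the optimality condition u = -lam
        u_new = [(1.0 - relax) * u[n] - relax * lam[n] for n in range(N + 1)]
        diff = max(abs(a - b) for a, b in zip(u_new, u))
        u = u_new
        if diff < tol:
            break
    return u, diff

u, diff = sweep_solve()
print(diff)  # fixed-point residual after convergence
```

On this short horizon the composed state-adjoint map is a contraction, so the sweep converges without any line search; for stiffer or longer-horizon problems a smaller relaxation factor is typically needed.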

Nonlinear problem
In this part, we consider the following nonlinear optimal control problem: subject to the state equation In this case, the corresponding adjoint equation and optimal control are given as follows.
and thus the optimal solution x̄ solves the resulting system. In this case, since we have no explicit form of the exact solutions, we take the reference solutions x̄_h (resp. ū_h) with h = 0.1 × 2⁻⁹ in place of x̄ (resp. ū). In Table 2, we report the discrete L² errors between the reference solutions and their approximations.
Next we consider a two dimensional problem given by subject to the state equation In this case, the corresponding adjoint equation and optimal control are given as follows.
This case also has no explicit form of the exact solutions, and so we take the reference solutions x̄_h (resp. ū_h) with h = 0.1 × 2⁻⁹ in place of x̄ (resp. ū). The discrete L² errors between the reference solutions and their approximations are arranged in Table 3. Table 3. Discrete L² errors: ‖x̄ − x̄_h‖_{L²(I)} and ‖ū − ū_h‖_{L²(I)}.

Conclusions
In this paper, we established the analysis of the DG discretization applied to the nonlinear OCP with arbitrary degree r of piecewise polynomials, for nonlinear functions f and g with suitable smoothness assumptions. Under the non-degeneracy condition on an optimal control of the OCP, we obtained the existence of a local solution to the approximate problem and sharp L²-error estimates for the approximate solutions. These results were extended to the fully discrete case, in which the control space is also discretized. Finally, we presented numerical experiments validating our theoretical results. Based on the results of this paper, it would be interesting to analyze the mesh refinement method for the discontinuous Galerkin discretization of optimal control problems. We would like to investigate this problem in the future.

B. Grönwall-type inequality for the DG discretization of ODEs
In this section, we provide a Grönwall-type inequality for the DG discretization of ODEs with inputs. It will be used in Appendix C to establish the differentiability of the discrete control-to-state mapping G_h.
We begin by recalling the following lemma from [34, Lemma 2.4].
The next result is from [34, Lemma 3.1].
We shall use the following Grönwall inequality.
Lemma B.3. Let {a_n}_{n=1}^N and {b_n}_{n=1}^N be sequences of non-negative numbers satisfying b₁ ≤ b₂ ≤ · · · ≤ b_N and b₁ = 0. Assume that for a value h ∈ (0, 1/2) we have (1 − h) b_{n+1} ≤ b_n + a_n for n ∈ N. Then there exists a constant Q > 0, independent of h ∈ (0, 1/2) and N ∈ N, such that b_n ≤ e^{Qnh} Σ_{k=1}^n a_k for any n ∈ N with n ≤ N/h.
Proof. The proof follows by induction. We now state the Grönwall-type inequality.
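Lemma B.3 can be checked numerically: iterating the recursion with equality gives the worst case, and the bound then holds with, e.g., Q = 2, since (1 − h)⁻¹ ≤ e^{2h} for h ∈ (0, 1/2]. The sequences below are arbitrary test data, not from the paper.

```python
import math
import random

random.seed(0)
h, N, Q = 0.1, 30, 2.0
a = [random.random() for _ in range(N)]  # nonnegative data a_1, ..., a_N
b = [0.0]                                # b_1 = 0
for n in range(N - 1):
    # iterate the recursion (1 - h) * b_{n+1} = b_n + a_n with equality
    b.append((b[n] + a[n]) / (1.0 - h))

# check b_n <= e^{Q n h} * sum_{k <= n} a_k with Q = 2
ok = all(b[i] <= math.exp(Q * (i + 1) * h) * sum(a[: i + 1]) for i in range(N))
print(ok)  # the Gronwall-type bound holds for this data
```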
for all ϕ ∈ X_h^r. Then there exists a constant C > 0, independent of h > 0, such that the stated stability estimate holds for all u₁, u₂ ∈ U_ad and all h > 0 small enough.
As a corollary, we have the following Lipschitz estimates.
Lemma B.5. For u, v ∈ U_ad, the following Lipschitz estimates hold. Proof. Let us denote x = G_h(u) and x̂ = G_h(v). It follows from (2.3) that the difference x − x̂ satisfies a discrete equation, and by (1.3) there exists a constant C > 0 bounding its right-hand side. Applying Lemma B.4, we get the first inequality. For the second one, we denote λ = λ_h(u) and λ̂ = λ_h(v). Then we see from Lemma 3.8 that the difference λ − λ̂ satisfies a discrete adjoint equation. Applying Lemma B.4 again in a backward way (see Lemma 3.7), we obtain ‖λ − λ̂‖_{L^∞(I)} ≤ C ‖(∂_x f(·, x, u) − ∂_x f(·, x̂, v)) λ̂‖_{L²(I)}, where we used ‖λ̂‖_{L^∞(I)} ≤ C ‖∂_x g‖_{L^∞(I)}, due to Lemma B.4. This completes the proof.

C. Differentiability of discrete control-to-state mapping
This section is devoted to proving that the discrete control-to-state mapping G_h is twice differentiable. We also obtain the first and second derivatives of G_h.
Theorem C.1. We denote x_h^s = G_h(u + sv) and let y_h ∈ X_h^r be the solution of the following discretized equation, where x_h = G_h(u). Then we have (d/ds) x_h^s(t)|_{s=0} = y_h(t).
It then follows that the remainder splits into two terms A₁ and A₂. We obtain from Lemma C.2 the estimate |y_h^s(t) − y_h(t)| ≤ Cs. Using this estimate and the identity (d/ds) x_h^s(t)|_{s=0} = y_h(t) from Lemma C.1, an elementary calculation shows that |A₁(t)| ≤ Cs² and |A₂(t)| ≤ Cs². Putting these estimates into (C.5) and using Lemma B.4, we find y_h^s(t) − y_h(t) − s z_h(t) = O(s²).
This yields (d/ds) y_h^s(t)|_{s=0} = z_h(t), and so (d²/(ds)²) G_h(u + sv)|_{s=0} = z_h(t), since y_h^s(t) = (d/ds) G_h(u + sv).
The proof is done.
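The directional differentiability of the discrete control-to-state map can also be checked numerically. The sketch below uses an r = 0 (implicit-Euler-type) discretization of the scalar ODE x′ = −x + sin(u), an illustrative nonlinearity not taken from the paper: the discrete sensitivity y_h solves the linearized scheme and matches a difference quotient of G_h.

```python
import math

def G_h(u, x0, h):
    # r = 0 discrete solve of x' = -x + sin(u):
    # x_n = (x_{n-1} + h * sin(u_n)) / (1 + h)
    x = [x0]
    for un in u:
        x.append((x[-1] + h * math.sin(un)) / (1.0 + h))
    return x

def sensitivity(u, v, h):
    # linearized scheme for y_h = G_h'(u)v:
    # y_n = (y_{n-1} + h * cos(u_n) * v_n) / (1 + h), y_0 = 0
    y = [0.0]
    for un, vn in zip(u, v):
        y.append((y[-1] + h * math.cos(un) * vn) / (1.0 + h))
    return y

N, T, x0 = 100, 1.0, 1.0
h = T / N
u = [0.3] * N
v = [1.0] * N
s = 1e-5
fd = (G_h([a + s * c for a, c in zip(u, v)], x0, h)[-1] - G_h(u, x0, h)[-1]) / s
y = sensitivity(u, v, h)
print(abs(fd - y[-1]))  # O(s): y_h matches the difference quotient of G_h
```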

D. Derivations of the first order derivative of cost functionals
In this part, we give the proofs of Lemmas 3.3 and 3.5. Before presenting them, we explain how to derive the discrete adjoint Eq (3.8) from the Lagrangian associated with (2.5).
Let us first write the Lagrangian of the problems (1.1) and (3.7) as in (D.1), for λ_h ∈ X_h^r, where the bilinear operator B(·, ·) is given by (3.7). If we compute the functional derivative of the Lagrangian (D.1) with respect to the adjoint state λ_h, then δL_h/δλ_h = 0 leads to (3.7). We now derive the equation of the discrete adjoint state. Using integration by parts, we can rewrite the Lagrangian (D.1), and this further implies the identity (D.2) for all ψ_h ∈ X_h^r, where we applied integration by parts to (ψ_h, λ_h)_{I_n} to derive the second equality. This equality corresponds to the adjoint Eq (3.8).
Proof of Lemma 3.3. In order to compute the functional derivative of j with respect to u, we consider j(u + sv) = J(u + sv, G(u + sv)) with v ∈ U and s ∈ R₊. If we set x^s(t) := G(u(t) + sv(t)), it follows from Lemma A.1 that y = (d/ds) x^s(t)|_{s=0} satisfies the linearized state equation with the initial condition y(0) = 0. Recall from (3.4) that the adjoint state λ(t) = λ(u)(t) satisfies the adjoint equation. Since x^s(t) is differentiable with respect to s, the cost j(u + sv) is differentiable with respect to s, and its derivative is computed as j′(u)v = ∫₀ᵀ ∂_u g(t, x(t), u(t)) v(t) dt + ∫₀ᵀ ∂_x g(t, x(t), u(t)) · y(t) dt. Proof of Lemma 3.5. The proof is very similar to that of Lemma 3.3. We consider j_h(u + sv) = J(u + sv, G_h(u + sv)) with v ∈ U and s ∈ R₊. We recall from Lemma C.1 that the function x_h^s := G_h(u + sv) is differentiable at s = 0 with (d/ds) x_h^s|_{s=0} = y_h, where y_h ∈ X_h^r satisfies the following equation. Using this, we obtain j_h′(u)v = ∫₀ᵀ ∂_u g(t, x_h(t), u(t)) v(t) dt + ∫₀ᵀ ∂_x g(t, x_h(t), u(t)) · y_h(t) dt. (D.6)
We then take ψ_h = y_h in (D.2). On the other hand, using integration by parts, we obtain the corresponding identity, where B(·, ·) is the bilinear form appearing in (3.6). This yields the desired formula.