Norm-minimal Neumann boundary control of the wave equation

We consider the problem of controlling a vibrating string to rest in a given finite time. The string is fixed at one end and controlled by Neumann boundary control at the other end. We give an explicit representation of the L2-norm minimal control in terms of the given initial state. We show that if the initial state is sufficiently regular, the same control is also Lp-norm minimal for p > 2.


Introduction
We consider a system that is governed by the one-dimensional wave equation with homogeneous Dirichlet boundary conditions at one end and Neumann boundary control at the other end. For a sufficiently long given finite time interval [0, T], the system is exactly controllable (see for example [4]). In this paper we consider the optimal control problem of finding a control with minimal L2-norm that steers the system to rest at time T. We give an explicit representation of the optimal control in terms of the initial state. In this way, for each control time T, we give an explicit representation of the linear map that is determined by the optimal control problem and maps the initial state to the corresponding optimal control. For the case of two-sided boundary control, the problem has already been studied in [4]. However, in contrast to [4], in this paper we give a completely explicit solution of our optimal control problem in terms of the initial data for arbitrarily large control times.
For sufficiently regular initial states, we also consider the optimal control problem where the L2-norm in the objective function is replaced by the Lp-norm (p > 2). We show that this does not have any influence on the optimal control as long as p is finite. For p = ∞, the solution is different, since in general the optimal control is not uniquely determined: in this case we give the element of the solution set with minimal L2-norm. This yields an explicit representation of the minimal value in terms of the initial state, which allows the precise treatment of problems of time-optimal control subject to L∞-norm control constraints.
In the problem of time-optimal control, an upper bound for the control norm is prescribed as a constraint for the admissible controls, whereas the terminal time T is considered as an additional decision variable. The aim is to find an admissible control that steers the system to a position of rest in minimal time (see [9]). For the solution of the problem of time-optimal control, the problems of norm-minimal control can be used as time-parametric auxiliary problems. Based upon this approach, a Newton method for the computation of time-optimal boundary controls of one-dimensional vibrating systems is given in [3]. A semismooth Newton method based upon transformation of the time interval, penalization of the terminal constraint and Tychonov regularization for the control is presented in [11]. A precise definition of time-optimal controls in the general context of strongly continuous semigroups is given in [12].

M. Gugat, Department Mathematik, Friedrich-Alexander Universität Erlangen-Nürnberg, Cauerstr. 11, 91058 Erlangen, Germany. E-mail: martin.gugat@fau.de
In the proofs of our results, we use the method of moments, which has already been considered in [15] for the treatment of problems of optimal exact control of hyperbolic partial differential equations. In [8], the method has also been used to study problems of time-optimal control. The relation between the optimal control problems for the wave equation and the corresponding discretized problems obtained by finite difference methods has been studied in [16]. The analysis for finite element semidiscretizations is given in [13]. Optimality conditions for the optimal control problem for the multidimensional wave equation with Neumann boundary control and pointwise state constraints have been studied in [14].
A numerical method for the solution of the corresponding problem of L2-norm minimal Dirichlet boundary control, where the control acts at both ends of the considered interval, has been presented in [2].

This paper has the following structure. In Sect. 2, the L2-case is studied: the optimal control problem is defined and the optimal control is given in Theorem 2.1. In Sect. 3, the corresponding result for the case of Lp-norm minimal control is presented in Theorem 3.1. In Sect. 4, we study the L∞-case; the result is given in Theorem 4.1. In addition, the problem of time-optimal control is discussed. Section 5 contains the proof of Theorem 2.1. The proof of Theorem 3.1 is given in Sect. 6 and the proof of Theorem 4.1 in Sect. 7.

The problem of L2-norm minimal control
Let y0 ∈ H1(0, 1) with y0(0) = 0 and y1 ∈ L2(0, 1) be given, and let a control time T ≥ 2 be given. We consider the problem of optimal exact control (EC). The objective in problem (EC) is to minimize the control cost, given by the L2-norm of the control function, which is equal to the L2-norm of the normal derivative of the solution at the boundary point x = 1.
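In this notation, problem (EC) can be sketched as follows. This is a reconstruction from the surrounding description (wave equation on (0, 1), homogeneous Dirichlet condition at x = 0, Neumann control at x = 1, rest at time T), so the paper's own display may differ in details:

```latex
\min_{u \in L^2(0,T)} \int_0^T u(t)^2 \,\mathrm{d}t
\quad \text{subject to} \quad
\begin{cases}
y_{tt}(t,x) = y_{xx}(t,x), & (t,x) \in (0,T) \times (0,1),\\
y(t,0) = 0, \quad y_x(t,1) = u(t), & t \in (0,T),\\
y(0,x) = y_0(x), \quad y_t(0,x) = y_1(x), & x \in (0,1),\\
y(T,x) = 0, \quad y_t(T,x) = 0, & x \in (0,1).
\end{cases}
```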

The optimal control
In this section, we present an explicit representation of the solution of problem (EC).
Theorem 2.1 states that the optimal control u0 that solves problem (EC) is 4-periodic and gives its values explicitly in terms of the initial data for l ∈ {0, 1, . . . , k − 1} and t ∈ (0, 2).

Remark 2.2 If, in the spirit of moving-horizon control, at each moment we use the control u0 with the current state as initial state, we get a feedback law which is a well-known exponentially stabilizing feedback (see [7]).
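The extension rule u0(t + 2) = −u0(t), used in the computations of Sect. 5, already forces 4-periodicity, since u0(t + 4) = −u0(t + 2) = u0(t). A minimal numerical sketch of this extension, with a hypothetical profile g on [0, 2) chosen only for illustration:

```python
import math

def extend_antiperiodic(g, t):
    """Extend g, given on [0, 2), by the rule u(t + 2) = -u(t).

    The resulting function is automatically 4-periodic, since
    u(t + 4) = -u(t + 2) = u(t).
    """
    # reduce t to [0, 4), then apply one sign flip if t lands in [2, 4)
    r = t % 4.0
    if r < 2.0:
        return g(r)
    return -g(r - 2.0)

# hypothetical profile on [0, 2) -- illustration only, not the paper's u0
g = lambda t: math.sin(math.pi * t / 2.0) + 0.5 * t

u = lambda t: extend_antiperiodic(g, t)
```

The same reduction also defines the control for t < 0, as used in Sect. 5.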

Remark 2.3
The optimal value ω(T) of problem (EC) can be given explicitly in terms of the initial data. This can be seen as follows.
Now we insert the representation of u0 that is given in Theorem 2.1 and obtain an expression in terms of the integrals of [y0(s) − y1(s)]² and [y0(s) + y1(s)]², and the representation of ω(T) follows. If T − 2k = 0, that is, for T = 2k, this expression simplifies accordingly.

The problem of L p -norm minimal control
For p ∈ [2, ∞), a given initial position y0 ∈ W1,p(0, 1) with y0(0) = 0 and a velocity y1 ∈ Lp(0, 1), we consider the problem ECP(p). In the following theorem, we give the solution of problem ECP(p). It turns out that for sufficiently regular initial data, the optimal control does not depend on p.
The optimal value of problem (ECP(p)) is given explicitly in terms of the initial data; if T − 2k = 0, that is, for T = 2k, the expression simplifies accordingly.

Let us consider some examples. Example 3.3 Let y0(x) = x, y1 = 0. Then the optimal control is given explicitly for l ∈ {0, 1, . . . , k − 1} and t ∈ (0, 2). Note that although the initial data are given by smooth functions, the corresponding optimal control u0 from (2) is discontinuous if T > 2, on account of the jumps at the points of the form t = 2l ∈ (0, T), where l is a natural number. The discontinuities of the control generate jumps in yx, where y is the state that is generated by the optimal control. This generates kinks in the function y.
If T = 2k, Eq. (2) implies that the optimal control is a bang-bang control. Figure 1 shows the optimal control u0 for T = 10. The control is given by a discontinuous function that is piecewise constant. In the definition of the optimal control problem, only Lp-regularity of the control is required. The jumps in the optimal control are generated by the minimization of the Lp-norm of the control: the function |u0|^p is constant for T = 2k. It has minimal Lp-norm among all admissible controls. The objective function does not penalize jumps in the control. Figure 2 shows the state y that is generated by the optimal control u0 for T = 10. The time axis goes from 0 to T = 10 from the left-hand side to the right-hand side in the upper picture and from T = 10 to 0 in the lower picture.
If T = 2k, this implies that the optimal control is u0(t) = (1/(2k)) sin(πt/2). For T = 10, this yields the optimal control u0(t) = (1/10) sin(πt/2). Note that the one-sided limits of u0 at t = 1 coincide; therefore, u0 is continuous at t = 1. Moreover, the same holds at the remaining matching points, so at the points t ∈ {1, 2, 3, . . .} ∩ [0, T], no jumps appear in the control. Since y0 and y1 are smooth, this yields the continuity of the optimal control u0.
Since the initial state is also compatible with the homogeneous Dirichlet boundary condition at x = 0, this implies that the state generated by the optimal control is continuously differentiable. Figure 3 shows the optimal control u 0 for T = 10. Figure 4 shows the state y that is generated by the optimal control u 0 for T = 10. The time axis goes from 0 to T = 10 from the left-hand side to the right-hand side. The front axis corresponds to the boundary point x = 0 where the zero position is prescribed by the Dirichlet boundary condition.
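In this example the closed-form control is smooth and compatible with the extension rule u0(t + 2) = −u0(t) from Sect. 5, which is exactly why no jumps appear. A small numerical check of this compatibility, with k = 5 (that is, T = 10):

```python
import math

def u0(t, k=5):
    """Closed-form optimal control from the example for T = 2k (here k = 5)."""
    return math.sin(math.pi * t / 2.0) / (2.0 * k)

# the extension rule u(t + 2) = -u(t) is satisfied by the closed-form
# expression itself, since sin(pi (t + 2) / 2) = -sin(pi t / 2),
# so the periodic extension introduces no jumps:
mismatch = max(abs(u0(t + 2.0) + u0(t)) for t in [0.1 * j for j in range(20)])
```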

The problem of L ∞ -norm minimal control
In this section, we define the problem of L∞-norm minimal control. Let y0 ∈ W1,∞(0, 1) with y0(0) = 0 and y1 ∈ L∞(0, 1) be given. The problem ECP(∞) of L∞-norm minimal optimal exact Neumann boundary control is defined analogously to ECP(p), with the L∞-norm of the control as the objective function.

The optimal control
In the following theorem, we give the optimal value of problem ECP(∞) explicitly in terms of the initial data. Moreover, also in this case the control u0 is an optimal control.

Theorem 4.1 Let y0 ∈ W1,∞(0, 1) with y0(0) = 0 and y1 ∈ L∞(0, 1) be given. Then the optimal value ν(T) of problem ECP(∞) is given explicitly in terms of the initial data. This allows us to determine the solutions of problems of time-optimal control of the form (3), with a given bound K > 0 for the L∞-norm of the control. Problems of this type have been considered in [11].
Thus we see that in this case ν is piecewise constant, and the values jump down at the times T = 2k, where ν(2k) = 1/(2k). In particular, the function ν only attains a countable number of values of the form 1/(2k), k ∈ {1, 2, 3, . . .}. This implies that for K ≥ 1/2, the optimal control times that solve (3) have the form T = 2k, and the controls u0 for T = 2k are time-optimal for k ∈ [1/(2K), 1/(2K) + 1). Note that these are bang-bang controls; see Example 3.3. In this context, the situation where ν is not continuous is called the non-normal case. In contrast to the situation in the L∞-case, in the Lp-case with p ∈ [2, ∞), the corresponding function ω(T) is of integral type and continuous. If it is strictly decreasing, the situation is called the normal case.
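Under the description in this remark, taken here as an assumption for the sketch (ν(T) = 1/(2k) for T ∈ [2k, 2k + 2)), the minimal control time for a given bound K > 0 is the smallest T = 2k with 1/(2k) ≤ K:

```python
import math

def nu(T):
    """Piecewise-constant minimal L-infinity control norm, as described in
    the remark: nu(T) = 1/(2k) for T in [2k, 2k + 2) (sketch assumption)."""
    k = int(T // 2)
    if k < 1:
        raise ValueError("need a control time T >= 2")
    return 1.0 / (2.0 * k)

def minimal_time(K):
    """Smallest control time T = 2k with nu(T) = 1/(2k) <= K."""
    k = math.ceil(1.0 / (2.0 * K))
    return 2 * k
```

For instance, a bound K = 0.25 forces k = 2, that is, minimal time T = 4.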
Remark 4.5 Theorem 4.1 is helpful for the analysis of the problem of L2-norm minimal Neumann boundary control subject to L∞-norm control constraints, since the optimal value of ECP(∞) allows us to determine whether box constraints of the form −|ua| ≤ u ≤ |ua| on [0, T], with a real constant ua > 0, admit feasible exact controls (see also [5] for the problem of L2-norm minimal Dirichlet boundary control subject to L∞-constraints). An admissible control exists only if ν(T) ≤ |ua|. In [10], the corresponding problems of distributed optimal control subject to box constraints for the control are considered.

Proof of Theorem 2.1
We start with the proof of Theorem 2.1. The state that is generated by the control u0 is the solution of the initial-boundary-value problem (4).

Weak solution of the initial-boundary-value problem
Now we determine a series representation of the solution y of (4). Let ϕ : [0, 1] → R be a twice differentiable test function that satisfies the boundary conditions ϕ(0) = 0 and ϕx(1) = 0. Then, using integration by parts, we obtain a weak formulation for the solution y of (4). To choose suitable test functions, we consider the corresponding eigenvalue problem. The eigenfunctions are ϕn(x) = √2 sin((π/2 + nπ)x), with the eigenvalues λn = (π/2 + nπ)² and the normalization ∫₀¹ ϕn(x)² dx = 1, n ∈ {0, 1, 2, . . .}. Note that the functions (ϕn)∞n=0 form a complete orthonormal system in L²(0, 1). Now we want to determine a sequence of functions αn : [0, T] → R such that the solution of our initial-boundary-value problem can be represented as the series (11), y(t, x) = Σ∞n=0 αn(t) ϕn(x). From (5)–(9), we obtain the sequence of differential equations αn″(t) = −λn αn(t) + ϕn(1) u(t), with initial conditions αn(0), αn′(0) given by the expansion coefficients of y0 and y1. Hence we get αn(t) = αn(0) cos(√λn t) + (αn′(0)/√λn) sin(√λn t) + (ϕn(1)/√λn) ∫₀ᵗ sin(√λn (t − τ)) u(τ) dτ. With this definition of the functions αn(t), we get the series representation (11) for the solution y(t, x).
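The stated properties of the eigenfunctions can be checked numerically. The following sketch verifies the normalization and pairwise orthogonality of ϕn(x) = √2 sin((π/2 + nπ)x) on (0, 1) with a midpoint rule (the quadrature resolution N is an arbitrary choice):

```python
import math

def phi(n, x):
    """Eigenfunctions phi_n(x) = sqrt(2) sin((pi/2 + n pi) x)."""
    return math.sqrt(2.0) * math.sin((math.pi / 2.0 + n * math.pi) * x)

def inner(n, m, N=20000):
    """Midpoint-rule approximation of the L2(0, 1) inner product."""
    h = 1.0 / N
    return h * sum(phi(n, (j + 0.5) * h) * phi(m, (j + 0.5) * h)
                   for j in range(N))
```

Note also that phi(n, 1) = √2 (−1)^n, which is where the factor (−1)^n in the later computations comes from.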

End conditions: the state (y(T ), y t (T )) generated by u 0
Now we show that the control function u0 ∈ L2(0, T) steers the system from the initial state (y0, y1) to a position of rest at time T. First, we consider the position y(T, ·). Let n ∈ {0, 1, 2, . . .} be given. We consider the integral term I0 that appears in the definition of αn(T). Note that sin((π/2 + nπ)(s + 2)) = −sin((π/2 + nπ)s) and u0(T − (s + 2)) = −u0(T − s). This allows us to reduce I0 to an integral over an interval of length 2, and with the definition of the function d, this yields a compact representation of I0. Now we extend u0 to a 4-periodic function that is defined also for t < 0 by its values for n ∈ {0, 1, . . . , k − 1}, t ∈ (0, 2), and we extend d to a 2-periodic function. Since the integrand is 2-periodic, we get a shifted representation by substitution, and with the definition of u0 we can decompose I0 into the two terms Z0 and Z1. Now we compute Z0. By substitution in the second integral, we use the identity sin((π/2 + nπ)(T − 2k − s)) − sin((π/2 + nπ)(T − 2k + s)) = −2 cos((π/2 + nπ)(T − 2k)) sin((π/2 + nπ)s). Then integration by parts, the relation cos((π/2 + nπ)(1 − τ)) = (−1)ⁿ sin((π/2 + nπ)τ) and the substitution s = 1 − τ yield an explicit expression for Z0. Now we compute Z1. By substitution in the second integral and the substitution s = 1 − τ, we obtain an explicit expression for Z1. From (12), we can compute αn(T) = αn(2k + (T − 2k)). Replacing the integral I0 by the expression obtained above, we see that αn(T) = 0. Thus we have y(T, ·) = 0. Now we consider the velocity yt(T, ·). For this purpose, we consider the integral term that appears in the definition of αn′(T). With a similar computation as for I0, and with the definition of u0, we decompose this term; computing Y0 by substitution in the second integral, we conclude in the same way that αn′(T) = 0, that is, yt(T, ·) = 0.
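The two trigonometric identities that drive this computation can be verified numerically (writing a_n = π/2 + nπ; the sample points below are arbitrary):

```python
import math

def a(n):
    """Frequencies a_n = pi/2 + n*pi of the eigenfunctions."""
    return math.pi / 2.0 + n * math.pi

# identity 1: sin(a_n (s + 2)) = -sin(a_n s), since 2 a_n = pi + 2 n pi
err1 = max(abs(math.sin(a(n) * (s + 2.0)) + math.sin(a(n) * s))
           for n in range(5) for s in [0.1, 0.7, 1.3, 1.9])

# identity 2: sin(a_n (x - s)) - sin(a_n (x + s)) = -2 cos(a_n x) sin(a_n s)
err2 = max(abs(math.sin(a(n) * (x - s)) - math.sin(a(n) * (x + s))
               + 2.0 * math.cos(a(n) * x) * math.sin(a(n) * s))
           for n in range(5) for x in [0.3, 1.1] for s in [0.2, 0.9])
```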

The control u0 has minimal L2-norm among all successful controls

In this section, we show that the control u0 solves the optimal control problem (EC).
For u ∈ L2(0, T), we can represent the L2-norm of u in the form (18). Moreover, if u is admissible in the sense that it generates a state that satisfies the end-conditions y(T, ·) = 0 = yt(T, ·), the series representation of the state y and the computations in Sect. 5.2 imply that u solves the following moment equations for n ∈ {0, 1, 2, 3, . . .}.
For s ∈ (0, 2), define H as in (19). Then H satisfies the moment equations (20), (21) (for n ∈ {0, 1, 2, 3, . . .}). In fact, the sequence of moment equations (20), (21) determines the function H uniquely. Therefore, since the control u0 is admissible, as we have shown in Sect. 5.2, we get the equation (22) for almost every s ∈ (0, 2), for all admissible controls u. In the next section, we show that for almost every s ∈ (0, 2), the optimal control u0 is the solution of the constraint (22) for which the value of the integrand is minimal.

A finite dimensional parametric optimization problem
Let s ∈ (0, 2) and the value H(s) ∈ R be given. We consider the finite-dimensional optimization problem P(s). The solution is given in Lemma 2.7 in [4]. Due to the definition of u0, for s ∈ (0, 2) and l ∈ {0, 1, . . . , d(s) − 1}, the values u0(T − s − 2l) satisfy the corresponding optimality relations; therefore, Eq. (22) together with (24) implies that for almost every s ∈ (0, 2), u0 is the admissible control function for which u0(T − s − 2l) (l ∈ {0, 1, . . . , d(s) − 1}) solves problem P(s). Hence u0 is the admissible control that minimizes the integrand (18) in the objective function almost everywhere. Hence u0 is the admissible control function with minimal L2-norm, that is, u0 solves problem (EC). Thus we have proved Theorem 2.1.
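Lemma 2.7 of [4] is not restated here, but its content can be illustrated on a model version of P(s): minimize a sum of squares subject to a single linear constraint with coefficients of modulus one (the sign pattern c_l and the value H below are hypothetical illustration data, not taken from the paper). By the Cauchy-Schwarz inequality, the minimizer spreads H equally over the d entries:

```python
import random

def solve_P(H, c):
    """Minimize sum(u_l^2) subject to sum(c_l * u_l) = H with |c_l| = 1.

    By Cauchy-Schwarz the minimum is H^2 / d, attained at u_l = c_l * H / d.
    """
    d = len(c)
    return [cl * H / d for cl in c]

random.seed(0)
c = [1, -1, 1, -1, 1]     # hypothetical sign pattern of length d = d(s)
H = 2.0                   # hypothetical value H(s)
u_star = solve_P(H, c)
best = sum(ul * ul for ul in u_star)

# compare against random feasible competitors: perturbation directions with
# sum(c_l * v_l) = 0 keep the constraint satisfied
worse = True
for _ in range(200):
    v = [random.uniform(-1, 1) for _ in c]
    shift = sum(cl * vl for cl, vl in zip(c, v)) / len(c)
    v = [vl - cl * shift for cl, vl in zip(c, v)]  # project onto constraint kernel
    u = [ul + vl for ul, vl in zip(u_star, v)]
    worse = worse and sum(x * x for x in u) >= best - 1e-12
```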

Proof of Theorem 3.1
The proof of Theorem 3.1 is quite similar to the proof of Theorem 2.1. Since the given initial data satisfy the assumptions of Theorem 2.1, we can use the series (11) to represent the solution of the initial-boundary-value problem (4). As in Sect. 5.2, this implies that the state y generated by u0 satisfies the end-conditions y(T, ·) = 0, yt(T, ·) = 0.

The control u0 has minimal Lp-norm among all successful controls
In this section, we show that the control u0 solves the optimal control problem (ECP(p)). For u ∈ Lp(0, T), we can represent the Lp-norm of u in the form (27). Moreover, if u is admissible in the sense that it generates a state that satisfies the end-conditions y(T, ·) = 0 = yt(T, ·), the series representation of the state y and the computations in Sect. 5.2 imply that u solves the following moment equations for n ∈ {0, 1, 2, 3, . . .}.
These are the equations (28) and (30), which prescribe the values of the integrals ∫₀ᵀ u(T − s) sin((π/2 + nπ)s) ds and ∫₀ᵀ u(T − s) cos((π/2 + nπ)s) ds in terms of the initial data. For s ∈ (0, 2), define H as in (19). Then, for all n ∈ {0, 1, 2, 3, . . .}, H satisfies the moment equations (20), (21). In fact, the sequence of moment equations (20), (21) determines the function H uniquely. Therefore, since the control u0 is admissible, we get the equation (32) for almost every s ∈ (0, 2), for all admissible controls u. In the next section, we show that for almost every s ∈ (0, 2), the optimal control u0 is the solution of the constraint (32) for which the value of the integrand is minimal.

A finite dimensional parametric optimization problem
Let s ∈ (0, 2) and the value H(s) ∈ R be given. We consider the finite-dimensional optimization problem PP(s, p). The solution is the same as for the case p = 2 and is given in Lemma 2.7 in [4]. Due to the definition of u0, for s ∈ (0, 2) and l ∈ {0, 1, . . . , d(s) − 1}, the values u0(T − s − 2l) satisfy the corresponding optimality relations; therefore, Eq. (32) together with (34) implies that for almost every s ∈ (0, 2), u0 is the admissible control function for which u0(T − s − 2l) (l ∈ {0, 1, . . . , d(s) − 1}) solves problem PP(s, p). Hence u0 is the admissible control that minimizes the integrand (27) in the objective function almost everywhere. Hence u0 is the admissible control function with minimal Lp-norm, that is, u0 solves problem (ECP(p)). Thus we have proved Theorem 3.1.
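The p-independence of the minimizer can be illustrated on the same model problem as in Sect. 5: by the power-mean inequality, (1/d) Σ|u_l|^p ≥ ((1/d) Σ|u_l|)^p ≥ (|H|/d)^p, with equality for entries of equal modulus |H|/d. A numerical sketch (the sign pattern c and value H are hypothetical illustration data):

```python
import random

def objective(u, p):
    """Model of the pointwise Lp cost: sum of p-th powers of the moduli."""
    return sum(abs(x) ** p for x in u)

random.seed(1)
d, H = 4, 1.5
c = [1, -1, 1, -1]                      # hypothetical sign pattern
u_star = [cl * H / d for cl in c]       # candidate minimizer, independent of p

ok = True
for p in (2, 4, 8):
    best = objective(u_star, p)
    for _ in range(200):
        # random feasible competitor: project the perturbation onto the
        # kernel of the constraint sum(c_l * u_l) = H
        v = [random.uniform(-1, 1) for _ in c]
        shift = sum(cl * vl for cl, vl in zip(c, v)) / d
        v = [vl - cl * shift for cl, vl in zip(c, v)]
        u = [ul + vl for ul, vl in zip(u_star, v)]
        ok = ok and objective(u, p) >= best - 1e-12
```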

Proof of Theorem 4.1
In contrast to the problems for p < ∞ that we have considered before, if T > 2 the solution of problem ECP(∞) is in general not uniquely determined. Still, for the proof of Theorem 4.1, we proceed in a similar way as in the proof of Theorem 2.1. Since the given initial data satisfy the assumptions of Theorem 2.1, we can use the series (11) to represent the solution of the initial-boundary-value problem (4). As in Sect. 5.2, this implies that the state y generated by u0 satisfies the end-conditions y(T, ·) = 0, yt(T, ·) = 0.

The control u0 has minimal L∞-norm among all successful controls

In this section, we show that the control u0 is a solution of the optimal control problem (ECP(∞)). For u ∈ L∞(0, T), we can represent the L∞-norm of u in the form (35). Moreover, if u is admissible in the sense that it generates a state that satisfies the end-conditions y(T, ·) = 0 = yt(T, ·), the series representation of the state y and the computations in Sect. 5.2 imply that u solves the moment equations (28) and (30). Hence for almost every s ∈ (0, 2), u0 is the admissible control function for which u0(T − s − 2l) (l ∈ {0, 1, . . . , d(s) − 1}) solves problem PP(s, ∞). Hence u0 is an admissible control that minimizes the maximum (37), which also appears implicitly in the essential supremum that defines the objective function (35), almost everywhere. In fact, since u0 is the solution of (EC), that is, the admissible control function with minimal L2-norm, it is clear that among all solutions of problem (ECP(∞)), u0 is the uniquely determined element with minimal L2-norm. Thus we have proved Theorem 4.1.
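The L∞-analogue of the model problem from Sects. 5 and 6 shows where the pointwise minimal value comes from: any vector satisfying the constraint Σ c_l u_l = H with |c_l| = 1 has maximum modulus at least |H|/d, and the equal-split vector attains this bound. The sign pattern c and value H below are hypothetical illustration data:

```python
import random

def linf_value(u):
    """Model of the pointwise L-infinity cost: the maximum modulus."""
    return max(abs(x) for x in u)

random.seed(2)
d, H = 5, 1.0
c = [1, -1, 1, -1, 1]                 # hypothetical sign pattern
u_star = [cl * H / d for cl in c]     # attains max |u_l| = |H| / d

# lower bound check: |H| = |sum(c_l u_l)| <= d * max |u_l| for every
# feasible vector, so max |u_l| >= |H| / d
lower_bound_holds = True
for _ in range(200):
    v = [random.uniform(-1, 1) for _ in c]
    shift = (H - sum(cl * vl for cl, vl in zip(c, v))) / d
    u = [vl + cl * shift for cl, vl in zip(c, v)]  # arbitrary feasible point
    lower_bound_holds = lower_bound_holds and linf_value(u) >= abs(H) / d - 1e-12
```

Non-uniqueness of the L∞-minimal control then stems from the essential supremum over s: at parameters s where the pointwise bound stays below the overall supremum, the control values may be varied without changing the objective.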

Conclusion
We have given optimal Neumann boundary controls explicitly in terms of the initial state for problems with the wave equation in the 1-d case. These results are useful for gaining insight into the structure of the optimal Neumann controls for the wave equation. Moreover, they are also helpful for testing numerical methods. The extension of these results to the case of string networks (see [1]) is an open problem. There is no obvious extension of the results to higher dimensions; nevertheless, it is an interesting question under which conditions, for sufficiently smooth data, the L2-norm minimal control is also Lp-norm minimal.