Nonlinear plate control without linearization

Abstract. In this paper, an optimal vibration control problem for a nonlinear plate is considered. In order to obtain the optimal control function, well-posedness and controllability of the nonlinear system are investigated. The performance index functional of the system, to be minimized with a minimum level of control, is chosen as the sum of quadratic functionals of the displacement and the velocity of the plate; a quadratic functional of the control function is added to the performance index functional as a penalty term. By using a maximum principle, the nonlinear control problem is transformed into the solution of a system of partial differential equations involving state and adjoint variables linked by initial, boundary and terminal conditions. Hence, it is shown that the optimal control of a nonlinear system, together with an analytical expression for the optimal control function, can be obtained without linearization of the nonlinear term.


Introduction
Linear vibration control of civil structures has been studied in a large number of papers, e.g., [3]-[14], but nonlinear vibration control remains a challenging problem. Controlling and analyzing nonlinear systems is difficult owing both to the presence of nonlinearity and to the extensive computation required, so researchers generally tend to transform nonlinear systems into linearized ones [26]-[29]. However, the linearized form of a nonlinear system, no matter how accurate, cannot preserve the crucial features of the original system. For this reason, nonlinear control of civil structures has attracted great attention [15]-[22]. The present paper therefore deals with the vibration control of a nonlinear system and shows that controlling a nonlinear system via the maximum principle [3]-[13] is both possible and straightforward without linearization.
The original contribution of the present study is the following: existing papers treat the control of nonlinear systems either by using a linearized form of the system or through lengthy computational procedures, whereas the present paper gives a simpler algorithm for obtaining the control function analytically, without linearization or lengthy computation. Moreover, the maximum principle is employed here to control the nonlinear system directly, whereas in the literature it is used for linear or linearized systems. The results obtained have the potential to be extended to the control of other nonlinear systems.
In this study, the vibration control problem for a nonlinear plate is considered. For controlling the system, well-posedness and controllability of the nonlinear system are investigated. The performance index functional of the nonlinear plate consists of weighted quadratic functionals of the displacement and velocity, and also includes a quadratic functional of the control function as a penalty term. By means of the maximum principle, the optimal control function is obtained analytically for the nonlinear plate without linearization.

Mathematical formulation of the problem
In this study, the vibration control problem for a nonlinear plate on the time interval $(0, t_f)$ is considered as in [2]. Here $N_1$ is the normal load per unit length in the $x$ direction, $N_2$ is the normal load per unit length in the $y$ direction, $D = Eh^3/(12(1-\nu^2))$ is the plate rigidity, in which $E$, $h$ and $\nu$ are the modulus of elasticity, the plate thickness and Poisson's ratio, respectively, $\rho$ is the material density, $T$ is the normal load per unit area, $\varepsilon > 0$ is a small parameter, $\gamma$ is the damping coefficient of the plate, and $f$ is the control function. In [2], the existence and uniqueness of the solution to the nonlinear homogeneous plate equation is given. In order to keep the present study self-contained, let us recall the existence of the solution. Define the scalar product and norm in the standard $L^2$ space, together with the corresponding Sobolev spaces, in the usual way. Let the initial data $\{w_0, w_1\}$ belong to $E_0$. Then there exists a function $w(t)$ that is a weak solution of the system under consideration (see [1, 2]). By introducing the following lemma, let us examine the system given by Eqs. (1)-(3).
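As a quick numerical illustration of the plate-rigidity formula $D = Eh^3/(12(1-\nu^2))$ recalled above, the following sketch evaluates $D$ for assumed, steel-like parameter values (the numerical values are illustrative assumptions, not data from the paper):

```python
def plate_rigidity(E, h, nu):
    """Flexural rigidity D = E h^3 / (12 (1 - nu^2)) of a thin plate.

    E  -- modulus of elasticity [Pa]
    h  -- plate thickness [m]
    nu -- Poisson's ratio (dimensionless)
    """
    return E * h ** 3 / (12.0 * (1.0 - nu ** 2))


# Illustrative steel-like values (assumed, not taken from the paper):
D = plate_rigidity(E=210e9, h=0.01, nu=0.3)
print(round(D, 2))  # -> 19230.77 (N*m)
```

Note how strongly $D$ depends on the thickness $h$: halving $h$ reduces the rigidity by a factor of eight.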
Lemma 2.1. Let $w^\varepsilon$ satisfy the system given by Eqs. (1)-(3) corresponding to the control function $f^\varepsilon(x, y, t)$, and let $w^o$ be the optimal state function corresponding to the optimal control function $f^o(x, y, t)$. The difference function is defined by $\Delta w(x, y, t) = w^\varepsilon(x, y, t) - w^o(x, y, t)$. Note that $\Delta w(x, y, t)$ satisfies the difference equation obtained from Eq. (1), homogeneous boundary conditions, and the zero initial conditions $\Delta w(x, y, 0) = 0$, $\Delta w_t(x, y, 0) = 0$. Then $\Delta w \to 0$ as $\varepsilon \to 0$.

Proof. Let $(x_1, y_1, t_1), \ldots, (x_p, y_p, t_p)$ be $p$ arbitrary points in the open region $Q$, and let $\varepsilon_j$ be coefficients chosen so that the regions $R_j = [x_j, x_j + \sqrt{\varepsilon_j}] \times [y_j, y_j + \sqrt{\varepsilon_j}] \times [t_j, t_j + \sqrt{\varepsilon_j}]$ do not intersect for $1 \le j \le p$. Let us define an energy integral as in [7, 25]. Eq. (9) can be rewritten in the form of Eq. (10). Integrating by parts and using the homogeneous boundary conditions given by Eq. (7), Eq. (10) becomes an integral over $Q$ involving $\Delta f(x, y, t)$ and $\Delta w_t$. Applying the Cauchy-Schwarz inequality to the space integral and taking the supremum of both sides of Eq. (11) leads to Eq. (12), where $O(r)$ denotes a quantity such that $\lim_{r \to 0} O(r)/r$ is bounded. By means of Eq. (12), a corresponding inequality is obtained for each $t \in [0, t_f]$. Because $4/3 > 1$, the resulting terms of order $\varepsilon^{4/3}$ vanish faster than $\varepsilon$, and because the coefficients of Eq. (1) are bounded away from zero, the conclusion of Lemma 2.1 follows from Eq. (13).

Now observe that, in order to preserve the uniqueness of the solution, the control function $f$ corresponding to the unique solution $w$ must be unique. A system that has a unique solution and a unique control function is called observable. The Hilbert Uniqueness Method shows that observability is equivalent to controllability [14, 24]; hence the system under consideration is controllable. The Hilbert Uniqueness Method was introduced by J. L. Lions in 1988 to show that observability is equivalent to controllability.
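The key estimate in the proof above rests on the Cauchy-Schwarz inequality, $|\langle u, v\rangle| \le \|u\|\,\|v\|$. As a minimal sanity check, the following sketch verifies the inequality in a discrete $L^2$ analogue for random vectors (the vectors are arbitrary illustrations, not quantities from the paper):

```python
import math
import random


def inner(u, v):
    """Discrete L^2 scalar product <u, v> = sum_i u_i v_i."""
    return sum(a * b for a, b in zip(u, v))


def norm(u):
    """Discrete L^2 norm ||u|| = sqrt(<u, u>)."""
    return math.sqrt(inner(u, u))


random.seed(0)
u = [random.uniform(-1.0, 1.0) for _ in range(100)]
v = [random.uniform(-1.0, 1.0) for _ in range(100)]

# Cauchy-Schwarz: |<u, v>| <= ||u|| * ||v||
assert abs(inner(u, v)) <= norm(u) * norm(v)
print("Cauchy-Schwarz holds for this sample")
```

The same bound, applied to the space integral in the proof, is what allows the supremum in Eq. (11) to be taken.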
This method has aroused great interest among scientists from both communities: partial differential equations and general dynamical systems [23].

Optimal control problem and maximum principle
The aim of the optimal control problem is to determine an optimal control function $f(x, y, t)$ that minimizes the dynamic response of the plate at the terminal time $t_f$ with the minimum expenditure of control effort. The performance index functional is therefore defined in terms of the weighted dynamic response of the plate and the control expenditure accumulated over $(0, t_f)$, where $\mu_1, \mu_2 \ge 0$ with $\mu_1 + \mu_2 \ne 0$, and $\mu_3 > 0$, are weighting constants. The first integral in Eq. (14) is the weighted dynamic response of the plate, and the last integral represents the measure of the total control expense accumulated over $(0, t_f)$. The set of admissible control functions is given by $f_{ad} = \{ f(x, y, t) \mid f(x, y, t) \in L^\infty(0, t_f; H),\ |f| < c < \infty \}$. The optimal control problem is to minimize the performance index over $f_{ad}$ subject to Eqs. (1)-(3). In order to derive the maximum principle, let us introduce an adjoint variable $\psi(x, y, t)$ satisfying an adjoint equation and boundary conditions. A maximum principle in terms of a Hamiltonian functional is derived as a necessary condition for the optimal control function. In [3], it is proved that, under certain convexity assumptions on the performance index functional, which are satisfied by Eq. (14), the maximum principle is also a sufficient condition for the control function to be optimal. By deriving the maximum principle, the nonlinear control problem is reduced to solving a system of partial differential equations for the state variable and the adjoint variable, subject to boundary, initial and terminal conditions. Moreover, the maximum principle gives an explicit expression for the optimal control function in terms of the adjoint variable, and thereby relates the optimal control to the state variable implicitly. The maximum principle can be stated as follows:
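Because the displayed formula for the performance index did not survive extraction, the following is a plausible reconstruction of its standard quadratic form; the weight symbols $\mu_1, \mu_2, \mu_3$, the square domain $(0,\ell)^2$ and the functional name $J$ are assumptions consistent with the surrounding text, not necessarily the paper's original notation:

```latex
J(f) = \int_{0}^{\ell}\!\int_{0}^{\ell}
         \left[ \mu_{1}\, w^{2}(x, y, t_{f}) + \mu_{2}\, w_{t}^{2}(x, y, t_{f}) \right]
       \mathrm{d}x\,\mathrm{d}y
     + \mu_{3} \int_{0}^{t_{f}}\!\int_{0}^{\ell}\!\int_{0}^{\ell}
         f^{2}(x, y, t)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}t,
\qquad \mu_{1}, \mu_{2} \ge 0,\quad \mu_{1} + \mu_{2} \ne 0,\quad \mu_{3} > 0 .
```

The first double integral penalizes the terminal displacement and velocity; the triple integral is the control-expenditure penalty term.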

Theorem 3.1 (Maximum principle). If

$H[t; \psi, f^o] \ge H[t; \psi, f(x, y, t)]$ (19)

for every admissible control $f$, in which $\psi = \psi(x, y, t)$ satisfies the adjoint system introduced above, then $f^o(x, y, t)$ is the optimal control function.
Proof. Before starting the proof, let us introduce an operator and its adjoint. The deviations are given by $\Delta w = w - w^o$ and $\Delta w_t = w_t - w^o_t$, in which $w^o$ is the optimal displacement. The operator equation $\Upsilon(\Delta w)(x, y, t) = \Delta f(x, y, t) + \cdots$ is subject to the boundary conditions

$\Delta w(0, y, t) = 0,\ \Delta w(\ell, y, t) = 0,\ \Delta w_{xx}(0, y, t) = 0,\ \Delta w_{xx}(\ell, y, t) = 0$, (23a)

$\Delta w(x, 0, t) = 0,\ \Delta w(x, \ell, t) = 0,\ \Delta w_{yy}(x, 0, t) = 0,\ \Delta w_{yy}(x, \ell, t) = 0$, (23b)

and the initial conditions $\Delta w(x, y, t) = \Delta w_t(x, y, t) = 0$ at $t = 0$ (24). Consider the functional obtained in Eq. (25) by integrating the adjoint variable against $\Delta f(x, y, t)$ over $Q$. Integrating the left-hand side of Eq. (25) by parts, twice with respect to $t$ and four times with respect to $x$, and using Eqs. (23)-(24), one obtains the relation in Eq. (28). Let us expand the $w^2(x, y, t_f)$ and $w_t^2(x, y, t_f)$ terms in Taylor series about $(w^o)^2(x, y, t_f)$ and $(w^o_t)^2(x, y, t_f)$, respectively. Then one observes

$w^2(x, y, t_f) - (w^o)^2(x, y, t_f) = 2\, w^o(x, y, t_f)\, \Delta w(x, y, t_f) + r$, (29a)

where $r = (\Delta w)^2 > 0$ and $r_t = (\Delta w_t)^2 > 0$. Substituting Eq. (29) into Eq. (28) yields the maximum principle. By taking the first variation of the Hamiltonian, the optimal control function is obtained as

$f(x, y, t) = \dfrac{\psi(x, y, t)}{2 \mu_3}$. (32)
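The final step, Eq. (32), follows from stationarity of the Hamiltonian with respect to the control. A sketch of this step, assuming a Hamiltonian density of the standard form $H = \psi f - \mu_3 f^2$ (the symbols $\psi$ for the adjoint variable and $\mu_3$ for the control weight are assumptions, since the original notation did not survive extraction), reads:

```latex
\frac{\partial H}{\partial f}
  = \frac{\partial}{\partial f}\left( \psi f - \mu_{3} f^{2} \right)
  = \psi - 2 \mu_{3} f = 0
\quad \Longrightarrow \quad
f(x, y, t) = \frac{\psi(x, y, t)}{2 \mu_{3}} .
```

Since $\partial^{2} H / \partial f^{2} = -2\mu_{3} < 0$, this stationary point is indeed a maximum of the Hamiltonian, consistent with Theorem 3.1.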

Conclusion
The nonlinear optimal vibration control problem is defined by a nonlinear partial differential equation and a performance index functional chosen as a quadratic functional of the displacement, velocity and control function. Well-posedness and controllability of the system are discussed. In applying the maximum principle, no linearized form of the nonlinear term is used. Hence, it is shown that the optimal control of a nonlinear system, together with an analytical expression for the optimal control function, can be obtained without linearization of the nonlinear term. The results of the present study can be extended to other nonlinear control systems, so that nonlinear control problems can be solved while preserving the crucial features of the original system and without lengthy computation.